| playlist | file_name | content |
|---|---|---|
MIT_Learn_Differential_Equations | The_Calculus_You_Need.txt | GILBERT STRANG: OK well, here we're at the beginning. And I think it's worth thinking about what we know. Calculus. Differential equations is the big application of calculus, so it's kind of interesting to see what part of calculus, what information and what ideas from calculus, actually get used in differential equations. And I'm going to show you what I see, and it's not everything by any means, it's some basic ideas, but not all the details you learned. So I'm not saying forget all those, but just focus on what matters. OK. So the calculus you need is my topic. And the first thing is, you really do need to know basic derivatives. The derivative of x to the n, the derivative of sine and cosine. Above all, the derivative of e to the x, which is e to the x. The derivative of e to the x is e to the x. That's the wonderful equation that is solved by e to the x. dy dt equals y. We'll have to do more with that. And then the inverse function related to the exponential is the logarithm, with that special derivative of 1/x. OK. But you know those. Secondly, out of those few specific facts, you can create the derivatives of an enormous array of functions using the key rules. The derivative of f plus g is the derivative of f plus the derivative of g. Derivative is a linear operation. The product rule, fg prime plus gf prime. The quotient rule. Who can remember that? And above all, the chain rule. The derivative of that chain of functions, that composite function, is the derivative of f with respect to g times the derivative of g with respect to x. It's chains of functions that really blow open the set of functions we can deal with. OK. And then the fundamental theorem. So the fundamental theorem involves the derivative and the integral. And it says that one is the inverse operation to the other. The derivative of the integral of a function is this. 
Here is y, and the integral goes from 0 to x. I don't care what that dummy variable is. I'll change that dummy variable to t. Whatever. I don't care. I'll use t to show the dummy variable. The x is the limit of integration. I won't discuss that fundamental theorem, but it certainly is fundamental and I'll use it. Maybe that's better. I'll use the fundamental theorem right away. So-- but remember what it says. It says that if you take a function, you integrate it, you take the derivative, you get the function back again. OK, can I apply that to a really-- I see this as a key example in differential equations. And let me show you the function I have in mind. The function I have in mind, I'll call it y, is the integral from 0 to t. So it's a function of t then, time. It's the integral of this, e to the t minus s, times some function, q of s. That's a remarkable formula for the solution to a basic differential equation. With this, that solves the equation dy dt equals y plus q of t. So when I see that equation-- and we'll see it again and we'll derive this formula-- but now I want to just use the fundamental theorem of calculus to check the formula. As we derive the formula-- well, it won't be wrong, because our derivation will be good. But also, it would be nice, I just think, if you plug that in to that differential equation, to see it's solved. OK, so I want to take the derivative of that. That's my job. And that's why I do it here, because it uses all the rules. OK, to take that derivative, I notice the t is appearing there in the usual place, and it's also inside the integral. But this is a simple function. I'm going to take e to the t outside the integral. e to the t. So I have a function of t times another function of t. I'm going to use the product rule and show that the derivative of that product is-- one term will be y and the other term will be q. 
Can I just apply the product rule to this function that I've pulled out of a hat? But you'll see it again. OK, so it's a product of this times this. So the derivative dy dt is-- the product rule says take the derivative of the first factor, that is e to the t, times the second. Plus, the first thing times the derivative of the second. Now I'm using the product rule. You have to notice that e to the t came twice, because it is there and its derivative is the same. OK, now, what's the derivative of that? Fundamental theorem of calculus. We've integrated something, I want to take its derivative, so I get that something. I get e to the minus t, q of t. That's the fundamental theorem. Are you good with that? So let's just look and see what we have. First term was exactly y. Exactly what is above, because when I took the derivative of the first guy, it didn't change, so I still have y. What do I have here? e to the t times e to the minus t is one. So e to the t cancels e to the minus t and I'm left with q of t. Just what I want. So the two terms from the product rule are the two terms in the differential equation. I think, as you saw, the fundamental theorem was needed right there to find the derivative of what's in that box, what's in those parentheses. I just like that use of the fundamental theorem. OK, one more topic of calculus we need. And here we go. So it involves the tangent line to the graph. This tangent to the graph. So it's a straight line, and what we need is y of t plus delta t. That's taking any function-- maybe you'd rather I just called the function f. A function at a point a little beyond t is approximately the function at t plus the correction-- plus a delta f, right? A delta f. And what's the delta f, approximately? It's approximately delta t times the derivative at t. There's a lot of symbols on that line, but it expresses the most basic fact of differential calculus. 
If I put that f of t on this side with a minus sign, then I have delta f. If I divide by that delta t, then the same rule is saying that this is approximately df dt. That's a fundamental idea of calculus, that the derivative is quite close. The derivative at the point t is close to delta f divided by delta t, the change over a short time interval. OK, so that's the tangent line, because it starts with-- that's the constant term. It's a function of delta t, and that's the slope. Just draw a picture. So I'm drawing a picture here. So let me draw a graph of-- oh, there's the graph of e to the t. So it starts up with slope 1. Let me give it a little slope here. OK, the tangent line, and of course it comes down here, not below. So the tangent line is that line. That's the tangent line. That's this approximation to f. And you see-- here is t equals 0, let's say. And here's t equals delta t. And you see, if I take a big step, my line is far from the curve. And we want to get closer. So the way to get closer is we have to take into account the bending. The curve is bending. Which derivative tells us about bending? The second derivative. So the next term is one half times delta t squared times the second derivative. It turns out a one half shows up in there. So this is the term that changes the tangent line to a tangent parabola. It notices the bending at that point, the second derivative at that point. So it curves up. It doesn't follow it perfectly, but much better than the tangent line. So this is the line. Here is the parabola. And here is the function, the real one. OK. I won't review the theory there that pulls out that one half, but you could check it. Now finally, what if we want to do even better? Well, we need to take into account the third derivative, and then the fourth derivative, and so on. And if we get all those derivatives-- all of them, that means-- we will be at the function, because that's a nice function, e to the t. 
We can recreate that function from knowing its height, its slope, its bending, and all the rest of the terms. So there are a whole lot more terms-- infinitely many terms. The good way to think of that one over two, one half, is one over two factorial, two times one. Because the general term is one over n factorial, times delta t to the nth-- pretty small-- times the nth derivative of the function. And keep going. That's called the Taylor series, named after Taylor. Kind of frightening at first. It's frightening because it's got infinitely many terms. And the terms are getting a little more complicated. For most functions, you really don't want to compute the nth derivative. For e to the t, I don't mind computing the nth derivative, because it's still e to the t, but usually this isn't so practical. Tangent line, very practical. Tangent parabola, quite practical. Higher order terms, much less practical. But the formula is beautiful, because you see the pattern. That's really what mathematics is about, patterns, and here you're seeing the pattern in the higher and higher terms. They all fit that pattern, and when you add up all the terms, if you have a nice function, then the approximation becomes perfect and you would have equality. So to end this lecture: approximate becomes equal, provided we have a nice function. And those are the best functions of mathematics, and the exponential is of course one of them. OK, that's calculus. Well, part of calculus. Thank you. |
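The product-rule-plus-fundamental-theorem check that Strang does on the board can be reproduced symbolically. A minimal sketch, assuming SymPy is available; the source term q(s) = sin(s) is an arbitrary choice here, any smooth q works:

```python
import sympy as sp

t, s = sp.symbols('t s')
q = sp.sin(s)  # hypothetical source term; any smooth q(s) would do

# y(t) = integral_0^t e^(t-s) q(s) ds, written with e^t pulled outside,
# exactly as in the lecture
y = sp.exp(t) * sp.integrate(sp.exp(-s) * q, (s, 0, t))

# dy/dt gives the two product-rule terms; subtracting y and q(t)
# should leave nothing, confirming dy/dt = y + q(t)
residual = sp.simplify(sp.diff(y, t) - y - q.subs(s, t))
print(residual)  # 0
```

Differentiating `y` produces one term equal to y (from the e^t factor) and one term equal to q(t) (from the fundamental theorem plus the cancelling exponentials), just as the transcript argues.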
MIT_Learn_Differential_Equations | Solution_for_Any_Input.txt | PROFESSOR: OK. Finally, I'm going to solve this first order linear differential equation with a formula that works for any source term. So we've solved it for specific, nice, special source terms I'll remember later. But now we want a formula for the solution to that equation, period. And we want to understand the formula. So now, write the formula down, and then let's see why it's right. And then of course, we could put it into the equation and confirm that it's right. OK, the formula is going to be-- this is the big formula, you could say, for first order linear equations. So y of t. First we have the result of the amount-- I'm thinking of this as a balance. The money in the bank is y. It's increasing at this rate, because of interest being added. And it's increasing at this rate, because of new deposits being added. And oh, maybe I should say about those deposits, I'm not thinking of a deposit once a year, or once a month, or once a minute, even. Deposit continuously. We're talking differential equations here. The clock is always running. So you're-- theoretically, at least-- you're depositing at a rate of so much per second, all the time. OK. So first, the y of 0, that's a once in a lifetime deposit to start the account. And it grows, as we know, with the interest rate as the exponent. So that's the null solution that matches the initial condition. Null solution, because there are no deposits here. This solves this part of the equation, the null part. But now I want to add in a particular solution, and it'll be the particular solution that matches the deposits. And I want to understand this formula. So here's the formula. Each deposit goes in at some time s, and then it grows. Once you've made the deposit, it's going to grow exponentially. With interest, it will grow over the remaining time. So here's the formula. 
You make deposits at any time, s, between s equal 0 and s equal t. So s is the running clock. t is the time we look at our account and see what's in it. And this deposit is made at time s, and then it grows over the remaining time, from s to t. So it grows by a factor e to the a, t minus s. That's the key. And now we add up all those deposits with their growth. So that addition, for continuous time, is integration. That's the whole idea of the integral: add up continuously. So there's my formula that I'm hoping you will admire. The null solution that grows out of the initial condition. The particular solution that grows out of the source term, q. Of course I've used q of t all the way. Here I call it q of s. I have to introduce an integration variable s, which goes from the start of these deposits until the current time, and grows like that. So that's the formula. I could put a big box around it. Let me start a box anyway. I could check that it's correct. But I hope you see why it's right. And let me make some comment on the examples of q where we didn't have this formula. We just went for it directly. So we started with-- these were special. Especially nice, you could say. q of t equal a constant. That was the first video. Then we did q of t equal an exponential. That was the second one. And then we found the exponential response. Then I did oscillation. Cosine-- well, let me put it over here-- cosine of omega t. Or plus sine of omega t. You remember, those two kind of had to come together. We couldn't stick with cosines alone, because the derivative of a cosine here would give us a sine, so sines got into the picture. So we found the formula for that. That took a little more work. And in fact we did it by a real method and a complex method. And now, are there any other nice functions in calculus? Well, in the next video, I'm going to tell you about two more functions that I think are very nice. Step function. 
So the deposit-- we don't make any deposits up until some time, and then we start, then we change to a constant. So the step function is 0 and then a constant. And also-- and this is the especially interesting one-- a delta function. And what is a delta function? Which doesn't always come into the basic differential equations course, but it belongs there. Because in a model of reality, a delta function is like a golf club hitting a golf ball. In an instant something happened. Or a baseball bat hitting a baseball. It gives it an instant velocity. It's an impulse. So a step function is like a light switch, off and then on. A delta function is all in one instant, an impulse, and you'll see that coming. And maybe I should also include-- let's see, if I've got a constant, maybe I should include t, t squared, and so on. Powers of t are not too bad. So for all of these special ones that I call the nice functions-- and we could multiply t times e to the st, that would still be nice-- we could have a simple formula. Those are the ones that give simple, direct, interesting answers in themselves. And this is the one that gives the general answer. And if I put q be any one of those in this general formula, I'll get the special one. You can see how, if I put a constant in there, that will be exactly what we found at the very, very beginning for the response to a constant. Right. So this is the general expression. I feel I should say more about it. I guess there are two more things I want to add about this general formula. One is, I should check that it's correct. But I hope you saw it had to be right. This input went in, it grew, everything was linear. So I could just add the separate growths, the separate results, to find out what the balance was at the final time, t. But still, I can check it. And I can derive it. That step is often done by what's called an integrating factor. 
So there will be a quick video that shows you how an integrating factor leads to that formula. Right in this video, I'm just saying I was led to it by common sense. But I can check that it's correct. So let me check that it's correct. OK. This part I can see is correct. So I'd like to work with that one and show that that is a particular solution to the differential equation. Can I do that? I'm looking now at this y particular, and I'll factor out the e to the at, because it doesn't depend on s. It's not involved in the integration. So this is e to the at, times the integral from s equals 0 to s equals t of e to the minus as-- that has an s in it-- q of s, ds. This is all the stuff that depends on what time the deposit was made, time s. And what time we're looking at the balance, the later time, t. OK. This is a product of one term times another term. And when I put it into the differential equation, I'll use the product rule. So the derivative of that product will have two terms from the product rule. And those two terms will be-- if all goes well-- the two terms in the differential equation. So can I take the derivative of this by the product rule? I take the derivative of the first thing, so now I'm computing dy dt, by ordinary calculus. So the derivative of that is a, e to the at, times the second term. That's just ay. That's a times the y that we had before. Because the derivative of e to the at, by the chain rule, brings down an a, and we have that. But now the product rule says also I must take that term times the derivative of this term. And that looks messier. What's the derivative of that function? It's a function of t. But it's an integral. What's its time derivative? That's the last piece of the product rule. Well, look what it is. It's the integral up to time t of some function. And the fundamental theorem of calculus says that the derivative of the integral is the original function, right? 
So the derivative of this integral is the original function that we integrated, at time t. So it's e to the minus at, q of t. That's the second term from the product rule. And OK, this was the ay, perfectly. And what do I have here? e to the at cancels e to the minus at, and it's the source term q of t. So I have ay plus q of t, the correct right-hand side for the differential equation. So this formula-- let me bring it down once more-- is sort of the climax of this beginning set of videos on first order linear differential equations. So what's coming now are step and delta functions. I just want to speak about those. And that takes a few minutes separately. And then another step will be to allow the interest rate to change. We made our problem simple because we've kept a constant interest rate. So I'll let the interest rate change. And then after that comes the real step to non-linear equations. So delta functions, varying interest rate, and then non-linear equations. And then on to second order equations and all the rest of the theory of differential equations. Good. Thank you. |
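The general formula y(t) = e^(at) y(0) + ∫₀ᵗ e^(a(t−s)) q(s) ds can be checked the same way the lecture does, and compared against the constant-source answer from the first video. A small sketch, assuming SymPy; the numbers a = 1/2, y(0) = 3, q = 5 are made-up values for illustration:

```python
import sympy as sp

t, s = sp.symbols('t s')
a, y0, q0 = sp.Rational(1, 2), 3, 5   # hypothetical rate, initial deposit, source

# y(t) = e^(at) y(0) + integral_0^t e^(a(t-s)) q(s) ds, with constant q(s) = q0
y = sp.exp(a*t)*y0 + sp.integrate(sp.exp(a*(t - s))*q0, (s, 0, t))

# The formula solves dy/dt = a y + q ...
assert sp.simplify(sp.diff(y, t) - a*y - q0) == 0
# ... and collapses to the constant-source solution y = (y0 + q/a) e^(at) - q/a
assert sp.simplify(y - ((y0 + q0/a)*sp.exp(a*t) - q0/a)) == 0
print("formula checks out")
```

Swapping in q(s) = e^(cs) or cos(ωs) recovers the exponential and oscillating responses from the earlier videos, which is the point of the general expression.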
MIT_Learn_Differential_Equations | The_Matrix_Exponential.txt | GILBERT STRANG: OK. We're still solving systems of differential equations with a matrix A in them. And now I want to create the exponential. It's just natural to produce e to the A, or e to the A t. The exponential of a matrix. So if we have one equation, small a, then we know the solution is an e to the A t, times the starting value. Now we have n equations with a matrix A and a vector y. And the solution should be, at time t, e to the A t, times the starting value. It should be a perfect match with this one, where this had a number in the exponent and this has a matrix in the exponent. OK. No problem. We just use the series for e to the A t. We plug in a matrix instead of a number. So the identity, plus A t, plus 1/2 A t squared, plus 1/6 of A t cubed, forever. It's the same. It's the exponential series. The most important series in mathematics, I think. And it gives us an answer. And that answer is a matrix. Everything here, every term, is a matrix. OK. Now, is that the right answer? We check that by putting it into the differential equation. So I want to put that solution into the equation. So I need to take the derivative. The derivative of this is the derivative of-- that's a constant. The derivative of that is A. The derivative of this is 1/2. I have an A squared, and I have a t squared. The derivative of t squared is 2t, so that'll just be a t. The 2 and the 2 cancel. OK. Now I have A cubed here. t cubed? The derivative of t cubed is 3t squared, so I have a t squared. And the 3 cancels the 3 and the 6, and leaves 1 over 2 factorial, and so on. And I look at that. And I say it's very much like the one above. Look. This series is just A times this one. Multiply the top one by A. A times I is A. A times A t is A squared t. Term by term, it just has a factor A. So it's A e to the A t, is the derivative of my matrix exponential. It brings down an A. Just what we want. Just what we want. 
So then if I add a y of 0 in here, that's just a constant vector. I'll have a y of 0. I'll have a y of 0 here. When I put this into the differential equation, it works. Now, is it better than what we had before, which was using eigenvalues and eigenvectors? It's better in one way. This exponential, this series, is totally fine whether we have n independent eigenvectors or not. We could have repeated eigenvalues. I'll do an example. So with repeated eigenvalues and missing eigenvectors, e to the A t is still the correct answer. But if we want to use eigenvalues and eigenvectors to compute e to the A t, because we don't want to add up an infinite series very often, then we would want n independent eigenvectors. So what am I saying? I'm saying-- all right, suppose we have n independent eigenvectors. And we know that that means, in that case, A is V times lambda times V inverse. And we can write V inverse because the matrix V has the eigenvectors. This is the eigenvector matrix. If I have n independent eigenvectors, that matrix is invertible. I have that nice formula. And now I can see what e to the A t is. It's always identity plus A t plus the rest of the series, and I'm now going to use the diagonalization, the eigenvectors, and the eigenvalues for A. So I'm doing the good case now, when there is a full set of independent eigenvectors. Then the A t is V lambda V inverse t. So that's I, plus A t, plus 1/2 A t squared. Right? So I need A squared. Everybody remembers what A squared is. A squared is V lambda V inverse, times V lambda V inverse. And those cancel out to give V lambda squared V inverse, times t squared, and so on. You remember this A squared, so I'll take that away. And look at what I've got. Yes. Factor V out of the start, and factor V inverse out of the end. And in here I have V times V inverse is I, so that's fine. V times V inverse, I have a lambda t. 
V and a V inverse, so I have a 1/2 lambda squared t squared. And so on, times V inverse. This is all just what we hope for. We expect that a V comes out at the far left, at the front. This V inverse comes out at the far right. And what do you see in the middle? So this is now my formula for e to the A t: it's V, and what do I have there? I have the exponential series for lambda t. So it's V, e to the lambda t, V inverse. And what is e to the lambda t? Let's just understand the matrix exponential when the matrix is diagonal, the best possible matrix. Lambda is diagonal. All these matrices are diagonal with lambdas. So that'll be e to the lambda 1t down to e to the lambda nt. I'm not doing anything brilliant here. I'm just using the standard diagonalization to produce our exponential from the eigenvector matrix and from the eigenvalues. So I'm just taking the exponentials of the n different eigenvalues. So e to the A t-- this would lead to e to the A t times y at 0. y of 0 is some combination, and then there's a C1, e to the lambda 1t, coming from here, times an eigenvector x1, plus C2 e to the lambda 2t x2, and so on. That's the solution that we had last time. That's the solution using eigenvalues and eigenvectors. Now. Can I get something new here? Something new will be, suppose there is not a full set of n independent eigenvectors. e to the A t is still OK. But this formula is no good. That formula depends on V and V inverse. So suppose we have an example. All that is very nice. That's what we expect. But we could have a matrix like this one. A equals-- well, here's an extreme case. What are the eigenvalues of that matrix? It's a triangular matrix, so the eigenvalues are on the diagonal: 0 and 0. The eigenvalue 0 is repeated. It's a double eigenvalue. And we hope for two eigenvectors, but we don't find them. That has only one line of eigenvectors. 
It only has the one eigenvector x1 equals 1, 0, I think. If I multiply that A times that x1, it gives me 0 times x1. That's an eigenvector. Well, because the eigenvalue is 0, I'm looking for the null space. That x1 is in the null space, but the null space is only one-dimensional. Only one eigenvector. Missing an eigenvector. Still, I can do e to the A t. That's still completely correct. That series will work. So to do this series I need to know A squared. So I'm actually going to use the series, but you'll see that it cuts off very fast. A squared, if you work that out, is all 0's. So our e to the A t is just I, plus A t, plus STOP. A squared is all 0's. A cubed is all 0's. So the matrix e to the A t is the identity, plus A times t. A is this, so times t is going to put a t there. There you go. That's a case of the matrix exponential, which would lead us to the solution of the equations. Of course, it's a pretty simple exponential. But it comes from pretty simple equations. The equations dy dt-- that system of two equations, with that matrix in it. Our system of equations is just dy1 dt-- I have a 1 there, so it would be equal to y2. And dy2 dt is 0, from the second row. Well, that's pretty easy to solve. In fact, this tells you how to solve-- you could naturally ask the question, how do we solve differential equations when the matrix doesn't have n eigenvectors? Here's an example. This matrix has only one eigenvector. But the equation, we just solved by, you could say, back substitution. This gives y2 equal to a constant. And then that equation, dy1 dt equal to that constant, gives me y1 equals t times the constant. That's what I'm seeing. Oh. Yeah. Are you surprised to see a t show up here? Normally I don't see a t in matrix exponentials. But in this repeated case, that's the t that we're always seeing when we have repeated solutions. Everybody remembers that when we have second-order equations, and the two exponents are the same, we only get one solution of that form, e to the st. 
And we have to look for another one. And that other one is? t e to the st. It's that same t there. OK. There is an example of how, for a matrix with a missing eigenvector, the exponential pops a t in. And if I had two missing eigenvectors-- shall I just show you an example with two missing eigenvectors? Let A be the 3 by 3 matrix with 1's above the diagonal: 0, 1, 0; 0, 0, 1; 0, 0, 0. There's a matrix with three 0 eigenvalues, but only one eigenvector. So it's missing two eigenvectors. And in e to the A t here, I would see 1, 1, 1 on the diagonal, t, t above that, and probably I'll see a 1/2 t squared there in the corner. A little bit like that. But one step worse. Because the triple eigenvalue-- well, that's not going to happen very often in reality. But we see what it produces. It produces a t squared as well as the t's. OK. So, the matrix exponential gives a beautiful, concise, short formula for the solution. And it gives a formula that's correct, even in the case of missing eigenvectors. Thank you. |
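Both defective examples in this lecture can be checked numerically: the series really does stop after the linear (or quadratic) term. A short sketch, assuming NumPy and SciPy's `expm` are available; the 3 by 3 matrix below is the standard single-Jordan-block reading of the example on the board:

```python
import numpy as np
from scipy.linalg import expm

t = 0.7  # any time works

# Defective 2x2 from the lecture: double eigenvalue 0, only one eigenvector
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
# A^2 = 0, so the series stops: e^(At) = I + At
assert np.allclose(expm(A*t), np.eye(2) + A*t)

# 3x3 with a triple eigenvalue 0 and one eigenvector: the 1/2 t^2 appears
B = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
assert np.allclose(expm(B*t), np.eye(3) + B*t + 0.5*(B @ B)*t**2)
print("series cuts off exactly as described")
```

The `expm` routine computes the matrix exponential directly, with no eigenvectors needed, which is the point of the series definition.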
MIT_Learn_Differential_Equations | Second_Order_Equations.txt | GILBERT STRANG: OK, it's time to move on to second order equations. First order equations, we've done pretty carefully. Second order equations are a step harder. But they come up in nature, they come up in every application, because they include an acceleration, a second derivative. OK, so this would be a second order equation, because of that second derivative. I'm often going to have constant a, b, and c. We have enough difficulty without allowing those to change. So a, b, c constants, and doing the null solution to start with. Later, there'll be a forcing term on the right-hand side. But today, for this video, null solutions. And the point is, what's new is that there are two null solutions. So y null will be a combination, and they're both exponentials. Having constant coefficients there means exponentials in the solution. So e to some exponent, and another one to a, hopefully, different exponent. Sometimes s1 equals s2-- that'll be a special case, and we'll have a slight change. But this is typical. So we have two constants, c1 and c2, in the null solution. And we need two initial conditions to determine those constants. So previously, for a first order equation, we were given y of 0. Now, when we have acceleration, we also give the initial velocity, y prime of 0. May I use prime as a shorthand for derivative? y prime of 0 is dy dt, at 0. So two conditions will, at the right time, determine those two constants. And just to say again, the second derivative will be y double prime. And in physics, it represents acceleration-- change in velocity, change in y prime, is y double prime. And in a graph of a function, y double prime shows up in the bending of the graph. Because bending is a change in slope. And the slope is y prime, the first derivative. So to measure changes in y prime, which will bend the graph, my chalk would be a tangent line. 
But if that changes, that gives us a y double prime, a second derivative. OK, so I'm ready for some examples. And the first example-- the most basic equation of motion in physics and engineering, I would say-- is called harmonic motion. And b is 0, that's the key point. b is 0. It's Newton's law. And so a will be the mass m. y double prime, second derivative. b is 0. Later, b will be a damping term, a friction term, a resistance term. But let's have that 0. So we're going to have perpetual motion here. Plus the force-- so this is Newton's law. ma is f, f equal ma. The force is proportional-- with a minus sign, so it's going to come on this side as a plus-- proportional to y. There's the equation. No y prime term. my double prime, plus ky, equals 0. Starting from an initial position and an initial velocity, it's like a spring going up and down, or a clock pendulum going back and forth. And we'll see it. So we want to solve that equation. Do we see solutions to that? Suppose m and k were 1. I'm looking for a second derivative plus the function is 0. Second derivative is minus the function. I immediately think of sine t and cosine t. Sines and cosines, because the first derivative of a sine is the cosine, and the second derivative is minus the sine. It gives us the minus sign here, the plus sign there, 0. So the special solutions here-- this is the null solution I'm finding-- are, let me call them, c1 times a cosine. And now I have to figure out the cosine of what? I want the cosine to satisfy, to be a null solution of, my equation. Let me put it in. It's the cosine of the square root of k over m, times t. And you have to see that if I take two derivatives of the cosine, that will produce minus the cosine, which I want. And because of that, the chain rule will bring out this square root twice. So it will bring out the factor k over m for y double prime. 
And that factor, k over m-- the m's will cancel and I'll have the k that matches that k. It's a solution. And the other solution is just like it. It's the sine. Sine of the square root of k over m, times t. That's worth putting a box around. That's what I mean by free harmonic motion. Something is just oscillating. In rotation problems, something is just going around a circle at a constant speed. And notice, these are not the same as those. Cosines are related to exponentials, but not identical. So I could write the answer this way, using cosine and sine. Or, as you'll see, I can write the same formula using exponentials, complex exponentials. Everybody remembers the big formula that allows complex numbers in here is Euler's formula, that the exponential of i omega t is the cosine plus i times the sine of omega t. I'll write it again. That's the solution. It's got two null solutions. They're independent. They're different. And we've got two constants. Because our equation is linear, we can safely multiply by any constant, and add solutions, and they stay solutions, because we have a linear equation and 0 on the right-hand side. Good. Of course, we can't write this square root of k over m forever. Let me do what everybody does-- introduce omega. It's omega n. The n here stands for the natural frequency, the frequency that that clock is going at. And that is the square root of k over m. So we could rewrite that equation. Let me rewrite that equation to make it simple. I'll divide by m. No problem to divide by m. So then I have y double prime, plus k over m, which is omega n squared, the natural frequency squared, times y, equals 0. Let's put a box around that one, because you couldn't do better than that. The constant a is 1. The constant b is 0. The constant c is a known omega n squared, depending on the pendulum itself. OK, and the solutions then-- I'll just copy this solution. y null is c1 cosine of omega n t. Of course, omega n is that square root. 
And c2 sine of omega nt. Oh, well, wait a minute. I can figure out what c1 and c2 are, coming from the initial conditions, right? The initial conditions, if I plug in t equals 0, then I want to get the answer y of 0. The known initial condition, the place the pendulum started swinging from, the place the spring, you pull the spring a distance y of 0. You let go. At t equals 0, so I'm plugging in t equals 0, at t equals 0, that's 0. Forget the sine. This is 1. So I discover that c1 should be y of 0. Simple. c1 is y of 0. Because that gives me the right answer, at t equals 0. And what about c2? Can I figure out c2? Well, that's going to involve the initial velocity, the derivative, at t equals 0. Because the derivative of the sine is the cosine, which equals 1 at t equals 0, the derivative of this is minus the sine, which is 0. So that when I'm looking at y prime, the derivative, I'm looking here at t equals 0. And I want y prime of 0. But I don't just want y prime of 0. Do you see that that doesn't have the right derivative, at t equals 0? Because when I take the derivative and omega n-- that constant, you remember that constant-- the derivative of this will bring out an omega n. So I better have an omega n down here to cancel it. And now I've got it. That tells me the motion, forever and ever. Energy is constant. Potential energy plus kinetic energy, I could speak about energy. But I won't. That motion continues forever, free harmonic motion. OK, it goes on and on. OK. And again, I could write this in terms of complex exponentials. But I'm pretty happy with that form. It's hard to beat that form. OK, so what else to do here. First of all, we're going to have cosine omega t's for quite a while. I better draw the graph of that simple, familiar function. And so here's a graph of cosine omega t. So here's t. Here's cosine of omega t. Here's 0. Here's 2-- well, let me see what I get. So I go cosine of omega, cosine of omega t is what I'm drawing, not cosine t.
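The two constants can be sketched in code. This is a minimal check, with omega n, y(0), and y'(0) as arbitrary illustrative values, that c1 = y(0) and c2 = y'(0) divided by omega n reproduce the initial position and the initial velocity:

```python
import math

# Build the free-oscillation solution from initial position y0 and
# velocity v0, then confirm it reproduces both at t = 0.
# omega_n, y0, v0 are arbitrary illustrative values.
omega_n = 3.0
y0, v0 = 2.0, 1.5

def y(t):
    # c1 = y(0); c2 = y'(0)/omega_n, so the omega_n that the chain rule
    # brings out when differentiating the sine term cancels.
    return y0 * math.cos(omega_n * t) + (v0 / omega_n) * math.sin(omega_n * t)

h = 1e-6
print(y(0.0))                      # 2.0: the initial position y(0)
print((y(h) - y(-h)) / (2 * h))    # ~1.5: the initial velocity y'(0)
```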
Cosine t would go 0 to 2 pi. But I have cosine of omega t. So cosine of omega t is what I want to graph. So it starts at 1, and it comes back. It drops, comes back up, comes back to 1. But what is this t final? The period, this t is the period of the oscillation. It's the time it takes for the swing to go up and back. And what is that? That would be-- so I'm graphing cosine of omega t, is what I'm graphing. So omega t starts out at 0, where t is 0. And it goes up to-- so I want omega t, when I get here, this omega t should be 2 pi. Right? Then I've completed the cosine, one rotation around the circle, one movement of the pendulum back and forth, is in that picture now. OK, so omega t is 2 pi, right? Right. The period is t. I think of omega as the circular frequency. Omega is in radians per second. That's the units. And units truly are important to keep track of. Omega, this is omega, radians per second, when I multiply by the period, the t in seconds, I get 2 pi radians. OK now, in engineering and in everyday use, there's another frequency called f, for frequency probably. OK, so you should know about f. And its frequency is in hertz. So f is measured in hertz, H-E-R-T-Z, named after the guy. Not the tomato ketchup guy, but-- oh, that's Heinz anyway. Sorry about that. Hertz -- not the car, that's what I was trying to say, but the German guy who was involved, early, with this stuff. So what is f? f times t is 1. Instead of dealing with 2 pi, which counts radians, the 1 just counts complete loops, complete oscillations. So f compared to omega, f is smaller by a factor 2 pi. So f times t is 1. f is 1 over t. Omega is 2 pi over t. So putting these together, all I'm saying is, omega is 2 pi times f. So when we say we're getting 60 cycles-- so that's what I would measure f in. f would be in cycles per second. One cycle, 2 pi radians. This isn't big, heavy math. But it's more important than a lot of math, just to get these letters straight.
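The letter-keeping above can be sketched in a few lines, using the 60-cycles example; nothing else here is from the lecture:

```python
import math

# Unit bookkeeping for the three quantities in the lecture:
#   omega [radians/second], f [cycles/second = hertz], T [seconds].
# Using the 60-cycle example: f = 60 Hz.
f = 60.0
T = 1.0 / f                  # period, from f * T = 1
omega = 2 * math.pi * f      # omega = 2*pi*f, in radians per second

print(T)                     # seconds for one full oscillation (~0.0167)
print(omega * T)             # 2*pi radians per period, as it must be
```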
So there's capital T, the period, and two measures of frequency. One is omega in radians per second, and the other is f in full cycles per second. So one is 2 pi times the other. Good. OK. And we have this. OK. I think we've got the key ideas here, then, for unforced motion, pure oscillation going on forever. And let me just write what I already mentioned. A different way to express y null would be with small c's: c1 e to the i omega nt, and c2 e to the minus i omega nt. All I'm saying is that this form, with exponentials, is entirely equivalent to this form, with cosine and sine. That form allows me two constants, capital C1 and C2. This form allows me two constants, little c1 and c2. And from that, I have this. From this, I have this. So we really do have exponentials here. And the key message is that for pure oscillation, those exponentials are pure imaginary exponent, i omega nt. OK, that's the best example, the simplest example, the first example. Thanks. |
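The equivalence of the two forms can be checked numerically. In this sketch, the conversion C1 = (c1 - i c2)/2 and C2 = (c1 + i c2)/2 is a standard identity that the lecture doesn't state explicitly, and the constants are arbitrary illustrative values:

```python
import cmath
import math

# The real form c1*cos(w*t) + c2*sin(w*t) equals the complex-exponential
# form C1*e^{i w t} + C2*e^{-i w t} when C1 = (c1 - i*c2)/2 and
# C2 = (c1 + i*c2)/2 (a standard identity, stated here as an assumption).
c1, c2, w = 1.2, -0.7, 2.0          # arbitrary illustrative constants
C1 = (c1 - 1j * c2) / 2
C2 = (c1 + 1j * c2) / 2

for t in [0.0, 0.3, 1.1]:
    real_form = c1 * math.cos(w * t) + c2 * math.sin(w * t)
    exp_form = C1 * cmath.exp(1j * w * t) + C2 * cmath.exp(-1j * w * t)
    # The imaginary parts cancel and the two forms agree.
    assert abs(exp_form - real_form) < 1e-12
print("forms agree")
```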
MIT_Learn_Differential_Equations | Overview_of_Differential_Equations.txt | GILBERT STRANG: OK. Well, the idea of this first video is to tell you what's coming, to give a kind of outline of what is reasonable to learn about ordinary differential equations. And a big part of the series will be videos on first order equations and videos on second order equations. Those are the ones you see most in applications. And those are the ones you can understand and solve, when you're fortunate. So first order equations means first derivatives come into the equation. So that's a nice equation that we will solve, we'll spend a lot of time on. The derivative is-- that's the rate of change of y-- the changes in the unknown y-- as time goes forward come partly from the solution itself. That's the idea of a differential equation, that it connects the changes with the function y as it is. And then you have inputs called q of t, which produce their own change. They go into the system. They become part of y. And they grow, decay, oscillate, whatever y of t does. So that is a linear equation with a right-hand side, with an input, a forcing term. And here is a nonlinear equation. The derivative of y. The slope depends on y. So it's a differential equation. But f of y could be y squared or y cubed or the sine of y or the exponential of y. So it could be not linear. Linear means that we see y by itself. Here we won't. Well, we'll come pretty close to getting a solution, because it's a first order equation. And the most general first order equation, the function would depend on t and y. The input would change with time. Here, the input depends only on the current value of y. I might think of y as money in a bank, growing, decaying, oscillating. Or I might think of y as the distance on a spring. Lots of applications coming. OK. So those are first order equations. And second order have second derivatives. The second derivative is the acceleration.
It tells you about the bending of the curve. If I have a graph, the first derivative we know gives the slope of the graph. Is it going up? Is it going down? Is it a maximum? The second derivative tells you the bending of the graph. How it goes away from a straight line. And that's acceleration. So Newton's law-- the physics we all live with-- would be acceleration is some force. And there is a force that depends, again, linearly-- that's a keyword-- on y. Just y to the first power. And here is a little bit more general equation. In Newton's law, the acceleration is multiplied by the mass. So this includes a physical constant here, the mass. Then there could be some damping. If I have motion, there may be friction slowing it down. That depends on the first derivative, the velocity. And then there could be the same kind of force term that depends on y itself. And there could be some outside force, some person or machine that's creating movement. An external forcing term. So that's a big equation. And let me just say, at first order, we let things be nonlinear, and we had a pretty good chance. If we let these be nonlinear, the chance at second order has dropped. And the further we go, the more we need linearity and maybe even constant coefficients. m, b, and k. So that's really the problem that we can solve as we get good at it: a linear equation-- second order, let's say-- with constant coefficients. But that's pretty much pushing what we can hope to do explicitly and really understand the solution. So: linear with constant coefficients. Say it again. Those are the good equations. And I think of solutions in two ways. If I have a really nice function like an exponential. Exponentials are the great functions of differential equations, the great functions in this series. You'll see them over and over. Exponentials. Say f of t equals-- e to the t. Or e to the omega t. Or e to the i omega t. That i is the square root of minus 1.
In those cases, we will get a similarly nice function for the solution. Those are the best. We get a function that we know like exponentials. And we get solutions that we know. Second best, we get some function we don't especially know. In that case, the solution probably involves an integral of f, or two integrals of f. We have a formula for it. That formula includes an integration that we would have to do, either look it up or do it numerically. And then when we get to completely non-linear functions, or we have varying coefficients, then we're going to go numerically. So really, the wide, wide part of the subject ends up as numerical solutions. But you've got a whole bunch of videos coming that have nice functions and nice solutions. OK. So that's first order and second order. Now there's more, because a system doesn't usually consist of just a single resistor or a single spring. In reality, we have many equations. And we need to deal with those. So y is now a vector. y1, y2, to yn. n different unknowns. n different equations. That's n equations. So here A is an n by n matrix. So it's first order. Constant coefficient. So we'll be able to get somewhere. But it's a system of n coupled equations. And so is this one with a second derivative. Second derivative of the solution. But again, y1 to yn. And we have a matrix, usually a symmetric matrix there, we hope, multiplying y. So again, linear. Constant coefficients. But several equations at once. And that will bring in the idea of eigenvalues and eigenvectors. Eigenvalues and eigenvectors are a key bit of linear algebra that makes these problems simple, because it turns this coupled problem into n uncoupled problems. n first order equations that we can solve separately. Or n second order equations that we can solve separately. That's the goal with matrices: to uncouple them. OK. And then really the big reality of this subject is that solutions are found numerically and very efficiently.
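The uncoupling idea can be sketched with a small symmetric example. The matrix, initial condition, and time here are arbitrary illustrative choices, and NumPy is assumed available:

```python
import math
import numpy as np

# Eigenvalues and eigenvectors turn the coupled system y' = A y into n
# independent scalar equations z_k' = lambda_k z_k.  A is an arbitrary
# symmetric example (symmetric, as the lecture hopes for).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam, V = np.linalg.eigh(A)       # A = V diag(lam) V^T; here lam = [1, 3]

y0 = np.array([1.0, 0.0])
t = 0.7
z0 = V.T @ y0                    # switch to eigenvector coordinates
z = z0 * np.exp(lam * t)         # solve each scalar equation separately
y = V @ z                        # switch back: y(t) = V diag(e^{lam t}) V^T y0

# For this particular A, the exact solution is known in closed form.
exact = [(math.exp(3 * t) + math.exp(t)) / 2,
         (math.exp(3 * t) - math.exp(t)) / 2]
print(y, exact)                  # the two agree
```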
And there's a lot to learn about that, a lot to learn. And MATLAB is a first-class package that gives you numerical solutions with many options. One of the options may be the favorite: ode45. ODE for ordinary differential equations, and then the numbers 4, 5. Well, Cleve Moler, who wrote the package MATLAB, is going to create a series of parallel videos explaining the steps toward numerical solution. Those steps begin with a very simple method. Maybe I'll put the creator's name down: Euler. You know, because Euler was centuries ago, he didn't have a computer. But he had a simple way of approximating. So Euler might be ODE 1. And now we've left Euler behind. Euler is fine, but not sufficiently accurate. ODE 45, that 4 and 5 indicate a much higher accuracy, much more flexibility in that package. So starting with Euler, Cleve Moler will explain several steps that reach a really workhorse package. So that's a parallel series where you'll see the codes. This will be a chalk and blackboard series, where I'll find solutions in exponential form. And if I can, I would like to conclude the series by reaching partial differential equations. So I'll just write some partial differential equations here, so you know what they mean. And that's a goal which I hope to reach. So one partial differential equation would be du dt-- you see partial derivatives-- is the second derivative. So I have two variables now. Time, which I always have. And here is x in the space direction. That's called the heat equation. That's a very important constant-coefficient partial differential equation. So PDE, as distinct from ODE. And so I write down one more. The second derivative of u in time is the same right-hand side, the second derivative in the x direction. That would be called the wave equation. So this is like the first order equation in time. It's like a big system. In fact, it's like an infinite size system of equations. First order in time. Or second order in time. Heat equation. Wave equation.
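Euler's simple way of approximating can be sketched in a few lines, applied here to dy/dt = y, whose exact solution is e to the t; the step count is an arbitrary choice:

```python
import math

# Minimal sketch of Euler's method, the simple approximation the lecture
# says the numerical series begins with, for dy/dt = f(t, y), y(t0) = y0.
def euler(f, y0, t0, t1, n):
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)     # step along the current slope
        t += h
    return y

# dy/dt = y with y(0) = 1, so y(1) should be e.
approx = euler(lambda t, y: y, 1.0, 0.0, 1.0, 1000)
print(approx, math.e)        # close, but Euler is only first-order accurate
```

The small but visible gap between the two printed numbers is why the series moves on from Euler to higher-accuracy methods like ode45.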
And I would like to also include the Laplace equation. Well, if we get there. So those are goals for the end of the series that go beyond some courses in ODEs. But the main goal here is to give you the standard clear picture of the basic differential equations that we can solve and understand. Well, I hope it goes well. Thanks. |
Stanford_CS221_Artificial_Intelligence_Principles_and_Techniques_Autumn_2019 | Logic_1_Propositional_Logic_Stanford_CS221_AI_Autumn_2019.txt | All right. So let's get started. [NOISE] So today's lecture is going to be on logic. Um, to motivate things, I wanna start with, uh, hopefully an easy question. So if X_1 plus X_2 is 10, and X_1 minus X_2 is 4, what is X_1? Someone shout out the answer once you figure it out. 7. So how did you come to get 7? Yeah. You do the algebra thing that you learned a while ago, right? [NOISE] Um, so what's the point of this? Um, so notice that this is, uh, a factor graph where we have two variables, they're connected by two constraints or factors. And you could in principle go and use backtracking search to try different values of X_1 and X_2 until you eventually arrive at the right answer. But clearly, this is not really an efficient way to do it. And somehow in this problem, there's extra structure that we can leverage to arrive at the answer in a much, much easier way. And this is kind of the- going to be the poster child of what we're gonna explore today and on next Monday's lecture, how you can do logical inference to arrive at answers much faster than you might have otherwise. [NOISE] So we've arrived at the end of the- of the class, um, and I wanna just, uh, [NOISE] reflect a little bit on what we've learned, and maybe this will be also a good review for the exam. Um, so in this class, we've based everything on the modeling, um, inference, learning paradigm. And the picture you should have in your head is this. Abstractly, we take some data, we perform some learning on it, and we produce a model. And using that model, we can perform inference, which looks like taking in a question and returning an answer. So what does this look like for all the different types of instantiations we've looked at?
So for search problems, the model is a, is, is a search problem and the inference asks the question: what is the minimum cost path? Um, in MDPs and games, we asked the question what is the maximum value policy? In CSPs, we asked the question [NOISE] what is the maximum weight assignment? And in Bayesian networks, we can answer probabilistic inference queries of the form, what is the probability of some query variables conditioned on some evidence variables? And for each of these cases, we looked at the modeling, we look at the, the inference algorithms, and then we looked at different types of, uh, learning procedures going backwards, maximum likelihood. Uh, we looked at, uh, various reinforcement learning algorithms. We looked at structured perceptron and so on. And hopefully, this, this kind of sums up, um, the kind of the worldview that CS221 is trying to impart is that there are these different co- you know, components. Um, and depending on what kind of modeling you choose, you have different types of algorithms, um, and learning algorithms. Inference algorithms and learning algorithms that emerge. Okay? So we looked at several modeling paradigms, um, roughly broken into three categories. The first is state based models, search problems, MDPs, and games. And here, um, the, the way you think about modeling is in terms of states as nodes in a graph, and actions that take you between different states, um, which incur either a cost or give you some sort of reward, and your goal is just to find paths, or contingent paths, or policies, um, in these graphs. Then we shifted gears to talk about variable based models. Where instead we think about variables, um, and factors that constrain, um, these variables to, uh, take on certain types of values. Um, so in today's lecture, I'm gonna talk about logic-based models. So we're gonna look at propositional logic and first-order logic which are two different types of logical languages or, um, models.
And, um, we're gonna instead think about logical formulas and inference rules which is going to be another way of kind of thinking about, uh, modeling the world. Historically, logic was actually the dominant paradigm in AI, um, before the 1990s. So it might be hard to kind of believe now. But just imagine the amount of excitement that is going into deep learning today. That same amount, uh, of excitement was going into logic-based methods in, in AI in, uh, um, in the '80s and before that too. Um, but there were kind of two problems with logic. One is that, um, logic was deterministic so it didn't handle uncertainty very well, and [NOISE] that's why probabilistic inference and other methods were developed to address this. And it was also rule-based, which didn't allow you to naturally ingest a large amount of data to, you know, uh, guide behavior, and the emergence of machine learning has addressed this. Um, but one strength that kind of has been left on the table is the expressiveness. And I kind of emphasize that logic as you will see gives you, um, the ability to express very complicated things in a very, you know, succinct way. And that is kind of the main point of, uh, logic which I really want everyone to, um, kind of appreciate, and hopefully this will become clear through examples. Um, as I motivated on the first day of class, the reason- uh, one, one good way to think about why we might want logic is, imagine, you wanna lie on a beach, um, and [NOISE] you want your assistant to be able to do things for you but, um, hopefully it's more like, um, Data from Star Trek rather than Siri. Um, you wanna take an assistant and you want to be able to at least tell it information and ask it questions, and have, um, these questions actually be answered in response to, to reflect the information that you've, uh, [NOISE] told it. Um, so just kind of a brief refresher on the first day of class.
I showed you this demo where you can talk to the system, [NOISE] uh, um, and say things and ask a question. So a small example is, um, let's say all students like CS221. It's great and teaches them important things. Um, and, uh, Alice does not like CS221. And then you can ask, um, you know, ''Is Alice a student?'' And the answer should be, ''No'', because it can kind of reason about this one. [NOISE] Um, and just to, uh, dive under the hood a little bit, um, inside, it has some sort of knowledge base that contains the information-- um, we'll come back to this in a second. Okay? Um, so this, this, uh, system needs to be able to digest heterogeneous information in the form of natural language, you know, uh, utterances, and it has to reason deeply with that information. So it can't just do, you know, superficial pattern matching. So I've kind of suggested natural language as an interface to this. Um, and natural language is very powerful because, um, I can stand up here and use natural language to give a lecture and hopefully you guys can understand at least some of it. Um, and- but, you know, let's, let's go with natural language for now. So here- here's an example of how you can draw inferences using natural language. Okay? So a dime is better than a nickel. Um, a nickel is better than a penny. So therefore, a dime is better than a penny, okay? So this seems like pretty sound reasoning. Um, so what about this example, a penny is better than nothing, um, nothing is better than world peace. [inaudible]. Therefore, a penny is better than world peace, right? Okay. So something clearly went, uh, wrong here. And this is because, languages- natural language is kind of slippery. It's not very precise, which makes it very easy to kind of make these, um, these mistakes. Um, but if we step back and think about what is the role of natural language, it's really- language itself is a mechanism for expression. So there are many types of languages.
There's natural languages, um, there's programming languages, which all of you are, you know, familiar with. Um, but we're going to talk about a different type of language called logical languages. Um, like programming languages, they're gonna be formal. So we're gonna be absolutely clear what we mean when we have a statement in a logical language. Um, but, and like natural language, it's going to be, um, declarative. Um, and this is maybe a little bit harder to appreciate right now, but it means that, uh, there's kind of, um, more of a one-to-one isomorphism between logical languages and natural languages, as compared to, um, programming languages and natural language. Okay. Um, so in a logical language, we want to have two, uh, properties. First, a logical language should be, uh, rich enough to represent knowledge about the world. Um, and secondly, it's not sufficient just to represent the knowledge, because you know, uh, a hard drive can represent the knowledge, but you have to be able to use that knowledge in a- in a way to reason with it. Um, a logic contains three, uh, ingredients, um, which I'll, um, go through in a- in subsequent slides. There is a syntax, which defines, um, what kind of expressions are valid or grammatical in this language. Um, there's semantics, which is, for each, um, expression or formula, what does it mean? And mean- means is- is actually, means something very precise which I'll, you know, come back to. And then inference rules allow you to take various f- uh, formulas and, um, do kind of operations on them. Just like in the beginning, when we have the, um, algebra pro- problem, you can add equations, you can move things to different sides. Um, you can perform with these rules, which are syntactic manipulations on these formulas or expressions, that preserve some sort of, um, semantics.
Okay, so just to, uh, talk about syntax versus semantics a little bit, because, I think this might be a s- slightly subtle point which, um, hopefully will be clear with this example. So syntax refers to what are the valid expressions in this language, um, and semantics is about what these ex- expressions mean. So here is an example of two expressions, which have different syntax. 2 plus 3 is not the same thing as 3 plus 2, but they have the same semantics. Both of them mean the number, you know, 5. Um, here's a case where we have two expressions with the same syntax, 3 divided by 2, but they have different semantics, depending on which language you're in. Okay? So in order to define a language precisely, you not only have to specify the syntax, but also the, um, semantics. Because just by looking at the syntax you don't actually know what its, um, meaning is. Unless I tell you. Um, there's a bunch of different logics. Um, the ones highlighted in bold are the ones I'm gonna actually talk about in this class. So today's lecture is going to be, um, on propositional logic. Um, and then, uh, in the next lecture, I'm gonna look at first-order logic. Um, as with most models in general, there's going to be a trade off between the expressivity and the computational efficiency. So as I go down this list, to first-order logic and beyond, um, I'm going to be able to express more and more things, using the language. But it's gonna be harder to do, uh, computation in that language. Okay. So this is the- the kind of a key diagram, um, to have in your head, while I go through syntax, semantics, and inference rules. So for every- I'm gonna do this for propositional logic, and then in, uh, Monday's lecture I'm gonna do it for, uh, first-order logic. Um, so just to get them on the board, um, we have syntax, and we have, um, semantics, and then we have inference rules. Um, let's just write it here.
Um, so this lecture is going to have a lot of definitions and concepts, um, in them. Just to giv- give you a warning. There's a lot of kind of ideas here. Um, they're all very kind of, uh, um, simple by themselves and they kinda piece together, but there's just gonna kind- kinda be a barrage of, uh, terms and I'll try to write them on the board, so that you can kind of remember them. Um, so in order to define, uh, a logical language, um, I need to specify what the formulas are. Um, um, so one maybe other comment about logic is that, some of you have probably taken, um, CS 103 or an equivalent class where you have been exposed to propositional logic. Um, what I'm gonna do here is kind of a much more methodological and- and rigorous treatment of it. Um, I wanted to distinguish the difference between, um, being able to do logic yourself. Like if I give you some, uh, logical expression you can manipulate it. That's different than, um, talking about a general set of algorithms that can operate on logic itself. Right. So remember in AI, we're not interested in you guys doing logic, because that's just I. That's intelligence. Um, [LAUGHTER] but we're interested in developing general principles, or general algorithms, that can actually do the wo- work, uh, for you. Okay? Just like in, uh, in the Bayesian networks, it's very fine and well that you can- you guys can, uh, manipulate and, uh, calculate conditional and marginal probabilities yourself. But the whole point is we devise algorithms like Gibbs sampling, and variable eliminish- elimination, that can work on any, uh, Bayesian network. Just wanna get that out there. Okay. So let's, uh, begin. Um, this is gonna be building from the ground up. So first of all, in propositional logic, there are a set of propositional symbols. These are typically going to be uppercase letters or even, um, words. Um, and these are the atomic formulas. Um, these are formulas that can't be any smaller.
There are going to be logical connectives such as, uh, not, and, or, um, implication and bidirectional implication. And then, the set of formulas are built up recursively. So if F and G are formulas, then these are also formulas. I can have not F. I can have F and G, F or G, F implies G, and F, um, bidirectional implication G, F equivalent to G. Okay? So key, um, you know, ideas here are, we have, um, propositional symbols, um, we're gonna mo- move this down, because we're gonna run out of space. Um, so these are things like A, that gives rise to formulas in general, um, which is gonna be denoted, um, F. Um, and so here are some examples. So A is a formula. Okay? In particular, it's an atomic formula, which is a propositional symbol. Not A is a formula, not B implies C is a formula. This is a formula. Um, this is a formula. Double negation is fine. This is not a formula, because there's no connective between A and not B. Um, this is also not a formula, because what the heck is plus? It's not a connective. So- so I think in- in thinking about logic, you really have to divorce yourself from the common sense that you all come with in interpreting these symbols. Right? Not is just a symbol, or is just a symbol, and they don't have any semantics. In- in fact, I can go and define some semantics, which would be completely different from what you would imagine. It would be a valid logical, um, system. These are just symbols, and all I'm here at defining is what symbols are valid and what symbols are not valid, slash grammatical. Okay? Any questions about the syntax of propositional logic? So the syntax gives you the set of formulas or basically statements you can make. So you can think about it as- as this is our language. If we could only speak in propositional logic, I could say A, or not B, or, um, A implies C. And that's all I would be able to say. Um, and of course now I have to tell you what do these things mean. Okay?
And this is the realm of semantics. So semantics, there's gonna be a number of definitions. So first is a model. So this is really unfortunate and confusing terminology. But this is standard in the logical literature. So I'm just gonna use it. So a model, which is different from our general notion of a model-- um, for example, um, a Hidden Markov Model. A model here, in propositional logic, just refers to an assignment of, um, truth values to propositional symbols. Okay? So if you have three propositional symbols, then there are 8 possible models. Um, A is, uh, 1, B is 0, C is 0 for example. So these are just complete assignments that we saw from factor graphs. But now, um, in this new kind of language. Okay. So that's the first concept. And in first-order logic, models are going to be more complicated. Um, but for now, you think about them as a complete assignment. And I'm using W, because sometimes you also call them, um, worlds. Um, because a complete assignment, slash a model, is supposed to represent the state of the world at any one particular point in time. Yeah. [inaudible] Yeah. So the question is, can each propositional symbol either be true or false? And in logic, as I'm presenting it, yes. Only true or false or 0 or 1. Okay? So these are models. Um, and next is a key thing that actually defines the semantics, which is the interpretation function. So the interpretation function, um, takes a formula and a model and returns true if that formula is, uh, is true in this model and false, um, you know, otherwise. Okay? So I can make the interpretation function whatever I want, um, and that just gives me the semantics. So when I talk about what are the semantics, it's the interpretation function, i of f, w. [NOISE] So the way to think about this is, um, I'm gonna represent formulas as these, uh, horizontal bars. Okay? So this is- think about this as, uh, a thing you'd say. It sits outside of, uh, reality in so- in some sense.
And in this box, I'm gonna draw the space of all possible models. So think about this as a space of situations, uh, that we could be in in the world, and a point here corresponds to a particular model. So an interpretation function takes a formula, takes, um, a model, and says, "Is this statement true if the world looks like this?" Okay? So just to ground this out, um, a little bit more, um, I'm gonna define this for propositional logic, again recursively. Um, so for propositional symbols, um, p, I'm just gonna interpret that propositional symbol as a lookup in, uh, the, the model, right? So if I'm asking, "Hey, is A true?" Well, I go to my, uh, model and I see, well, does it say A is true or false. Okay? That's the base case. So recursively, I can define the interpretation of any formula in terms of its sub formulas. And the way I do this is, suppose I have two formulas f and g and they're interpreted in some way. Okay. And now, I take a formula, let's say f and g. Okay? So what is the interpretation of f and g, um, in w? Well, it's given by this truth table. So if f is 0 and g is, uh, interpreted to be 0, then f and g is also interpreted to be 0. And, um, 0, 1 maps to 0, 1, 0 maps to 0, and 1, 1 maps to 1. So you can verify that this is kind of, um, your intuitive notion of what and should be, right? Um, or is, um, 1 if, um, at least one of f and g are 1. Um, implication is 1 if f is 0 or g is, uh, is, is 1. Um, bidirectional implication just means that f and g evaluate to the same thing. Uh, not f is, you know, clearly just, you know, the negation of whatever the interpretation of f is. Okay. So this slide gives you the full semantics of propositional logic. There's nothing more to propositional logic, at least the definition of what it is, um, aside from this. Um, let me go through an example and then I'll maybe take questions. So, so let's look at this formula, not A and B, bidirectional implication C. Um, in this model, A is 1, B is 1, C is 0.
How do I interpret this formula against this model? I look at the tree that breaks down the formula, starting bottom-up from the leaves. The interpretation of A against w is just 1, because for a propositional symbol I look up what A is, and A is 1 here. The interpretation of ¬A is 0: looking back at the table, if A evaluates to 1, then ¬A evaluates to 0. B is 1 by table lookup, and then ¬A ∧ B is 0, because I take those two values and "and" them together. C is 0 by table lookup, and then the bidirectional implication is interpreted as 1, because 0 is equal to 0. Yeah, question? [Student: So the interpretation function is user-defined in this case, not learned, like learning how to interpret [inaudible]?] Yeah, so the question is: is the interpretation function user-defined? It is just written down. This is it; there's no learning. And it's not user-defined in the sense that everyone goes off and defines their own truth tables. Some logicians came up with this, and that's what it is. But you could define your own logics, and that's kind of a fun thing you could try. Okay? Any other questions about interpretation functions and models? So now we're connecting syntax and semantics: an interpretation function binds our formulas, which are in syntax land, to a notion of models, which are in semantics land. A lot of logic might seem a little pedantic, but that's because we're trying to be very rigorous in a way that doesn't need to appeal to your common-sense intuitions about what these formulas mean. Okay? Any questions? All right. So while the interpretation function defines everything, it's really going to be useful to think about formulas in a slightly different way.
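The recursive interpretation just described can be sketched in a few lines of code. This is a minimal illustration, not the course's actual code: I'm assuming formulas are encoded as nested tuples like `('and', f, g)`, `('not', f)`, with propositional symbols as strings, and a model w as a dict from symbol to True/False.

```python
def interpret(f, w):
    """Return True iff formula f is true in model w (a dict of truth values)."""
    if isinstance(f, str):                # base case: propositional symbol lookup
        return w[f]
    op = f[0]
    if op == 'not':
        return not interpret(f[1], w)
    a, b = interpret(f[1], w), interpret(f[2], w)   # recurse on subformulas
    if op == 'and':
        return a and b
    if op == 'or':
        return a or b
    if op == 'implies':
        return (not a) or b
    if op == 'iff':
        return a == b
    raise ValueError('unknown connective: %r' % op)

# The lecture's example: ¬A ∧ B ↔ C in the model A=1, B=1, C=0.
f = ('iff', ('and', ('not', 'A'), 'B'), 'C')
w = {'A': True, 'B': True, 'C': False}
print(interpret(f, w))   # ¬A∧B is 0 and C is 0, so the iff evaluates to 1 (True)
```

Each connective is just a lookup in its truth table, exactly mirroring the recursive definition above.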
So we're going to think about a formula as representing the set of all models on which its interpretation is true. So M(f) is the set of models w such that f is true in w. Pictorially, this is the f that you say out loud, and what you mean by it is simply this subset of models on which f is true. So if I make a statement, what I'm really saying is that I think we're in one of these models and not in one of the others. That's an important intuition to have: the meaning of a formula is carving out a space of possible situations that you could be in. If I say there's a water bottle on the table, what I'm really saying is that I'm ruling out all the possible worlds where there is no water bottle on the table. Okay? So M(f) is going to be a subset of all the possible models of the world. Here's an example. If I say rain or wet, then the set of models can be represented by this subset of this two-by-two grid. Over here I have rain; over here I have wet. This cell corresponds to no rain but it's wet outside; this one corresponds to it's raining but it's not wet outside. And the set of models of f is this red region, which is these three possible models. I'm going to use this kind of pictorial depiction throughout the lecture, so hopefully this makes sense. One key idea here: remember, I said that logic allows you to express very complicated and large things by very small means. Here I have a very small formula that is able to represent a set of models, and that set of models could be exponentially large. Much of the power of logic comes from the ability to do exactly that. Okay. So here's yet another definition.
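Computing M(f) by brute force makes the "formula carves out a set of models" picture concrete. The tuple encoding of formulas and the function names below are illustrative assumptions, as is the brute-force enumeration over all 2^n assignments (which is exactly why small formulas can denote exponentially large model sets).

```python
from itertools import product

def interpret(f, w):
    """Recursive truth evaluation of a tuple-encoded formula in model w."""
    if isinstance(f, str):
        return w[f]
    op = f[0]
    if op == 'not':
        return not interpret(f[1], w)
    a, b = interpret(f[1], w), interpret(f[2], w)
    return {'and': a and b, 'or': a or b,
            'implies': (not a) or b, 'iff': a == b}[op]

def models(f, symbols):
    """M(f): all complete assignments over `symbols` on which f is true."""
    out = []
    for values in product([False, True], repeat=len(symbols)):
        w = dict(zip(symbols, values))
        if interpret(f, w):
            out.append(w)
    return out

# Lecture example: M(rain ∨ wet) covers 3 of the 4 possible models.
print(len(models(('or', 'rain', 'wet'), ['rain', 'wet'])))   # 3
```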
This one is not so much a new concept as a way to give us a little more intuition for what these formulas and models are doing. A knowledge base is just a set of formulas. Think of it as the set of facts you know about the world; this is what you have in your head. In general it's going to be just a set of formulas, and the key thing is to connect this with semantics. So I'm going to define the set of models denoted by a knowledge base to be the intersection of the sets of models denoted by its formulas. In this case, if rain or snow is this green ellipse and traffic is this red ellipse, then the set of models denoted by the knowledge base is just their intersection. Okay? So you can think of knowledge as how finely we've zoomed in on where we are in the world. Initially, if you don't know anything, anything is possible: all 2^n models are possible. And as you add formulas to your knowledge base, the set of possible worlds that you think might exist shrinks; we'll see that in a second. Okay. Here's an example of a knowledge base. If I have rain, that corresponds to this set of models; rain implies wet corresponds to this set of models. And if I look at the models of this knowledge base, it's just going to be the intersection, which is this red square down here. Any questions about knowledge bases, models, or interpretation functions so far? All right. So as I alluded to earlier, the knowledge base is the thing you have in your head, and as you go through life, you're going to add more formulas to your knowledge base. You're going to learn more things. So your knowledge base just gets unioned with whatever formula you encounter.
And over time, the set of models is going to shrink, because you're just taking intersections. One question is: how much does it shrink? There are a bunch of different cases to contemplate. The first case is entailment. Suppose this is your knowledge base so far, and then someone tells you a formula f that corresponds to this set here. In this case, intuitively, f doesn't add any information or new constraints beyond what was known before. In particular, if you take the intersection of these two sets, you end up with exactly the same set of models you had before, so you didn't learn anything. This is called entailment. It's written with two horizontal bars, KB ⊨ f, and it means that the set of models of f is at least as large as the set of models of KB; in other words, M(f) is a superset of M(KB). For example, rain and snow: if you already knew it was raining and snowing, and someone tells you, "Ah, it's snowy," then you say, "Well, duh, I didn't learn anything." The second case is contradiction. If you believe the world to be somewhere in here, and someone tells you it's actually out here, then your brain explodes. This is where the intersection of the set of models of the knowledge base and the set of models denoted by the formula is empty; it doesn't make sense. So if you knew it was raining and snowing, and someone says it's actually not snowing, then you know that can't be right. Okay. The third case is basically everything else: contingency, where f adds a non-trivial amount of information to the knowledge base. The new set of models, the intersection, is neither empty nor the original knowledge base's set. Okay?
One thing not to get confused by: if the set of models of f were strictly inside that of the knowledge base, that would also be a contingency, because when you intersect them, the result is neither empty nor the original set. Okay. So if you knew it was raining and someone said, "Oh, it's also snowing too," then you'd say, "Oh cool, I learned something." There's a relationship between contradiction and entailment. Contradiction says that M(KB) and M(f) have empty intersection, and this is equivalent to the knowledge base entailing ¬f. So there's a simple proposition: KB contradicts f if and only if KB entails ¬f. The picture to have in your head is that ¬f denotes all the models which are not in M(f); if you think about wrapping that around, it looks like this. All right. So with these three notions (entailment, contradiction, and contingency), which are relationships between a knowledge base and a new formula, we can now go back to our virtual assistant example and think about how to implement these operations. If I have a knowledge base and I tell the virtual assistant a particular formula f, there are three possibilities, which correspond to different appropriate responses. Say I tell it "it's raining." If it's entailment, it should say "I already knew that," because it didn't learn anything new. If it's a contradiction, it should say "I don't believe that," because f is not consistent with its knowledge so far. And otherwise it learned something new. There's also the ask operation. If you're asking a question, again the same three cases, entailment, contradiction, and contingency, can hold, but now the responses should be answers to the question. So if it's entailment, the answer is yes.
And this is a strong yes: not "probably yes," but "definitely yes." If it's a contradiction, the answer is no, and again it's a strong no: it's impossible. And in the other case it's contingent, and you say "I don't know." So for a yes-or-no question there are three responses, not two. Okay, any questions about this? How many of you are following along just fine? Okay, good. All right. So this is a little bit of a digression, and it's going to connect to Bayesian networks. You might be thinking: we kind of did something like this already, right? In Bayesian networks, we had these complete assignments, and we actually defined joint distributions over complete assignments. Now we're talking not about distributions but about sets of assignments, or models. And so we can think about the relationship between a knowledge base and a formula f as also having an analog in Bayesian network land, given by this formula. Remember, a knowledge base denotes a set of models, or possible worlds. In probabilistic terms, that's an event, and that event has some probability; that's the denominator here. When you take f and KB and intersect them, you get some other event, a subset of the first, and you can ask for the probability mass of that intersecting event; that's the numerator. And if you divide those, that gives you the probability of the formula given the knowledge base. So this is actually a pretty nice and direct probabilistic generalization of propositional logic. Yeah? [Student: Does this only work if you have all the variables required? In this scenario there's A, B, C; if we were asking something about D, would it still be "I don't know," because we don't have that information?]
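The three-way classification behind Tell and Ask can be sketched directly from the model-set definitions: compute M(KB) by intersection, then compare with M(f). This is a brute-force illustration under the same assumed tuple encoding of formulas used earlier, not the assistant's actual implementation.

```python
from itertools import product

def interpret(f, w):
    if isinstance(f, str):
        return w[f]
    op = f[0]
    if op == 'not':
        return not interpret(f[1], w)
    a, b = interpret(f[1], w), interpret(f[2], w)
    return {'and': a and b, 'or': a or b,
            'implies': (not a) or b, 'iff': a == b}[op]

def classify(kb, f, symbols):
    """Return 'entailment', 'contradiction', or 'contingency' for KB vs f."""
    all_worlds = [dict(zip(symbols, v))
                  for v in product([False, True], repeat=len(symbols))]
    kb_models = [w for w in all_worlds if all(interpret(g, w) for g in kb)]
    both = [w for w in kb_models if interpret(f, w)]
    if len(both) == len(kb_models):   # M(KB) ⊆ M(f): f adds no information
        return 'entailment'
    if len(both) == 0:                # M(KB) ∩ M(f) = ∅: brain explodes
        return 'contradiction'
    return 'contingency'              # f adds non-trivial information

kb = ['rain', ('implies', 'rain', 'wet')]
print(classify(kb, 'wet', ['rain', 'wet']))            # "I already knew that"
print(classify(kb, ('not', 'wet'), ['rain', 'wet']))   # "I don't believe that"
```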
Yeah, so the question is whether this only works when restricted to the set of predefined propositional symbols; and if you ask about D, then yes, you would say "I don't know." In fact, when you define propositional logic, you have to pre-specify the set of symbols you're dealing with. [Student: In the example we did earlier, "raining" wasn't in the set of things our agent knew about before we started. Is that something we'll get to later?] Yes, so the question is: in practice, you could imagine telling an agent "it is raining" or "it's snowing" or "it's sleeting," introducing novel concepts. It is true that you can build systems with that capability, and the system I showed you has it. It will be clear how we do that when we talk about inference rules, because those allow you to operate directly on the syntax. Here I'm talking about semantics, where, essentially just for convenience, we're defining the whole world up front. Yeah? [Student: In this formula, why is it a union and not an intersection?] So I'm unioning the KB with a formula, which is equivalent to intersecting the models of the KB with the models of the formula. Okay. So P(f | KB) is a number between 0 and 1, and it actually reduces to the logical case: if this probability is 0, that means there's a contradiction, because the intersection is going to have probability 0; and if it's 1, that means it's entailment. And the cool thing is that instead of just saying "I don't know," you can actually give a probabilistic estimate: "I don't know, but it's probably 90%."
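The probabilistic generalization P(f | KB) = P(M(KB ∪ {f})) / P(M(KB)) can be sketched given any joint distribution over complete assignments (e.g. one defined by a Bayesian network). The tuple formula encoding and the tiny example distribution below are illustrative assumptions; the numbers match the rain/wet table worked later in the lecture.

```python
def interpret(f, w):
    if isinstance(f, str):
        return w[f]
    op = f[0]
    if op == 'not':
        return not interpret(f[1], w)
    a, b = interpret(f[1], w), interpret(f[2], w)
    return {'and': a and b, 'or': a or b,
            'implies': (not a) or b, 'iff': a == b}[op]

def prob_given_kb(f, kb, joint):
    """P(f | KB) given `joint`: a list of (model, probability) pairs summing to 1."""
    p_kb = sum(p for w, p in joint if all(interpret(g, w) for g in kb))
    p_both = sum(p for w, p in joint
                 if all(interpret(g, w) for g in kb) and interpret(f, w))
    return p_both / p_kb   # 0 => contradiction, 1 => entailment, else in between

joint = [({'rain': False, 'wet': False}, 0.2),
         ({'rain': False, 'wet': True},  0.1),
         ({'rain': True,  'wet': False}, 0.2),
         ({'rain': True,  'wet': True},  0.5)]
print(prob_given_kb('wet', ['rain'], joint))   # 0.5 / 0.7, roughly 0.714
```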
That's all I'm going to say about probabilistic extensions to logic, but there are a bunch of other things you can do that marry the expressive power of logic with the more advanced capability of probability for handling uncertainty. Yeah? [Student: Are you assuming we have the joint probability distribution?] Yes: to do this, you are assuming that you actually have the joint distribution at hand. And learning it is, of course, a separate problem. For logic, I'm only talking about inference; I'm not going to talk about learning, although there are ways to infer logical expressions too. Okay. So back from the digression; no more probabilities, we're just going to talk about logic. There's another really useful concept called satisfiability, and it's going to allow us to implement entailment, contradiction, and contingency using one primitive. The definition is that a knowledge base is satisfiable if its set of models is non-empty. In other words, it's not self-contradictory. So now we can reduce ask and tell to satisfiability. Remember, ask and tell have three possible outcomes. If I ask one satisfiability question, how many possible outcomes are there? Two. So how am I going to make this work? I'll have to call satisfiable twice. Okay, so let's start by asking whether KB ∪ {¬f} is satisfiable or not. If the answer is no, what can I conclude? Remember, the answer is no: it's not satisfiable, which means that KB contradicts ¬f. And what is that equivalent to saying? [inaudible] Sorry? [inaudible] Yes, it's ¬f. So which of these should it be: entailment, contradiction, or contingency? [inaudible] Yeah, so I'm interested in the relationship between KB and f.
I'm asking the question about KB ∪ {¬f}. Yeah? [inaudible] Yes, exactly: this should be an entailment relation between KB and f. Remember, KB entails f is equivalent to KB contradicting ¬f. Okay, so what if the answer is yes? Then I ask another question: is KB ∪ {f} satisfiable or not? If the answer is no, what should I say? [inaudible] It should be a contradiction, because that literally says KB contradicts f. And finally, if it's yes, then it's contingency. So this is a way to reduce answering ask and tell, which is basically about assessing entailment, contradiction, or contingency, to at most two satisfiability calls. So why are we reducing things to satisfiability? For propositional logic, checking satisfiability is just the classical SAT problem, and it's actually a special case of solving constraint satisfaction problems. The mapping is: propositional symbols become variables, formulas become constraints, and an assignment is what we call a model. So in this case, if we have a knowledge base like this, there are three variables, A, B, and C; we define this CSP, and then if we find a satisfying assignment, we return satisfiable; if we can't find one, we return unsatisfiable. This is called model checking, because we're checking whether a satisfying model exists: model checking takes a knowledge base and outputs whether there is a satisfying model. There are a bunch of very popular algorithms here. There's DPLL, named after its four authors (Davis, Putnam, Logemann, and Loveland), which is essentially backtracking search plus pruning that takes into account the structure of these CSPs whose constraints are propositional logic formulas.
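The two-satisfiability-call reduction above can be sketched end to end. Here I'm using brute-force model checking as the satisfiability procedure purely for illustration (a real system would use DPLL or WalkSat), with the same assumed tuple encoding of formulas.

```python
from itertools import product

def interpret(f, w):
    if isinstance(f, str):
        return w[f]
    op = f[0]
    if op == 'not':
        return not interpret(f[1], w)
    a, b = interpret(f[1], w), interpret(f[2], w)
    return {'and': a and b, 'or': a or b,
            'implies': (not a) or b, 'iff': a == b}[op]

def satisfiable(kb, symbols):
    """Model checking by brute force: does any assignment satisfy every formula?"""
    return any(all(interpret(g, dict(zip(symbols, v))) for g in kb)
               for v in product([False, True], repeat=len(symbols)))

def ask(kb, f, symbols):
    """Answer a yes/no question with at most two satisfiability calls."""
    if not satisfiable(kb + [('not', f)], symbols):
        return 'yes'           # KB contradicts ¬f, i.e. KB entails f
    if not satisfiable(kb + [f], symbols):
        return 'no'            # KB contradicts f
    return "I don't know"      # contingency

kb = ['rain', ('implies', 'rain', 'wet')]
print(ask(kb, 'wet', ['rain', 'wet']))                    # yes
print(ask(kb, ('not', 'rain'), ['rain', 'wet']))          # no
print(ask(kb, 'snow', ['rain', 'wet', 'snow']))           # I don't know
```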
And there's something called WalkSat, whose closest analog we've seen is Gibbs sampling: it's a randomized local search. Okay. So at this point you really have all the ingredients you need to do inference in propositional logic. I've defined what propositional logic formulas are, I've defined the semantics, and I've even told you how to answer entailment, contradiction, and contingency queries by reducing them to satisfiability, which is actually something we've already seen. So that should be it. But now, coming back to the original motivation of X_1 + X_2 = 10, and how we were able to answer that logical query much faster: we can now ask, can we exploit the fact that the factors are formulas rather than arbitrary functions? This is where inference rules come into play. Let me try to explain this figure a little, since it probably looks pretty mysterious at first. I have a bunch of formulas; this is my knowledge base, which accrues formulas over time. These formulas carve out a set of models in semantics land. And this formula here, if its model set is a superset, is entailed by those formulas: I know it's true given my knowledge, which means it's a kind of logical consequence of what I know. So far, what we've talked about is taking formulas and doing everything over in semantics land. What I'm going to talk about now is inference rules, which allow us to operate directly on the syntax and hopefully get some results that way. Okay. Here's an example of making an inference. If I say it is raining, and I tell you that if it's raining, it's wet (rain implies wet), then what should you be able to conclude? It's wet, right?
So I'm going to write this inference rule with this fraction-looking notation: above the line is a set of premises, formulas I know to be true, and if those are true, then I can derive the conclusion below, which is another formula. This is an instance of a general rule called modus ponens, which says: for any propositional symbols p and q, if I have p and p implies q in my knowledge base, then I can derive q. So let's talk about inference rules. Actually, let me do it over here, since we're going to run out of space otherwise. Modus ponens is the first one we'll talk about. Notice that if I can do this type of inference, it's much less work, because it's very localized: all I have to do is look at these three formulas. I don't have to care about all the other formulas or propositional symbols that exist. And going back to the earlier question about what happens if new concepts occur: you can just treat everything as a new symbol. There isn't necessarily a fixed set of symbols you're working with at any given time. Okay, so that's an example of an inference rule. In general, the idea of an inference rule is: if I see f_1 through f_k, which are formulas, then I can add g. And the key idea, as I mentioned before, is that inference rules operate directly on the syntax, not on the semantics. Given a bunch of inference rules, I have this kind of meta-algorithm that does logical inference as follows: repeat until there are no changes to the knowledge base; choose a set of formulas from the knowledge base; if I find a matching rule among the rules, then I simply add g to the knowledge base. Okay?
The other definition I'm going to make is this idea of "derives," or "proves." So I'm going to write KB ⊢ f, now with a single horizontal line, to mean that from this knowledge base, given a set of inference rules, I can produce f via the rules. This is in contrast to entailment, which is defined by the relationship between the models of KB and the models of f. Derivation is just a matter of mechanically applying a set of rules. Okay, so that's a very, very important distinction. And if you think about it, why is this called a proof? Whenever you do a mathematical proof, or some sort of logical argument, you are in some sense just doing logical inference: you have some premises, and then you can apply a rule. For example, you can multiply both sides of the equation by two. That's a rule; you can apply it, and you get some other equation, which you hope is true as well. Okay. So here's an example; just for fun, I'll do it over here. I can say "it is raining," and that gives me a knowledge base containing rain. "If it is raining, it is wet" is the same as rain implies wet. And just in case you're rusty on your propositional logic: p implies q is the same as ¬p ∨ q. Notice that wet also appears in my knowledge base, because in the background the system is running forward inference to try to derive as many conclusions as it can. And if I say "if it is wet, it is slippery," now I have wet implies slippery, and I also derive slippery. I also derive rain implies slippery, which, as you'll see, is actually not derivable from modus ponens; so behind the scenes this is actually a much fancier inference algorithm.
But the idea here is that you have your knowledge base; you can pick up rain and rain implies wet, and then add wet; you can pick up wet and wet implies slippery, and then add slippery. And with modus ponens alone, you can't actually derive some things. You can't derive ¬wet, which is probably good, because it's not true. And you also can't derive rain implies slippery, which actually is true, but modus ponens is not powerful enough to derive it. Okay. So the burning question you should have in your head: I've talked about two relations between a knowledge base KB and a formula f. There's the entailment relation, and this is really what you want, because this is semantics; you care about meaning. And there's KB ⊢ f, derivation, which is a syntactic relationship. So what's the connection? In general, there is no connection, but there are concepts that will help us think about it. Semantics: when you look at semantics, you should think about the models denoted by the formulas. Syntax: just some set of rules that someone made up. Okay, so how do these relate? To understand this, imagine you have a glass, and inside the glass are formulas: in particular, the formulas which are true. This glass is all formulas f such that f is entailed by the knowledge base. Now, soundness is a property of a set of rules. It asks: if I apply these rules until the end of time, do I stay within the glass? Am I always going to generate formulas inside the glass, formulas which are semantically valid, entailed? Soundness is good. Completeness goes in the other direction: it says that I'm going to generate all the formulas which are true, entailed. I might generate extra stuff, but at least I'll cover everything.
That's what it means to be complete. So the model you should have in your head: you want the truth, the whole truth, and nothing but the truth. Soundness is really about nothing but the truth, and completeness is about the whole truth. Ideally you'd have both; sometimes you can't, so you're going to have to pick your battles. But generally, you want soundness. You can maybe live without completeness, but if you're unsound, you're just going to generate erroneous conclusions, which is bad; whereas if you're incomplete, then maybe you just can't infer certain things, but at least the things you do infer are actually true. Okay. So how do we check soundness? Is modus ponens sound? There's a rigorous way to do this: look at the two premise formulas, rain and rain implies wet, and then look at their models. Rain corresponds to this set of models here; rain implies wet corresponds to this set. When I intersect them, that's the set of models denoted by the knowledge base, which is this corner here. And I have to check whether that is a subset of the models of wet. Wet is over here, and the (1,1) corner is indeed a subset of {(1,1), (0,1)}. So this rule is sound. Why is it a subset check? Because that's just the definition of entailment, right? Okay, so let's do another example. If someone said it was wet, and you know that rain implies wet, can you infer rain? Well, let's double-check. What are the models of wet? They're here. What are the models of rain implies wet? They're here. When I intersect them, I get these two models over here in dark red. Is that a subset of the models of rain? Nope. So this rule is unsound. Okay.
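The soundness check just walked through (intersect the models of the premises, then test the subset relation with the conclusion's models) can be sketched as a small program. The tuple encoding of formulas is an illustrative assumption; `rule_is_sound` is a hypothetical helper name, and it phrases the subset check as "there is no model of all the premises where the conclusion fails."

```python
from itertools import product

def interpret(f, w):
    if isinstance(f, str):
        return w[f]
    op = f[0]
    if op == 'not':
        return not interpret(f[1], w)
    a, b = interpret(f[1], w), interpret(f[2], w)
    return {'and': a and b, 'or': a or b,
            'implies': (not a) or b, 'iff': a == b}[op]

def rule_is_sound(premises, conclusion, symbols):
    """Sound iff every model of all the premises is also a model of the conclusion."""
    for v in product([False, True], repeat=len(symbols)):
        w = dict(zip(symbols, v))
        if all(interpret(p, w) for p in premises) and not interpret(conclusion, w):
            return False   # counterexample: premises hold, conclusion fails
    return True

syms = ['rain', 'wet']
# modus ponens: rain, rain -> wet, therefore wet
print(rule_is_sound(['rain', ('implies', 'rain', 'wet')], 'wet', syms))   # True
# the unsound rule from the lecture: wet, rain -> wet, therefore rain
print(rule_is_sound(['wet', ('implies', 'rain', 'wet')], 'rain', syms))   # False
```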
So in general, soundness is actually a fairly easy condition to check, especially in propositional logic; in higher-order logics it's not as simple. Now, completeness is a different story, which I'm not going to have time to do full justice in this class. But here's an example showing that modus ponens is incomplete for propositional logic. Here the knowledge base has rain, and (rain or snow) implies wet. Is wet entailed? Well, it's raining, and if I know that raining or snowing implies wet, then it should be wet. How many of you say yes? Yeah, it should be entailed, right? Okay, but what does modus ponens do? All its rules look like this, and you can't actually arrive at wet with modus ponens, because modus ponens can't reason about "or," about disjunction. Yeah? [Student: Is it possible for it to be right about the rain or snow? Or is it saying that it's not possible for it to be not wet, given rain?] Right: you already know that it's raining, so you should be able to say that it's wet. Okay, so modus ponens is incomplete. We can be sad about this, and there are two ways to fix it. The first way is to say, okay, propositional logic was too fancy. Question? [Student: Just going back to the notation, when it says KB = {rain, (rain or snow) implies wet}, is that implying any type of assignment to rain? Is it saying that it is raining, or just that we have a variable rain?] Yeah, so the question is: what does this mean? The knowledge base is a set of formulas, and this particular formula is rain. And remember, the models of a knowledge base are the models where all the formulas are true. So yes, in this case it does commit to rain being 1.
The models of KB only include the models where rain is 1; otherwise that formula would be false. [Student: Thank you.] Yeah? [Student: How can we have a probability over a model, as in the digression from before?] Okay, so remember that a model here is just an assignment to a set of propositional symbols, or variables. When we talked about Bayesian networks, we were defining a distribution over assignments to all the variables. So here, what I'm saying is: assume there is some distribution over complete assignments to random variables, and I can use that to compute probabilistic queries of the form P(formula | knowledge base). Am I answering your question? [Student: If you have two models that contradict, they can't be in the same knowledge base, right?] If you have two formulas that contradict, then the intersection of their model sets is going to be empty. So let me do an example. Imagine you have these two variables, rain and wet. A Bayesian network might assign probability 0.1 to one state, and so on (I should make these sum to one), some distribution over these states. And if I have the formula rain, that corresponds to these models, so I can write P(rain) = 0.2 + 0.5 = 0.7. And if I want P(wet | rain), that's going to be the probability of the conjunction, wet and rain, which is here, 0.5, divided by the probability of rain, which is 0.7. Does that help? Okay. So: modus ponens is sound, but it's not complete.
So there are two things we can do about this. We can either say propositional logic is too fancy and restrict it so that modus ponens becomes complete with respect to the restricted set of formulas, or we can use more powerful inference rules. Today we're going to restrict propositional logic to make modus ponens complete, and next time we'll show how resolution, an even more powerful inference rule, can be used to make arbitrary inferences; and that's what's powering the system I showed you. Okay. So, a few more definitions. We're going to define propositional logic with horn clauses. A definite clause is a propositional formula of the following form: some propositional symbols p_1 through p_k, all conjoined together (conjoined just means and-ed together), implies some other propositional symbol q. The intuition of this formula is: if p_1 through p_k hold, then q also holds. So here are some examples. Rain and snow implies traffic is a definite clause; a bare symbol like traffic is also a definite clause. This next one is a non-example: it's a valid propositional logic formula, but not a valid definite clause. And here is another non-example: rain and snow implies traffic or peaceful. This is not allowed, because the only thing allowed on the right-hand side of the implication is a single propositional symbol, and there are two things over here. Okay? So a horn clause is a definite clause or a goal clause. A goal clause might seem a little mysterious, but it's defined as p_1 through p_k implies false, and the way to think about it is as the negation of a conjunction of things: remember, p implies q is ¬p ∨ q, so this would be ¬p ∨ false, which is just ¬p. Okay? So now we have these horn clauses. Now, remember the inference rule modus ponens.
We are going to slightly generalize this to include not just p implies q, but p_1 through p_k implies q. So you get to match on premises, which are formulas that are atomic propositional symbols, plus a rule that looks like this, and you can derive, or prove, q from that. Okay? So as an example: if you see wet, weekday, and wet-and-weekday-implies-traffic, those three formulas, then you're able to add traffic. Okay. So here's the claim: modus ponens is complete with respect to horn clauses for propositional logic. In other words, suppose that the knowledge base contains only horn clauses and that p is an entailed propositional symbol; by entailed, I mean KB actually entails p semantically. Then applying modus ponens will derive p. This means that the two relations are equivalent, and you can celebrate, because you have both soundness and completeness. Okay. So just a quick example of this. Imagine this is your knowledge base, and you're asking the question: is there traffic? And remember, because this is a set of only horn clauses, and we're using modus ponens, which is complete, entailment is the same as being able to derive it using this particular rule. And you would do it in the following way: rain, plus rain implies wet, gives you wet. Wet, weekday, plus wet and weekday implies traffic, gives you traffic, and then you're done. Yeah? [Student question:] You're saying rain and weekday are horn clauses; why are those horn clauses? Yes. So if you look at the definition of horn clauses, they include definite clauses, and if you look at the definition of definite clauses, they look like this. And k can be 0 here, which means that there's kind of nothing there. Does that make sense?
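The derivation just described (rain gives wet, then wet and weekday give traffic) is just repeated modus ponens, which can be sketched as a small forward-chaining loop. The (premises, conclusion) pair representation is a hypothetical encoding, not the course's actual code.

```python
def forward_chain(clauses):
    """Repeatedly apply modus ponens: whenever every premise of a clause is
    a known fact, add its conclusion. Returns all derivable symbols."""
    facts = {q for premises, q in clauses if not premises}  # k = 0 clauses
    changed = True
    while changed:
        changed = False
        for premises, conclusion in clauses:
            if conclusion not in facts and premises <= facts:
                facts.add(conclusion)
                changed = True
    return facts

kb = [
    (frozenset(), "rain"),                          # fact: rain
    (frozenset(), "weekday"),                       # fact: weekday
    (frozenset({"rain"}), "wet"),                   # rain → wet
    (frozenset({"wet", "weekday"}), "traffic"),     # wet ∧ weekday → traffic
]
# forward_chain(kb) derives wet, then traffic.
```

Because modus ponens is complete for horn clauses, this loop derives exactly the entailed symbols when the knowledge base contains only horn clauses.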
It's a little bit like I'm using this notation to exploit the corner case that if you have the AND of zero things, then that's just true. Or you can just say that, by definition, definite clauses include plain propositional symbols; that will do too. Okay. So let me try to give you some intuition for why modus ponens and horn clauses fit together. The way you can think about modus ponens is that it only works with positive information, in some sense. There's no branching, no either-or. Every time you see this pattern, you just definitively declare q to be true. So in your knowledge base, you're just going to build up all these propositional symbols that you know to be true, and the only way you can ever add a new propositional symbol is by matching a set of other things which you definitely know to be true against some rule that tells you q should be true, and then you add q to your knowledge base as well. The problem with the more general clauses is, if you look at this, rain and snow implies traffic or peaceful: you can't just write down traffic, or peaceful, or both of them. You have to reason about the fact that it could be either one, and that is outside the scope of what modus ponens can do. Yeah? [inaudible] and peaceful, that way, you can say like [inaudible]. Yeah, good question. So what happens if it were traffic and peaceful? This is an interesting case where technically it's not a definite clause, but it essentially is. [LAUGHTER] There are a few subtleties here. So if you have a implies b and c, you can rewrite that, and it is exactly the same as having two formulas, a implies b and a implies c, and these are definite clauses. Just like, technically, if I gave you, "Hey, what about not-a or b?" It's not a definite clause by the definition, but you can rewrite it as a implies b.
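That rewrite (a implies b-and-c becomes a implies b plus a implies c) is mechanical; a one-line sketch, using the same hypothetical (premises, conclusion) pair representation as above:

```python
def split_conjunction(premises, conclusions):
    """Rewrite p1 ∧ ... ∧ pk → q1 ∧ ... ∧ qm as m separate definite clauses."""
    return [(premises, q) for q in conclusions]

# a → b ∧ c  becomes  [a → b, a → c]
clauses = split_conjunction(frozenset({"a"}), ["b", "c"])
```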
So there's a slight extension here: you can say not only definite clauses, but all things which are morally horn clauses, right? Where you can do a little bit of rewriting, and then you get a horn clause, and then you can do your inference. Okay? So resolution is the inference rule that we'll look at next time, which allows us to deal with these disjunctions. Okay. So to wrap up: today we talked about logic, and logic has three pieces. We introduced the syntax for propositional logic: there are propositional symbols, which you string together into formulas. Over in syntax land, these are given meaning by semantics. We introduced the idea of a model, which is a particular configuration of the world that you can be in. A formula denotes the set of models in which it is true; this is given by the interpretation function. Then we looked at entailment, contradiction, and contingency, which are relations between a knowledge base and a new formula that you might pick up. To decide these relations you can do model checking, which tests satisfiability, or you can do things in syntax land by operating directly with inference rules. So that's all I have for today. I'll see you next Monday. |
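The model-checking route mentioned in the wrap-up can be sketched by brute-force enumeration: KB entails f exactly when every model of KB is also a model of f. Representing formulas as Python predicates over an assignment dictionary is an illustrative choice, not the course's actual implementation.

```python
from itertools import product

def models(formula, symbols):
    """All truth assignments under which the formula evaluates to true."""
    return {assignment
            for assignment in product([False, True], repeat=len(symbols))
            if formula(dict(zip(symbols, assignment)))}

def entails(kb, f, symbols):
    """KB entails f iff models(KB) is a subset of models(f)."""
    return models(kb, symbols) <= models(f, symbols)

symbols = ["rain", "wet"]
# KB: rain, and rain → wet (written as ¬rain ∨ wet)
kb = lambda m: m["rain"] and (not m["rain"] or m["wet"])
```

Enumeration is exponential in the number of symbols, which is exactly why the restricted horn-clause setting with modus ponens, or resolution next time, matters.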
Stanford_CS221_Artificial_Intelligence_Principles_and_Techniques_Autumn_2019 | Overview_Artificial_Intelligence_Course_Stanford_CS221_Learn_AI_Autumn_2019.txt | All right, let's get started. Please try to have a seat if you can find one, and let's get the show on the road. So welcome, everyone, to CS221; this is Artificial Intelligence. And if you're new to Stanford, welcome to Stanford. So first, let's do some introductions. I'm Percy, and I'm going to be one of your instructors. I'm teaching this class with Dorsa over there, so, Dorsa, do you want to say hi? Hi guys, I'm Dorsa. I'll be co-teaching this class with Percy. I'm a professor in robotics and robotic interaction. Super excited about teaching this class and [inaudible]. Great. So we're going to be trading off throughout the quarter. And we also have a wonderful teaching team: these are your CAs. If all the CAs could stand up, I'll give each person an opportunity to say three words about what they're interested in. So let's start with [inaudible], because you're the head CA. Hello. My name is [inaudible]. I'm a PhD student, and I'm interested in natural language processing. Yay. [LAUGHTER] Hi. My name is [inaudible]. I'm a second-year master's student, and I'm interested in machine learning and data mining. Hi. I'm [inaudible]. I'm a second-year master's student, and I'm interested in machine learning and natural language processing. Hi everyone, my name is [inaudible]. I'm a master's student, and I'm interested in computer vision. Let's go over there. Great. Now, any new TAs in the back? No. Well, they're all on the slide. Okay. So as you can see, we have a very diverse team, so when you're thinking about final projects later in the quarter, you can tap into this incredible resource. So, three quick announcements.
So there's going to be a section every week, which will cover both review topics and advanced topics. This Thursday there's going to be an overview; if you're rusty on Python or rusty on probability, come to this and we'll get you up to speed. The first homework is out; it's posted on the website and due next Tuesday at 11:00 PM, so remember the time, that matters. All submissions will be done on Gradescope; there's going to be a Gradescope code posted on Piazza, so look out for that later. Okay. So now let's begin. When I first started teaching this class seven years ago, I used to have to motivate why AI was important and why, if you studied it, you'd have a lot of impact in the world. But I feel like I don't really need to do this anymore. It's kind of inescapable: you pick up the news in the morning and you hear something about AI. And indeed, we've seen a lot of success stories: AIs that can play Jeopardy, Go, Dota 2, even poker, all these kinds of games, at superhuman performance. They can also read documents and answer questions, do speech recognition, face recognition, even medical imaging, and you read about how successful these technologies have been. And if you take a look outside technical circles, there are a lot of people in policy trying to ask what is going on with AI, and you hear these very broad claims about how transformative AI will be to the future of work and to society, and even some claims bordering on catastrophic consequences. What's going to happen in the future, no one knows, but it is fair to say that AI will be transformative. But how did we get here? To answer that, I want to take a step back to the summer of 1956.
So the place was Dartmouth College. John McCarthy, who was then at MIT, and who after that founded the Stanford AI Lab, organized a workshop at Dartmouth College with some of the best and brightest minds of the time: Marvin Minsky, Claude Shannon, and so on. And they had this not-so-modest goal: the idea that every aspect of learning, or any other feature of intelligence, could be so precisely described that a machine could simulate it. So they were after the big question of how you solve AI. Now, they didn't make that much progress over the summer, but a lot of programs and interesting artifacts came about from that time. There were programs that could play checkers or prove theorems, sometimes even more elegantly than the human proof would look. And there was a lot of optimism. People were really, really excited, and you can see these quotes by all these excited people who proclaimed that AI would be solved in a matter of years. But we know that didn't really happen, and there's this folklore example. People were trying to do machine translation. So you take an English sentence like 'The spirit is willing but the flesh is weak', you translate it into Russian, which was the language of choice for the US government at that time, and you translate back into English, and this is what you get: 'The vodka is good but the meat is rotten'. The government didn't think that was too funny, so they cut off the funding [LAUGHTER], and it became the first AI winter. So there was a period where AI research was not very active and not very well funded. So what went wrong here? These were really smart people, right? They just got a little ahead of themselves. Two problems: one is that the compute was simply not there, right?
It was millions or even billions of times less, orders of magnitude less, than what we have right now. And also, the problems, the way they formulated them, intrinsically relied on exponential search, which, no matter how much compute you have, you're never going to win that race. They also had limited information, and this is maybe a more subtle point: if I gave you infinite compute and asked you to translate, I don't think you would be able to figure it out, because it's not a computation problem. You need to learn the language, and you need to experience all the subtleties of language, to be able to translate. But on the other hand, while AI wasn't solved, a lot of interesting contributions to computer science came out of it. Lisp had a lot of ideas that underlie many of the high-level programming languages we have; garbage collection; time-sharing, allowing multiple people to use one computer at the same time, which is something we take for granted. And also this paradigm of separating what you want to compute, which is modeling, from how you do it, which is inference, which we'll get to a little bit later. Okay. So people forget quickly, and in the '70s and '80s there was a renewed generation of people getting excited about AI again. And this time it was all about knowledge, right? Knowledge is power, and there were a lot of expert systems created. The idea was that if you could encode an expert's knowledge about the world, then you could do kind of amazing things, and at the time the knowledge was generally encoded as a set of rules. A lot of programs were written, and you'll notice that the scope is much narrower now.
The goal wasn't to solve all of AI, but to really focus on some chosen problems, like diagnosing diseases or converting customer orders into parts, and this was the first time that AI, I think, really had a real impact on industry. People were actually able to make useful products out of this. And knowledge did actually play a key role in curbing this exponential growth that people were worried about. But of course, it didn't last long. Knowledge as deterministic rules was simply not rich enough to capture all the nuances of the world. It required a lot of manual effort to maintain, and, again, a pattern of over-promising and under-delivering, which seems to plague AI people, led to the collapse of the field and a second AI winter. Okay, so that's not the end of the story either. But actually it's not really the beginning either, so I'm going to step back further in time, to 1943. What happened in 1943? There was a neuroscientist, McCulloch, and a logician, Pitts, who were marveling at how the human brain is able to do all of these complicated things, and they wanted to formulate a theory about how this could all happen. So they developed a theory of artificial neural networks, and you can think of this as the roots of deep learning, in some sense. What's interesting is that they looked at neurons and logic, two things that you might not necessarily associate with each other, and showed how they were connected mathematically. And a lot of the early work in this era around artificial neural networks was about studying them from a mathematical perspective, because at that time the compute wasn't there; you couldn't really train any models.
And then in 1969, something interesting happened. There's this book by Minsky and Papert called Perceptrons, and this book did a lot of mathematical analysis. One of its many results showed that linear classifiers couldn't solve the XOR problem. Another way to think about the problem is: given two inputs, can you tell whether they are the same or different? So it shouldn't be a hard problem, but linear classifiers can't do it. And for some reason, which I don't quite understand, it killed off neural net research, even though the book said nothing about what a deeper network could do. But it's often said that this book swung things from people who were interested in neural networks toward the field of AI being very symbolic and logic-driven. Still, there was always this kind of minority group who were really invested in and believed in the power of neural networks, and I think it was always just a matter of time. So in the '80s, there was renewed interest. People discovered, or rediscovered, the backpropagation algorithm, which provided a generic algorithm for training multilayer neural networks, because a single layer, remember, was insufficient for a lot of things. And then, in one of the early success stories, Yann LeCun in 1989 applied a convolutional neural network to recognize handwritten digits, and this actually got deployed by the USPS for reading zip codes. So this was great, but it wasn't until this decade that this area of neural networks really took off, under the moniker of deep learning. AlexNet in 2012 was a huge transformation, where they showed gains on the ImageNet benchmark and overnight transformed the computer vision community.
AlphaGo, as many of you know, and many other successes followed; the rest is history. Okay, so there are these two intellectual traditions. The name AI has always been associated with the John McCarthy logical tradition; that's where it started. But as you can see, there is also this neuroscience-inspired tradition of AI, and the two really had some deep philosophical differences and fought with each other quite a bit over the decades. But I want to pause for a moment and think about whether there were actually deeper connections here. Remember McCulloch and Pitts: they were studying artificial neural networks, but the connection was to logic, right? So even from the very beginning, there is this synergy that some people can often overlook. And if you take a look at AlphaGo: if you think about the game of Go, or many games, you can write down the rules of Go in logic in just a few lines. So it's a mathematically well-defined logic puzzle, in some sense. But somehow, the power of neural networks allows you to develop models that actually play Go really, really well. So this is one of the deep mysteries, and I think it's an open, standing challenge in AI. As with any story, this isn't the full picture, and I want to point out on this slide that AI has drawn from a lot of different fields. Many of the techniques that we're going to look at came from elsewhere: maximum likelihood came from statistics; games came from economics; optimization and gradient descent came from the '50s, completely unrelated to AI. These techniques developed in different contexts. And so AI is kind of like New York City.
It's like a melting pot where a lot of these techniques get unified and applied to interesting problems. And that's what makes it really interesting: the new avenues that are opened up by unique combinations of existing techniques. Okay, so that was a really brief history of how we got here. Now I want to pause for a moment and think about the goal: what are AI people trying to do? There are two ways to think about this, and sometimes the conflation of them causes a lot of confusion. I like to think about it as AI as agents, and AI as tools. The first view asks the standard question: how can we create, or recreate, intelligence? The second asks: how can we use technology to benefit society? These two are obviously very related, and they have a lot of shared technical overlap, but philosophically they're different. So let me explain this a little bit. The idea with AI agents, and this is a lot of what gets associated with AI, especially since the portrayal in science fiction certainly encourages this view, is the following: we're human beings, and what you do is look in the mirror and say, wow, that's a really smart person, and you think, okay, what can humans do that is so amazing? Well, they can see and perceive the world and recognize objects. They can grasp cups and drink water without spilling it. They can communicate using language, as I'm doing to you right now. We know facts about the world: declarative knowledge, such as what the capital of France is, and procedural knowledge, like how to ride a bike.
We can reason with this knowledge, and maybe ride a bike to the capital of France. And then, really importantly, we're not born with all of this, right? We're born with basically none of these capabilities, but we are born with the capacity and potential to acquire them over time through experience. And learning seems to be the critical ingredient that drives a lot of the success in AI today; with human intelligence too, it's clear that learning plays a central role in getting us to the level we're operating at. So each of these areas has spawned entire subfields, and people in them are wondering how you can make artificial systems that have the language, motor, or visual-perceptual capabilities that humans have. But are we there yet? I would say that we are very far. If you look at the ways machines have been successful, it's all with a narrow set of tasks, millions or billions of examples, and a lot of computation crunched to really optimize whatever task you come up with. Whereas humans operate in a very different regime. They don't necessarily do any one thing especially well, but they have such a diverse set of experiences, can solve a diverse set of tasks, and learn each individual task from very few examples. And it's still a grand challenge, from a cognitive perspective, how to build systems with the level of capability that humans have. So the other view is AI as tools. Basically, we say: okay, it's kind of cool to think about how we can recreate intelligence, but we don't really care about making more things like humans. We already have a way of doing that; it's called babies. [LAUGHTER]
So instead, what we'd really like to do is not make something that's like a human, but make systems that help humans. Because, after all, we're humans; I guess it's a little bit selfish, but we're in charge right now. And a lot of this view, and a lot of the success stories in AI, are really different from the things that you'd expect this humanoid robot to come into your house and do. For example, here's a project from Stefano Ermon's group. There's a lot of poverty in the world, and part of the problem is just understanding what's going on, so they had this idea of using computer vision on satellite imagery to predict things like GDP. This is obviously not a task that our ancestors in Africa were getting really good at, but it nonetheless uses convolutional neural networks, a technique that was inspired by the brain, and so that's kind of interesting. You can also find another application in saving energy, by trying to figure out when to cool data centers. Now, as AI is being deployed in more mission-critical situations, such as self-driving cars or authentication, a few new issues come up. For example, there's this phenomenon called adversarial examples, where you can take these cool-looking glasses, put them on your face, and fool a face recognition system into thinking that you're actually someone else. Or you can put these stickers on stop signs and get the system to think it's a speed limit sign. So these are clearly big problems if we think about the widespread deployment of AI.
There's also something less catastrophic but still pretty upsetting, which is the biases that many of you have probably read about in the news. For example, if you take Malay, a language whose written form doesn't distinguish between he and she, and you stick it into Google Translate, you see that 'she works as a nurse' but 'he works as a programmer', which is encoding certain societal biases into the actual models. And one important point I want to bring up is: how is machine learning and AI working today? Well, society exists; society generates a lot of data; we train on this data, trying to fit it and mimic what it's doing, and then we use the predictions. What could possibly go wrong, right? So certainly a lot of people have been thinking about how these biases creep in, and it's an open and active area of research. Something a little more sensitive is asking, well, these systems are being deployed to all these people, whether they want it or not. And this actually touches on people's livelihoods; it impacts people's lives in a serious way. So Northpointe was a company that developed software called COMPAS that tries to predict criminal risk, essentially how risky someone is. And ProPublica, this organization, realized: whoa, whoa, whoa. You have this system that, given an individual who didn't reoffend, is about twice as likely to incorrectly classify blacks as high risk as non-blacks. So this seems pretty problematic. And then Northpointe comes back and says: actually, I think we're being fair.
So given a risk score of 7, we were fair, because 60% of whites reoffended and 60% of blacks reoffended. The point here is that, sadly, there's actually no clean solution to this, in some sense. People are formulating different notions of fairness and equality between how you predict on different groups, and you can have different notions of fairness which all seem reasonable from first principles but which mathematically can be incompatible with each other. So this is again an open area of research, where we're trying to figure out, as a society, how to deal with machine learning being used in these kinds of critical situations. Okay. So, a summary so far. There's the agents view, where we're trying to really dream and think about how you get capabilities like learning from very few examples, which humans have, into machines, maybe opening up a different set of technical capabilities. But at the same time, we really need to be thinking about how these AI systems affect the real world, and things like security, biases, and fairness all show up. It's also interesting to note that a lot of the challenges in deploying an AI system don't necessarily have to do with humans at all. I mean, humans are incredibly biased, but that doesn't mean we want to build systems that mimic humans and inherit all the flaws that humans have. Okay. Any questions about this? Maybe I'll pause for a moment. So let's go on. What I want to do next is give an overview of the different topics in the course. And the way to think about all of this is that, in AI, we're trying to solve really complex problems. The real world is really complicated.
But at the end of the day, we want to produce some software, or maybe some hardware, that actually runs and does stuff, right? And so there's a very considerable gap between these things. So how do you even approach something like self-driving cars or diagnosing diseases? You probably shouldn't just sit down at a terminal and start typing, because then there's no overarching structure. So what this class is going to do is give you one example of a structure that will hopefully help you approach hard problems and think about how to solve them in a more principled way. This is a paradigm that I call the modeling-inference-learning paradigm. The idea here is that there are three pillars, which I'll explain in a bit, and we can focus on each of these things in turn. So the first pillar is modeling. What is modeling? Modeling is taking the real world, which is really complicated, and building a model out of it. So what is a model? A model is a simplification that is mathematically precise, so that you can do something with it on a computer. One thing that's necessary is that modeling has to simplify things and throw away information. So part of the art is figuring out what information to pay attention to and what information to keep. This is going to be important, for example, when you work on your final projects: you have a real-world problem, you can't have everything, and you have to figure out judiciously how to manage your resources. So here's an example.
If you want, for example, to build a system that can find the best way to get from point A to point B in a city, you can formulate the model as a graph, where nodes are points in the city and edges represent the ability to go between those points, with some sort of cost on the edges. Okay. So now, once you have your model, you can do inference, and what inference means is asking questions about your model. On this model you can ask, for example: what is the shortest path from this point to this point? Right. And because in model land it's now a mathematically well-defined problem, it's within the realm of developing algorithms to solve that problem. And most of inference is about being able to do these computations really efficiently. And finally, learning addresses the problem: where does this model come from? In any kind of realistic setting, the model might have a lot of parameters, maybe millions of parameters, and if it wants to be faithful to the real world, how do you get all that information in there? Manually encoding this information turns out not to be a good idea; this is, in some sense, what AI from the '80s was trying to do. So the learning paradigm is as follows. What we're going to do is specify a model without parameters; think of it as a skeleton. In this case we have a graph, but we don't know what the edge weights are. And now we have some data, maybe of the form: people tried to go from X to Y and it took 10 minutes, or an hour, and so on. And then from this data we can learn to fit the parameters of the model; we can assign costs to the edges that are representative of what the data is telling us, okay?
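The shortest-path query on such a graph model can be answered with Dijkstra's algorithm; here is a minimal sketch, where the toy city graph and its edge costs are made up for illustration.

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm on a graph of the form {node: [(neighbor, cost), ...]}.
    Returns the minimum total cost from start to goal (inf if unreachable)."""
    frontier = [(0, start)]          # priority queue of (cost so far, node)
    best = {start: 0}
    while frontier:
        cost, node = heapq.heappop(frontier)
        if node == goal:
            return cost
        if cost > best.get(node, float("inf")):
            continue                 # stale queue entry
        for neighbor, edge_cost in graph.get(node, []):
            new_cost = cost + edge_cost
            if new_cost < best.get(neighbor, float("inf")):
                best[neighbor] = new_cost
                heapq.heappush(frontier, (new_cost, neighbor))
    return float("inf")

# Toy city map (illustrative): A → C → B → D costs 2 + 1 + 5 = 8.
city = {"A": [("B", 4), ("C", 2)], "C": [("B", 1)], "B": [("D", 5)]}
```

This is the inference step: once the model (graph plus costs) is fixed, the question "shortest path from A to D?" is a well-defined computation.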
So in this way, we can write down a model without parameters, feed in the data, apply a generic learning algorithm, and get a model with parameters. And now we can go back and do inference and ask questions about it. Okay. So that's the paradigm. And I want to really emphasize that learning, as I've presented it, is not about any one particular algorithm like nearest neighbors or neural networks. It's really a philosophy of how you go about approaching problems: by defining a model and then not having to specify all the details, but filling them in later. Okay. So here's the plan for the course. We're going to go from low-level intelligence to high-level intelligence, and this is the intelligence of the models that we're going to be talking about. First we're going to talk about machine learning, and as I alluded to earlier, machine learning is an important building block that can be applied to any of the models we develop. The central tenet of machine learning is that you go from data to model. It's the main driver of a lot of successes in AI, because it allows you to, in software engineering terms, move the complexity from code to data. Rather than having a million lines of code, which is unmanageable, you have a lot of data, which is collected in a more natural way, and a smaller amount of code that can operate on this data. This paradigm has really been powerful. One thing to think about is that machine learning requires a leap of faith. You can go through the mechanics of downloading some machine learning code and training a model, but fundamentally it's about generalization. You have your data and you fit a model, but you don't care about how it performs on that data; you care about how it performs on new experiences.
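As a sketch of what "fitting the parameters" could look like in the toy route-finding setting: everything here is hypothetical (the function name, the data, the choice of fit), and averaging observed travel times per edge is just about the simplest learning algorithm imaginable, but it shows the shape of the idea: skeleton model in, data in, parameterized model out.

```python
from collections import defaultdict

def learn_edge_costs(trips):
    """Fit edge costs from observations of the form (u, v, observed_minutes).
    The 'learning algorithm' here is simply averaging per edge."""
    totals = defaultdict(float)
    counts = defaultdict(int)
    for u, v, minutes in trips:
        totals[(u, v)] += minutes
        counts[(u, v)] += 1
    return {edge: totals[edge] / counts[edge] for edge in totals}

# Made-up data: people went from X to Y and reported how long it took.
data = [('A', 'B', 10), ('A', 'B', 14), ('B', 'C', 60)]
print(learn_edge_costs(data))  # {('A', 'B'): 12.0, ('B', 'C'): 60.0}
```

The learned costs could then be plugged straight into a shortest-path routine, closing the loop between learning and inference.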
And that leap of faith is something that I think gives machine learning its power, but at first glance it's also a little bit magical. It turns out you can actually formalize a lot of this using probability theory and statistics, but that's a topic for another time. Okay. So after we talk about machine learning, we're going to go back and talk about the simplest of models: reflex models. So here's a quiz. What is this animal? Okay, zebra. How did you get it so fast? Well, it's kind of a reflex; your human visual system is so good at doing these things without thinking. Reflex models are models which just require a fixed set of computations. Examples are linear classifiers and deep neural networks, and most of the models that people in machine learning use are of this type; in machine learning, "model" is almost synonymous with a reflex model. The important thing is that there's no foresight involved: you get your input, bam, bam, bam, and here's your output. Okay, so that's great because it's fast. But some problems require a little bit more than that. For example, here's another problem. Okay, quick: white to move. Where does she go? There are probably a few of you who are chess geniuses, but for the rest of us: I have no idea. Wait, who's moving again? So in these kinds of situations, we need something a little more powerful than a reflex. We need agents that can plan and think ahead. The idea behind state-based models is that we model the world as a set of states, which capture any given situation, like a position in a game, and actions that take us between states, which correspond to things you can do in the game.
A lot of game applications fall into this category, as do robotics, motion planning, and navigation. Also, some things you might not think of as planning, such as generation in natural language or generating an image, can be cast in this way as well. There are three types of state-based models, each of which we'll cover over the coming weeks. Search problems are the classic case: you control everything, so you're just trying to find the optimal path. Then there are cases where there's randomness; for example, if you're trying to go from point A to point B, maybe there's traffic that you don't know about, or in a game there might be dice that are rolled. And there's a third category, adversarial games, which covers cases where you're playing an opponent who's actively trying to destroy you. So what are you going to do about it? One of the games we'll be talking about is Pac-Man, and one of the assignments is actually building a Pac-Man agent such as this one. While you're looking at this, think: what are the states, what are the actions, and how would you go about devising a strategy for Pac-Man to eat all the dots and avoid all the ghosts? That's something to look forward to. There's also going to be a competition, so we'll see who ends up at the top. Okay, so state-based models are very powerful, and their value is in having foresight. But some problems are not most naturally cast as state-based models. For example, how many of you play Sudoku, or have played it before? The goal of Sudoku is to fill in the blanks with numbers so that every row, column, and three-by-three sub-block has the digits 1 through 9. So there's a bunch of constraints.
And there's no sense in which you have to do it in a certain order, whereas the order in which you move in chess is pretty important. These types of problems are captured by variable-based models, where you think about a solution to the problem as an assignment to individual variables, under some constraints. Constraint satisfaction problems, which we'll spend a week on, have hard constraints; for example, a person can't be in two places at once. There are also Bayesian networks, which we'll talk about, which are variable-based models with soft dependencies. For example, if you're trying to track a car over time, these variables represent the positions of the car, these E's represent the sensor readings of the position at each time, and inference looks like trying to figure out where the car was given all these noisy sensor readings. That's also going to be another assignment you'll deal with. Okay. So finally we get to high-level intelligence. What is high-level intelligence? I put logic here, for a reason that will become clear. Yes, is there a question? For Sudoku, can you explain why it's not a state-based model? Yeah, so the question is why the Sudoku problem is not a state-based model. You can actually formulate it as a state-based model, by thinking about the sequence of assignments. But it turns out you can formulate it in a more natural way as a variable-based model, which allows you to take advantage of more efficient algorithms to solve it. Think about these models, by analogy, as like different programming languages.
So yes, you could write everything in C++, but sometimes writing in Python or SQL might be easier for some things. Yeah. [inaudible] a state-based problem where you have both adversarial elements and an element of randomness? Yeah, so the question is how do you categorize state-based models where there is both randomness and an adversary. We're going to talk about those as well. I would classify them as adversarial, but there is also a random component that you have to deal with, in games like backgammon. Yeah, question. [inaudible] Yeah, so the question is about whether some of these models are more continuous and some are more discrete. I don't necessarily think of them that way; a lot of the reflex models actually can work in continuous state spaces, for example on images. It's almost a little bit the opposite: the logic-based models are in some sense more discrete, but you can have continuous elements in there as well. In this class, we're mostly going to focus on discrete objects because they're simpler to work with. Okay, so what is this logic? The motivation is: suppose you wanted a little companion whom you could boss around and who could help you do things; that's a better way to say it. You'd like to be able to tell it some information, and then later ask some questions and have the system reply to you. So how would you go about doing this? One way you could think about it is building a system that you can actually talk to using natural language. So I'm going to show you a little demo, which is going to come up in the last assignment on logic; let's see what you think about it.
Okay, so this is going to be a system based on logic; I'm going to tell it a bunch of things and then ask some questions. I want you all to follow along and see if you can play the role of the agent. So I'm going to teach it a few things, like: Alice is a student. It says it learned something. Now let's quiz it: is Alice a student? Okay, good, that worked. Is Bob a student? What should the answer be? I don't know who Bob is. Okay. Now let's do: students are people. Alice is not a person. It doesn't buy that. [LAUGHTER] Okay, so it's doing some reasoning; it's using logic, not just pattern matching. Now let's do: Alice is from Phoenix. Phoenix is a hot city; I know because I've lived there. Cities are places, and if it is snowing, then it is cold. Okay, got it. So: is it snowing? I don't know. How about this: if a person is from a hot place and it is cold, then she is not happy. True, right? I guess those of you who have spent all your lives in California will maybe appreciate this. Okay, so is it snowing now? How many of you say yes, it's snowing? How many say no? You don't know? Okay. [inaudible] Ah. [LAUGHTER] How about if I say: Alice is happy. So is it snowing now? No, it should be no. Okay. So you guys were able to do this. This is an example of an interaction which, if you think about it, is very different from what you'd see in a typical ML system, where you have to show it millions of examples of one particular thing and it can do one task. This is a much more open-ended set of experiences; I hesitate to say the experiences are super rich, but they're definitely diverse. I just give one statement.
I say it once, and all of a sudden it has all the ramifications and consequences built in; it understands at a deeper level. Of course, this is based on logic systems, so it is brittle, but it's a proof of concept to give you a taste of what I mean when I say logic. These systems need to be able to digest this heterogeneous information and reason deeply with it, and we'll see how logic systems can do that. Okay. So that completes the tour of the topics of this class. Now I want to spend a little bit of time on course logistics. All the details are online, so I'm not going to be complete in my coverage, but I want to give you a general sense of what's going on. Okay. So what are we trying to do in this course? The prerequisites are programming, discrete math, and probability. You need to be able to code, and you need to be able to do some math and some basic proofs. These are the classes that are required, or at least recommended; if you have some equivalent experience, that's fine too. And what should you hope to get out of this course? The course is meant to give you a set of tools, using the modeling-inference-learning paradigm, and a way of thinking about problems that hopefully will be really useful when you go out into the world and try to solve real-world problems. As a side product, I also want all of you to become more proficient at your math and programming, because those are the core elements that enable you to do interesting things in AI. A lot of the AI you read about is very flashy, but the foundations are still just math and programming in some sense. Okay.
So the coursework is homeworks, an exam, and a project. There are eight homeworks. Each homework is a mix of written and programming problems centered on a particular application, covering one particular type of model, essentially. Like I mentioned before, there's a competition for extra credit, and there are also some extra-credit problems in the homeworks. When you submit code, we have an auto-grader that runs on all the test cases, but you get feedback on only a subset. It's like in machine learning: you have a train set and you have a test set. So don't train on your test set. [LAUGHTER] Okay. The exam tests your ability to use the knowledge you learn to solve new problems. I think it's worth taking a look at the exam beforehand, because it surprises people: the exam is a little different from the types of problems you see on the homework, and it's more about problem solving. It isn't going to be multiple choice, like "when was Perceptrons published?" It's going to be: here's a real-life problem; how do you model it and how do you come up with a solution? The problems are all written. It's closed book, except you get one page of notes, and making that page is a great opportunity to review all the material and actually learn the content of the class. The project, I think, is a really good opportunity to take all the things we've been talking about in the class, find something you really care about, and try to apply it. You'll work in groups of three, and I really recommend finding a group early; as I emphasize, it's your responsibility to find a good group. Right?
Don't come to us a week before the project deadline and say, "Oh, my group members ditched me." Really try to nail this down; use Piazza, or your other social networks, to find a good group. Throughout the quarter there are going to be milestones for the project, to prevent you from procrastinating until the very end: a proposal, where you brainstorm some ideas; a progress report; and a poster session, which is actually a whole week before the final report is due. The project is very open-ended, which can be really liberating but also a little daunting. We will give you a lot of structure in terms of saying: how do you define your task? How do you implement different baselines and oracles, which I'll explain later? How do you evaluate? How do you analyze what you've done? Each project group will be assigned a CA mentor to help you through the process, and you're always welcome to come to my office hours, or Dorsa's, or any of the CAs', to get additional help, either brainstorming or figuring out the next step. Some policies: all assignments will be submitted on Gradescope. There are seven total late days you can use, at most two per assignment; after that, there's no credit. We're going to use Piazza for all communication, so don't email us directly; leave a post on Piazza. I encourage you to make it public if it's not sensitive, but if it's personal, then obviously make it private, and try to help each other. We'll actually award some extra credit to students who help answer other students' questions. All of the details are on the course website. Okay. So one last thing, and it's really important: the Honor Code. Okay.
If you've been at Stanford, you've probably heard this; if you haven't, then I want to make it really clear. I encourage you all to collaborate and discuss together. But when it comes to actually doing the homeworks, you have to write up your homework and code it independently. So you shouldn't be looking at someone else's writeup, you shouldn't be looking at their code, and you definitely shouldn't be copying code off of GitHub. That hopefully is obvious. Maybe less obvious: please do not post your homework assignments on GitHub. I know you're probably proud of the fact that your Pac-Man agent is doing really well, but please don't post it, because then that becomes an Honor Code violation. When debugging together, it's fine as long as you're looking at input-output behavior. You can say to your partner, "Hey, I put this input into my test case and I'm getting a 3; what are you getting?" That's fine, but remember: don't look at each other's code. And to enforce this, we're going to be running MOSS, which is a software program that looks for code duplication, to make sure the rules are being followed, and changing one variable name isn't going to fool it. Anyway, enough said. [LAUGHTER] Just don't do that, okay? Any questions about this? I want to make sure, because this is important. Or about any of the other logistics? Yeah. [inaudible] The final project you can put on GitHub, yeah. Private GitHub repos are fine. Question in the back? Is it necessary to have a group, or can you do a solo project? The question is: can you do a solo project? You can do a solo project, you can do a project with two people, or you can do a project with three.
I would encourage you to work in groups of three, because you'll be able to do more as a group, and it's not as if we expect one third of the work from a solo project. Okay. Anything else? All right. So in the final section, I want to actually delve into some technical details. We're going to focus right now on the inference and learning components of this course, and I'm going to talk about how you can approach them through the lens of optimization. This might be review for some of you, but hopefully it's a good way to get everyone on the same page. Okay. So what is optimization? There are two flavors of optimization that we care about. There's discrete optimization, where you're trying to find the best discrete object; for example, the path p that minimizes the cost of that path. We're going to talk about one algorithmic tool, based on dynamic programming, which is a very powerful way of solving these complex optimization problems. The key property here is that the set of paths is huge: you can't just enumerate them all, compute the cost of each, and choose the best one, so you're going to have to do something clever. The second brand of optimization is continuous optimization, and formally this is just finding the best vector of real numbers that minimizes some objective function. A typical place this shows up is in learning, where you define an objective function like the training error and you're trying to find a weight vector w that minimizes it; the notation just means w is a list of d real numbers. And we're going to show that gradient descent is an easy and surprisingly effective way of solving these continuous optimization problems.
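Gradient descent itself fits in a few lines. Here's a minimal sketch on a made-up two-dimensional objective; the function names and the objective are illustrative, not the course's, and in practice you'd tune the step size rather than hard-code it.

```python
def gradient_descent(gradient, w, step_size=0.1, num_iters=100):
    """Minimize a differentiable objective by repeatedly stepping
    in the direction opposite to its gradient."""
    for _ in range(num_iters):
        w = [wi - step_size * gi for wi, gi in zip(w, gradient(w))]
    return w

# Toy "training error": F(w) = (w1 - 2)^2 + (w2 + 1)^2,
# whose gradient is (2(w1 - 2), 2(w2 + 1)); the minimum is at (2, -1).
grad = lambda w: [2 * (w[0] - 2), 2 * (w[1] + 1)]
w = gradient_descent(grad, [0.0, 0.0])
print(w)  # approximately [2.0, -1.0]
```

Each iteration shrinks the distance to the minimizer by a constant factor here, which is why a hundred iterations is plenty for this toy problem.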
Okay. To introduce these two ideas, I'm going to look at two problems and work through them. This might also be a good model for how to approach the homework problems, so I'll try to talk you through it in a bit more detail. The first problem is computing edit distance. This might not look like an AI problem, but a lot of AI problems have it as a building block, if you want to do some sort of matching between two words or two biological sequences. The input is two strings, S and T; for example, "a cat" and "the cats". I'll work this out over here on the board. You want to find the minimum number of edits needed to transform S into T. By edits I mean you can insert a character (like inserting an s), delete a character (like deleting this a), or substitute one character for another (like replacing this a with a t). Here are some examples. What's the edit distance of "cat" and "cat"? It's 0; you don't have to do anything. "cat" and "dog" is 3. "cat" and "at" is 1: you delete the c. "cat" and "cats" is 1. And "a cat" and "the cats" is 4. The challenge is that there are a huge number of different ways to insert and delete; if the strings are very long, there are way too many possibilities to just try them all. So how do we go about coming up with a solution? Any ideas? Yeah. [inaudible] simplify the problem by treating a deletion or an insertion as a substitution with an empty character [inaudible]. Yeah, let's try to simplify the problem a bit.
Building on what was just said: the general principle, let me write it down, is to reduce the problem to a simpler problem, which is hopefully easier to solve, and then keep doing that until you get something trivial. There are maybe two observations we can make. One is that technically we can insert into S, but inserting into S makes the problem larger in some sense, and that's not reducing the problem. However, whenever we insert into S, we only want to insert things that are in T; we want to cancel something out. We wouldn't insert a k for any reason; we'd insert an s, in which case it matches the s in T and we've reduced the problem. So we can think of inserting into S as equivalent to deleting from T. Does that make sense? The other observation is that we could start editing anywhere: here, then jump over there. But that just introduces a lot of ways of doing it that all result in the same answer. So why don't we proceed systematically from one end and try to chisel off the problem, let's say from the end. Okay, so: start at the end. So now we have this problem; I'll draw it in a little box here. Let's start at the end. Yeah, question. What's the reasoning used to reach that principle, start at the end? The question is why we are starting at the end. The idea is that if you start at the end, you have a more systematic and consistent way of reducing the problem.
You don't have to think about all the permutations of where you can delete and substitute. Why is it more systematic to go from right to left than from left to right? We could also do it left to right; the end or the start are both fine, I just picked the end. Yeah. How do we know that starting at the end gives us the optimal strategy? Yeah, the question is how we know that starting at one end can give you the optimal strategy. If you wanted to prove this rigorously it takes some work, but I'll give you an intuitive answer. Suppose you didn't start at the end, and you made some sequence of edits: insert here, delete there, and so on, applying all those operations to S. I could equivalently sort those edits by where they happen and then proceed from one end to the other, and I would arrive at the exact same answer. So without loss of generality, I can start at the end. Any other questions? Okay. Yeah. Instead of doing this, wouldn't a more viable approach be to try to recognize some patterns between the two strings S and T, like some sort of common pattern? Yeah, so the question is: maybe you can recognize some patterns, like, oh, "cat" appears in both, maybe those should be lined up. I guess these examples are chosen so that such patterns exist, but we want to solve the problem for cases where the pattern might not be obvious. We want it to work for all strings; maybe there is no pattern, and we still want an efficient algorithm. Yeah. Can't we just use dynamic programming? We go one character at a time: either we're doing a substitution, or it's the same character, or we have to insert, and then we keep going, and we remember each pair of strings we've reached, so that if we've already calculated it we don't have to do it again. Yeah, that's it. Great idea. Let's do dynamic programming; that's what I'm trying to build up to. So dynamic programming is a general technique that essentially allows you to express a complicated problem in terms of simpler problems. Let's start with this problem. If we start at the end and the two last letters match, then we can just immediately drop both and the answer is the same; we get a free ride there. But when they differ, we have several options. What could we do? Well, we could substitute: change the t to an s, and then the matching s's cancel. What does that leave us with? "a ca" versus "the cat". Okay, so that's substitution. What else can I do? Someone tell me something I can do. I can insert an s, right? But that's the same as deleting the s from T, so this leaves "a cat" versus "the cat"; let's call this one insertion (it's technically an insertion). And finally, what can I do? I can remove the t from S, leaving "a ca" versus "the cats". So this is deletion. Right now you're probably looking at this thinking, well, obviously you should do this one. But in general it's hard to tell; if I give you some arbitrary strings, who knows what the right answer is. So in general, how do you pick? Yeah.
In the second one, isn't the s supposed to be there for "cats"? You mean this one? I inserted an s into S, but then, because there are two s's at the end, they cancel out, and this is what's left. So you can think of it as really deleting from T. But in the original problem you said we're transforming S into T? Yeah, because of this equivalence, I'm re-framing the problem a little bit. Okay, so which one should I choose? Yeah. What about the substitution the other way? The substitution the other way, meaning... sorry, there are too many s's and t's here, which [LAUGHTER] is a bit unfortunate. You mean replacing the last s in "cats" with a t? You can think of that as equivalent: if you identify two letters that you want to make the same, you can replace one with the other, or the other with the one. Officially, we've been framing it as only editing S, which is why it's asymmetric. Okay, so which one of these: door a, door b, or door c? Yeah. Would you look at the overlap between S and T at every step, since there's "cat" in both of them? Yeah, so you could try to look inside, but remember, these strings might be really complicated; we want a simple, mechanized procedure to decide. What about the next letter? The next letter... let's pretend you can't see inside them. [LAUGHTER] Keep going with each of the different cases? Okay, so let's keep going.
I'm not going to draw everything, but you can also break each of these down further: maybe there are three actions here, and three actions here. At the end of the day, you hopefully reach a problem simple enough, where S equals T or something, that you're done. But then how do I know which to pick? Suppose someone just told you: I know this cost, I know this cost, and I know this cost. What should you do? Take the minimum. Yeah, you should take the minimum, right? Remember, we want to minimize the edit distance. There are three things you can do; each has the cost of doing that action, which is one (every edit has the same cost), plus the cost of continuing from the subproblem it leaves. So we just take the minimum over those. Yeah. [inaudible] How do we know that's the minimum number of edits we have to take? Yeah, I was trying to argue that going right to left is without loss of generality: if you went left to right, or in some other order, you could replay the edits in order and get the same answer. [inaudible] Okay, yeah, I think it works. So let's try to code this up and see if we can make this program work. I'm going to write editDistance. Can everyone see this? I'm going to define a function that takes two strings, and then I'm going to define a recurrence. Recurrence is a word I haven't really used yet, but this is really the way you should think about dynamic programs, this idea of taking complex problems and breaking them down. It's going to show up in search problems, MDPs, and games, so it's something you should really be comfortable with. So let's define the recurrence as follows.
Um, so remember at any point in time, I have, uh, let's say a sub problem, and since I'm going right to left, I'm only considering the first, um, "m" letters of "s" and the first "n" letters of "t". Okay, so recurse is going to return the minimum edit distance between two things, the first "m" letters of "s", and the first "n" letters of "t". Um, I'm gonna post this online so you guys don't have to, like, copy- try to copy this. Um, okay, so, um, okay, suppose I'm gonna- I'm gonna define this function. Uh, if I have this function what should I return? Recurse of-. [inaudible] So "m" is an integer, right? So "n" is an integer, so I'm going to return recurse of the length of "s" and the length of "t". Okay, so that's kind of, uh, the initial state. [OVERLAPPING] Sorry. Yup. Okay. Um, All right. So now you need to fill out this function. Okay, so let's- let's um, consider a bunch of cases. So here's some easy cases. Suppose that, um, "m" is zero, right? So I have- comparing an empty string with something that has "n" letters. So, what should the cost of that be? [NOISE] I heard some mumbling. [OVERLAPPING]. It should be "n" [NOISE] and symmetrically if "n" is 0 then result should be "m", um, and then now we come to the kind of initial case that we considered, which is the ends [NOISE] match, a match. So, if "s" of "m" minus 1, um, the last letter, you know, this is 0-based indexing. Um, so that's why there's a minus 1. So, if this matches "t" of "n" minus 1, [NOISE] then what should I do? [NOISE] So, now we reduce this to a sub problem, right? [inaudible] So, I have "m" minus 1 and "n" minus 1. Okay. And now comes the fun case which we looked at. So there's- um, in this case the last letter doesn't match. So, I'm gonna have to do some sort of edit, can't just let it slide. Yeah. Question. Would you- do you need a full "s" to "t" compare or "s" through "m" and then "t" through "n" to compare? Worse than doing a full s, a compare. [OVERLAPPING] rather than waiting until, um, first- Yeah.
-stream at the last slide than that. There- there's probably a way you can make this more efficient. I'm just gonna try to get the basic thing in there. Okay. So substitution. Okay. So what's a cost of a substitution? I pay 1 to do the substitution, but as a reward I get to, um, reduce the problem to m minus 1 and n minus 1, right? So I lop off a letter from s and I lop off a letter from t. So what else can I do? So I can, um, you know, delete. [NOISE] So that also costs 1. And when I delete, I delete from s, so m becomes m minus 1, and n remains the same. And then now you can think about the insertion, um, is n minus 1, right? Because remember insertion into s is deletion from t, that's why this is n minus 1. Okay. And then the result is just gonna be a minimum of, uh, all these things. Okay. Return result. Okay. So just, uh, and then, how do I call this function? Um, "a cat", "the cats". [NOISE] So let me print out the answer. Um, let's see if it works. Okay. Print out 4. Therefore, I conclude it works now. [LAUGHTER] I mean if you were doing this, uh, you would probably want to test it some more, but in the interest of time, I'll kind of move on. So let me just kinda refresh. Okay. So I'm computing this edit distance between two strings and we're gonna define a recurrence that works on sub problems, where the sub problem is the first m letters of s and the first n letters of t. And the reason I'm using integers instead of, um, strings is to avoid, like, string copying, um, implementation details, but it doesn't really matter. Um, so base cases. So you wanna reduce your problem to a case where it's- it's trivial to solve. Um, and then we have the last letter matches. And then we have the case where the last letter doesn't match and you have to pay some sort of cost. I don't know which action to take. So I'm gonna take the, you know, minimum of all of them. And then I call it by just calling, you know, recurse. Okay. So this is great, right? So now I have a working thing.
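Putting the pieces from that walkthrough together, here is a minimal sketch of the plain recursion. The function and variable names are my own reconstruction of what's on screen; every edit costs 1, as in the lecture:

```python
def edit_distance(s, t):
    # recurse(m, n): minimum edit distance between the first m letters of s
    # and the first n letters of t, working from right to left.
    def recurse(m, n):
        if m == 0:                    # s is exhausted: insert the remaining n letters
            return n
        if n == 0:                    # t is exhausted: delete the remaining m letters
            return m
        if s[m - 1] == t[n - 1]:      # last letters match: no edit needed
            return recurse(m - 1, n - 1)
        return min(
            1 + recurse(m - 1, n - 1),  # substitute s[m-1] -> t[n-1]
            1 + recurse(m - 1, n),      # delete s[m-1]
            1 + recurse(m, n - 1),      # insert t[n-1] (= delete from t)
        )

    return recurse(len(s), len(t))

print(edit_distance('a cat', 'the cats'))  # prints 4
```

This version recurses up to three times per call, which is the exponential blow-up discussed just below; it is only meant to show the recurrence itself.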
[NOISE] Um, let's try another test case. So I'm gonna make this. Um, so if I do times 10, this, uh, basically, uh, replicates this string 10 times. So it's a- it's a longer string. [NOISE] Okay. So now I'm gonna run it. [OVERLAPPING] Maybe I shouldn't wait for this. Is there a base case? Um, there is a base case, I- I think that it expanded- it's- what- what's wrong with this code? Very slow. Um, yes, it's very slow. Why is it slow? [BACKGROUND] Yeah, right? So- so I'm recursing. [NOISE] Every call recurses three times. So you kind of get this exponential, you know, blob. Um, so there's kind of a- how do you solve this problem? [BACKGROUND] Yeah. You can memoize. I think I heard the word memoize, which is one way to kind of think about it. Memoization plus, um, I guess, recurrences is dynamic programming, I guess. Um, so I'm gonna show you kind of this, um, way to do it which is pretty, uh, uninvasive. Um, and generally I recommend people: get the slow version working [NOISE] and then try to make it faster. Don't try to be, you know, too slick at once. Okay. So I'm gonna make this cache, right? And I'm gonna say if m, n is in the cache, then I'm gonna return whatever's in the cache. So cache is just a dictionary mapping, um, the key, which is, um, an identification of the problem I'm interested in solving, to the result, which is the answer that I computed. So if I already computed it, I don't need to compute it again, just return it. And then at the end, if I have to compute it, then, um, I have to put this in the cache. [NOISE] Okay? So three lines or four lines, I guess. Yeah. [BACKGROUND] [NOISE] Yeah. That's a great point. Uh, this should be outside of the recurse function. Yeah. Glad you guys are paying attention. Um, otherwise, yeah, it would do basically nothing. Any other mistakes? [LAUGHTER] Yeah. Um, there are also function decorators that, like, implement memoizing for you.
In this class, are you okay if we use that or would you rather us like make our own in this case? Um, you can use the deco- you can be fancy if you want. Okay. Um, yeah. But- but I think this is, you know, pretty transparent. Easy for learning purposes. Okay. So let's run this. So now it runs instantaneously as opposed to- I actually don't know how long it would have taken otherwise. Okay. And sanity check: 40 is probably the right answer, because 4 was the original answer, multiplied by 10. Okay. Any other questions about this? [NOISE] So this is an example of, you know, kind of basic, uh, dynamic programming, where, uh, you solve a problem by trying to formulate it as a recurrence: a complicated problem in terms of smaller problems. Um, and like I said before this is gonna kind of show up, um, over and over again in this class. Yeah. [BACKGROUND] Yeah. So the question is why does this reduce, uh, redundancy. [NOISE] Is that right? Um, so maybe I can do it kinda pictorially. Um, if you think about, let's say, you have a, um, a problem here, right? And this gets, um, you know, reduced to, um, um, I'm just making kind of an arbitrary, um, diagram here. So this problem gets reduced to these two. And this problem gets reduced to these two, um, and- and so on, um, right? So if you think about- if you didn't have memoization, you would just be paying for the number of paths. Every path is kind of something you have to compute from scratch. Whereas, if you do memoization, you pay the number of nodes here, and a lot of these are shared, like here. Um, you know, once you compute this, no matter if you're coming from here or here, you're kind of using the same value. Okay. So let's- let's move on. So the second problem, um, we're gonna talk about, uh, has to do with continuous optimization. [NOISE] And the motivating question here is how do you do, um, regression? Which is kind of the bread and butter of, um, you know, machine learning here.
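For reference, here is the memoized version assembled in one place, a sketch of what the on-screen code ends up as (the cache lives outside recurse, as the student pointed out):

```python
def edit_distance(s, t):
    cache = {}  # (m, n) -> answer; must live OUTSIDE recurse, or it does nothing

    def recurse(m, n):
        if (m, n) in cache:            # already computed: just return it
            return cache[(m, n)]
        if m == 0:
            result = n
        elif n == 0:
            result = m
        elif s[m - 1] == t[n - 1]:     # last letters match
            result = recurse(m - 1, n - 1)
        else:
            result = min(
                1 + recurse(m - 1, n - 1),  # substitute
                1 + recurse(m - 1, n),      # delete from s
                1 + recurse(m, n - 1),      # insert into s (= delete from t)
            )
        cache[(m, n)] = result         # remember the answer before returning
        return result

    return recurse(len(s), len(t))

print(edit_distance('a cat' * 10, 'the cats' * 10))  # prints 40, instantly
```

Without the cache this call would take an astronomically long time; with it, the work is proportional to the number of (m, n) sub problems, not the number of paths through them.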
[NOISE] So here we go. Regression. Okay. So imagine you get some points. Okay, so I give you a point which is 2, 4. Then I give you another point, let's say 4, 2. And so these are data points, you want to, let's say, predict housing price from, you know, square footage or something like that. You want to predict health score from, um, your blood pressure and some other things. So this is pretty common in machine learning. And the question is how do you fit a line? I'm going to consider the case where your line has to go through the origin, just for simplicity. Um, so you might want to like find, you know, a fit. Two points is maybe kind of a little bit degenerate, but that's the simple example we are going to work with. In general you have lots of points and you want this to fit the line that best kind of, uh, is close to the points. Okay, so how do you do this? So there's a principle called least squares, which says, well, if you give me a line which is given in this case by a slope w, I'm going to tell you how bad this is. And badness is measured by looking at all the training points, and looking at these distances. Right. So here I have, you know, this particular, uh, a particular, let's say point x_i. If I hit it with a w, then I get, basically the, uh, you know, the y-intercept here, not the y-intercept but the- like the y value here. That's my prediction. The real value was y_i, which is, you know, up here. And so if I look at the difference, I want that difference to be zero. Right. So in, in least squares, I square this, and I say, I want this to be as small as possible, right. Now, this is only for one point. So I'm going to look at all the points. Let's suppose I have n points, and that's a function that I'm going to call f of w, which basically says, for a given weight vector, which is a slope, give me a number that characterizes how bad of a fit, um, this is. Where 0 means that I fit everything perfectly, and large numbers mean that I fit poorly. Okay? 
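As a quick sketch of the objective just described, with the two points from the board ((2, 4) and (4, 2)); the prediction for input x is w times x, and f(w) sums the squared errors:

```python
# f(w) = sum_i (w * x_i - y_i)^2: total squared error of the line y = w * x
points = [(2, 4), (4, 2)]

def f(w):
    return sum((w * x - y) ** 2 for x, y in points)

print(f(0))    # prints 20: (0 - 4)^2 + (0 - 2)^2
print(f(0.8))  # about 7.2: (1.6 - 4)^2 + (3.2 - 2)^2
```

A perfect fit would give f(w) = 0; here no single slope through the origin fits both points, so the best achievable value is positive.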
All right. So, so that's your regression. So how do I solve a regression problem? So how do I optimize this? Can you do this in your head? So if I actually had these two points, what should w be? Okay, it doesn't matter. We'll, we'll compute it. So how do we go about doing this? So one principle, which is maybe another general takeaway is, abstract away the details. Right. Um, this is also true with the dynamic programming, but sometimes, you know, you get- if you're too close to the board, and you're looking at, oh man, these, these points are here and I need to fit this line. How do I do that? You kind of get kind of a little bit stuck. Why don't we think about this f, as say some function? I don't, I don't really care what it is. And let's plot this function. Okay. So now this is a different plot. Now, this is, ah, the weight, and this is f of w. [NOISE] Always label your axes. And let's say this function looks like this. Okay. So which means that for this slope, I pay this, you know, amount, for this slope, I pay this amount and, and so on. And what I want to do, I want to minimize f of w, which means, I want to find, um, the w which, um, has the least value of f of w, right? Question? Okay. So you take the derivative. So what is the derivative giving you? It tells you where to move, right? So if you look over here, so you can- in general, you might not be able to get there directly, in this actually particular case you can because you can solve it in closed form, but I'm going to try to be more general. Um, so you start here. This, this derivative tells you, well, the function is decreasing if you move to the right. So then you should move to the right. Whereas over here, if you end up over here, the derivative says, the function is decreasing as we move to the left. So you should move to the left, right? So what I'm going to introduce is this, uh, algorithm called gradient descent. It's a very simple algorithm. 
It basically says, start with some place, and then compute the derivative, and just follow your nose. Right? If the derivative says it's negative, then just go this way. And now you're on a new point, you compute the derivative again, you descend, and now you compute it again. And then maybe you compute the derivative and it says keep on going this way and maybe you overshoot, and then you come back. And then, you know, hopefully you'll end up at the minimum. Okay. So let's try to see what this looks like in code. So gradient descent is one of the simplest algorithms, but it really underlies essentially all the algorithms that you people use in machine learning. So let's do points. We have two points here. Um, and I'm going to define, um, some functions. Okay, so f of w, so what is this function? So I'm going to sum over all the different, um, you know, and basically at this point it's converting math into Python. So I'm going to look at all the points. So for every x, y, what the model predicts is w times x minus y. And if I square that, that's going to be the error that I get on that point. Then, if I sum over all these errors then I get my objective function. Okay. Array of- so yeah. So you can put array here if you want, but it doesn't matter. It's, it's actually fine. Okay. So now I need to compute the derivative. So how do you compute the derivative? So if your calculus is a little bit rusty, you might want to brush up on it. So what's the derivative? Re- remember we're taking the derivative with respect to w, right? There's a lot of symbols here. Always remember what you're taking derivative with respect to. Okay. The derivative of the sum is the sum of the derivative. So now I need to take the derivative of this. Right. And what's the derivative of this? Something squared, um, you bring the two down here, and now you multiply by the derivative of this. And what's the derivative of this? Should be x. Right? 
Because this y is a constant, and the derivative of w times x with respect to w is x. Okay. So that's it. Okay, so now let's do gradient descent. Let's initialize with w equal 0. Then I'm going to just, um, you know, iterate a hundred times. Normally, you would set some sort of stopping condition, but let's just keep it simple for now. Okay, so at every moment, I'm going to- I have a w, I can compute the value of the function, and also take the gradient, the derivative. Gradient just means derivative in higher di- dimensions, which we'll want later. Um, okay. And then, what do I do? I take, uh, w, and I subtract the, the gradient. Okay. So remember- okay, I'll be out of here. Okay. So, uh, I take the gradient. Remember, the gradient tells me where the function is increasing, so I want to move in the opposite direction. And eta is just going to be this, uh, step size to, um, keep things under control. We'll talk more about that next time. Okay, so now, I want to print out what's going on here. So for each iteration t, print out the function value and w. Okay. All right, so let's run gradient descent. And, um, so you can see the iterations: we first start out with w equal 0. Then it moves to 0.3, then it moves to 0.79999999 and then it looks like it's converging to 0.8. And meanwhile, the function value is going down from 20 to 7.2, which happens to be the optimal value. So the correct answer here is w equals 0.8. Okay, so that's it. Next time we're going to keep, uh, we're going to start on the machine learning lecture.
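Here is the whole gradient descent loop sketched end to end. The step size eta = 0.01 is my own choice, since the value used on screen isn't stated in the transcript; any sufficiently small step converges on this problem:

```python
points = [(2, 4), (4, 2)]

def F(w):
    # least-squares objective: sum of squared errors
    return sum((w * x - y) ** 2 for x, y in points)

def dF(w):
    # derivative of F: sum of 2 * (w*x - y) * x, by the chain rule
    return sum(2 * (w * x - y) * x for x, y in points)

# Gradient descent: repeatedly step in the direction of the negative derivative.
eta = 0.01  # step size (assumed value, chosen small enough to keep updates stable)
w = 0
for t in range(100):
    w = w - eta * dF(w)
    print(f'iteration {t}: w = {w}, F(w) = {F(w)}')
# w converges to 0.8, where F(w) = 7.2
```

On this objective, dF(w) = 40w - 32, so setting it to zero confirms the minimizer w = 0.8 that the loop converges to.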
Stanford CS221: Artificial Intelligence, Principles and Techniques (Autumn 2019). Game Playing 1: Minimax, Alpha-beta Pruning.

All right. Let's start guys. Okay. So a few announcements before we start. So, um, if you have- if you need OAE accommodations, please let us know if you haven't done that already. So you need to let us know by October 31st because we need to figure out the alternate exam date. So, uh, we'll get back to you about the exact like details around the alternate exam date, but let us know by October 31st. Um, project proposals are also due this Thursday. So do talk to the TAs. Do talk to us, come to office hours, all that. Okay. All right. So today, I wanna talk about games. So, um, so we've started talking about this idea of state-based models, like, the fact that if you wanna have state as a way of representing, uh, everything about- everything that we need to plan for the future. We talked about search problems already. We have talked about MDPs where we have a setting where we are playing against nature and, and nature can play, uh, like probabilistically. And then based on that, we need to respond. Uh, and today, we wanna talk about games. So, so the setup is, we have two players playing against each other. So we're not necessarily playing against nature which can act probabilistically. We're actually playing against another intelligent agent that- that's deciding for, for his own or her own good. So, so that's kind of the main idea of, of games. All right. So, so let's start with an example. So this is actually an example that we are gonna use throughout the lecture. All right. So the example is, we have three buckets. We have A, B and C. And then you are choosing one of these three buckets. And then I choose a number from the bucket. And the question is, well, your goal here is to maximize the chosen number and the question is, which bucket would you use? Okay.
So, so how many of you would choose bucket A? No one trusts me, okay [LAUGHTER] No one trusts me, good. How many of you would choose B? Okay. So now, now people don't trust me [LAUGHTER]. How many of you choose C? Okay. So, so there's a number of people there too. So, so how are you making that decision? So the way you are making this decision is, if you choose A, you're basically assuming that I'm not playing like, like try- I'm not trying to get you. I might actually give you 50. And if I give you 50, that'll be awesome. And you have this very large value that you are trying to maximize. If you think I'm going to act adversarially and go against you and then try to minimize your, your number, then you're going to choose bucket B, right, because, because worst-case scenario, I'll choose the, the lowest number of the bucket and, and in bucket B, the lowest number is one which is better than minus 50 and minus 5. So, so if you're assuming I'm trying to, like, minimize your good, then you're gonna choose bucket B. And if you have no idea how I'm playing and, and you're just assuming maybe I'm acting, uh, stochastically and maybe I'm, like, flipping a coin and then based on that deciding like what number to give you, you might choose C because in expectation, C is not bad, right? Like, C, like, if you just average out the numbers in A and B and C, the average value for A is 0, for B it's 2, and then for C it's, um, 5. Right, so, so, so if I'm playing stochastically, you might say, well, I'm probably going to give you something around 5. So you would pick C. Okay. So, so today we wanna talk about these different policies that you might choose in these settings and how we should model our opponent and how we formalize these problems as game problems. So this is an example that, that we just started. Okay. So, so to- the plan is to formalize games, talk about how we compute values in the setting of games.
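One way to check the three strategies numerically. I'm assuming bucket contents consistent with what's said above (A's worst case is -50 and its average 0, B's worst case 1 and average 2, C's worst case -5 and average 5); the exact numbers on the slide may differ:

```python
# Assumed bucket contents, reconstructed from the worst cases and averages above.
buckets = {'A': [-50, 50], 'B': [1, 3], 'C': [-5, 15]}

for name, values in buckets.items():
    worst = min(values)                    # adversarial opponent picks the minimum
    expected = sum(values) / len(values)   # random opponent picks uniformly
    print(name, worst, expected)

# Against an adversarial opponent: pick the bucket with the best worst case.
best_vs_adversary = max(buckets, key=lambda b: min(buckets[b]))
# Against a random opponent: pick the bucket with the best average.
best_vs_random = max(buckets, key=lambda b: sum(buckets[b]) / len(buckets[b]))
print(best_vs_adversary, best_vs_random)  # prints: B C
```

These two rules are exactly the minimax and expectimax values for this one-move game, which the lecture formalizes next.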
So we're gonna talk about expectimax and minimax. And then towards the end of the lecture, we're gonna talk about how to make things faster. So we're gonna talk about evaluation functions as a way of making things faster, uh, which is using domain knowledge to, to define evaluation functions over nodes. We're also gonna talk about alpha-beta pruning, which is a more general way of pruning your tree and making things faster. Okay. All right. So that's the plan for today. Okay. So we just defined this game and a way to, to go about this game is to create something that's called a game tree. A game tree is very similar to a search tree. So this might remind you of the search trees we talked about like two weeks ago, right. So, so the idea is, we have this game tree where we have nodes in the- in this tree and each node is a decision point of a player. And we have different players here, right, like I was playing or you were playing or we have two different people, like, playing here. So each of these decision nodes belongs to one of the players, not both of them. And then each root to leaf path is going to be a possible outcome of the game. Okay. So, like, it could be that your decision was to pick bucket A and then I'm choosing minus 50, so that path is going to give us one possible outcome of how things can go. Okay. So, so that is what the tree is basically representing here. Okay. So the, the nodes in, in the first level are the de- decisions that I was making and then the root node is the decision that you were making in this setting. So if we were to formalize this a little bit more, we're gonna formalize this problem as, as a two-player zero-sum game. Okay. So, so in this class, a- at least, like, today, we are going to talk about two-player games where we have an agent and we have an opponent. And then we are going to talk about policies and values and for all of those things, think of you- yourself as being the agent.
So you're playing for the agent. You're optimizing for the agent. Opponent is this opponent that's playing against you. Okay. So we are also going to, to, like, today, we are going to talk about games, uh, that are turn-taking games. So we're going to talk about things like chess. We're not talking about things like rock-paper-scissors. We will talk about that actually next time when we have, like, like, simultaneous games where you're playing simultaneously. Today we are talking about turn-taking settings. Two-player turn-taking settings. Full observability, we see everything. We are not talking about, like, games like poker where you don't necessarily see, like, you have partial observation and you don't necessarily see the hand of your opponent. Full observation, two-player and also zero-sum games. And, and what zero-sum means is, if I'm winning and if I'm getting, like, $10 from winning, then my opponent is losing $10. So, so the total utility is going to be equal to zero. If I win some amount, my opponent is losing the same amount. Okay. All right. So, so what are the things that we need when we define games? So, so we need to know the players. We have the agent, we have the opponent. In addition to that, you need to define a bunch of things. This should remind you of the search lecture or the MDP lecture. So you might have a start state, as S start. We have actions which is a function of state, which gives us the possible actions from state S similar to before. You have a successor function similar to search problems. So a successor function takes a state and action and it tells us what's the resulting state you're going to end up at. And this- and, and you have an end- this end function which checks if you're in an end state or not. And the thing that's different here, there are two things that are different here. One is this utility function. And the utility function basically gives us the agent's utility at the end state. Okay. 
So one thing to notice here is, is that the utility only comes at an end state. So after you finish the game, like, I've played my chess and I won chess now and this is this chess game. And then, then I get my utility. Like, as I'm making moves, like, through my, my chess game, I'm not getting, getting any utility. Like, you only get the utility at an end state. And, and the way we're defining the utility, is we're defining it for the agents because again we are, we are replaying from perspective of the agent. So, so what would be the utility of the opponent? Minus that, right. So, so negation of that would be the utility of opponent. Okay. I've heard about partially observable Markov decision process. Is this, like, kind of, what it is? Like, is this partially observable? Okay. So the question is, is this partially observable Markov decision process? This is not a partially observable Markov decision processes. Um, there are classes that talk about, like there's- this decision under uncertainty by Mykel Kochenderfer's class that actually teaches that. So you should, you should, you should take classes on that. This is not a partially observable Markov decision process. This is fully observable. You have two players playing against each other. It's a very different setup. [inaudible]. So, so the, the question is, are there any randomness here? And, and so far, I haven't discussed any randomness yet. Later in the lecture, I'll talk actually about the case where there might be a nature in the middle that acts randomly and then how we go about it. But so far, two players playing against each other. Okay. All right. And then the other thing that we need to define when you are defining a game, um, is, is the player. So, so, so player is a function of state. And basically tells us who is in control, like, who is playing now. So in the game of chess, like, whose turn is it now. 
And then that is the function that, that you are going to define when we are formally defining, um, that game. Okay. All right. So, so let's look at an example. So we have a game of chess. Players are white and black. Let's say you're playing for white. So the agent is white, the opponent is black. And then the state S can represent the position of all pieces and whose turn it is. So, so that is going to what the state is representing. So whose player's turn it is and then the position of all pieces. So actions would be all the legal chess moves that player S can take. And then IsEnd basically checks if the state is checkmate or draw. That is what it is checking. Okay. So, so then what would the utility be? The utility will be, will be if you're, like, you're only going to get it when you win or when you lose or, or if there's a draw. So the way we are defining it is, it's going to be let's say, plus infinity if white wins because, because the agent is white and, and it's going to be zero if, if there is a draw and then it's going to be minus infinity if black wins. Okay. Yeah. So, so that was all the things that we would need to define. Yes. [inaudible] What- why do we have, why do we have whose turn it is in the state. Uh, so that's one way of actually, like, extracting the player function. So, so the way you can define a player function is a player is a function of state. So the state already needs to encode whose turn it is. So you can kind of extract that from the player. You said the, the utility would kind of be negative utility for the p agent. Is that assuming that they're both taking the same actions the whole time? No. So, so, so this is turn-taking, right? So I take an action and then the opponent takes an action and then the agent takes an action. The opponent takes an action and then at the very end of the game then then you get the utility and then the opponent gets- gets the negative of that utility. But the actions could be very different. 
Policies could be very different. And we'll talk about how to come up with that. So why is that condition variable, so what happens if white wins, you get plus infinity, but if black wins, if black wins, you get negative infinity, but like, when you lose- you hav- you don't have zero-sum game. We'll talk about that next lecture actually a little bit. So, so I'm, I'm talking about zero-sum games here because the algorithms you are talking about are for zero-sum games. Like we are talk- going to talk about min- mini-max type policies. Where I'm minimizing and the agent is maximizing. So I'll get back to that if, if I haven't answered that. Like we can talk about it after the class but also next lecture, we'll talk about more variations of games. So- but for now, I'm assuming a bunch of simplifying assumptions about this game. The assumption is that like if white wins, it's negative infinity, but if white wins, black gets 0 utility, [inaudible]. [NOISE] Uh, yeah. So these utilities need to add up to 0. If white wins, maybe white gets 10, but black gets minus 10. So, so like they, they need to add up. Okay. All right. So and then kind of the characteristics of games that we have already discussed are two main things. One is that all utilities are at end state. So throughout this path you are not getting new utilities as opposed to like things like MDPs where we were, we were getting rewards like throughout the path. But here, like the utility only comes in at the very end. At the end state. And then the other thing about it is that different players are in control at different states, right. Like if you are in state, you might not be able to control thing- control things. It might be your opponent's turn and you might not be able to do anything. Okay? So those are kind of the two main characteristics of games. All right. So let's look at a game that you're going to play. All right. So the game is a halving game. So we start with a number N. 
And then the player- the players take a turn and they can do two things. They can either subtract 1. So they can decrement N, or they can replace N with N over 2. So they can divide or subtract. Okay? And the player that's left with 0 is, is going to win. Okay. So, so that is, that is the setup. Is that- is everyone following that? So, so let's try to formalize the game and then after that you want to figure out what is a good policy to, to do it. So, so right now let's just try to- let's just try to formalize this. So you know, like, what all the different things for the model are. So let's just have a new file. We are going to define this game. So it's a halving game. Okay, so let's, let's get this. All right. So we're initializing with N. So we're starting with some number N. So what is our state? Our state is going to encode whose turn it is and that number N. Okay. So we have a player. Let's say our players are either plus 1 or minus 1. That's how I'm defining, like, which player it is. So the start state. Let's say player plus 1 plays with N. So, so that is plus 1 and N. And then we need to define IsEnd. Okay. So what we do is the end check. Well we take the state. We decouple it into player and number. And if the number is equal to zero then that is when the game ends. That's our ending condition. Okay. How about utility? Well we get the utility at an end state. So again I take a state. I decouple it into player and number. I make sure that we are in, in, in an end state so we assert that number is equal to 0 because that kind of defines if you're in an end state or not. And then the utility I'm gonna get, if I'm winning I'm gonna get infinity. If I'm not winning I'm gonna get minus infinity. And the way I'm defining that here is by just doing player times infinity. Because player- I'm the agent, I'm the player plus 1. The opponent is player minus 1. That way, like, if, if minus 1 is winning I'm gonna get minus infinity. Okay?
The actions that we can do is we can subtract 1, or we can divide. Divide by 2. I mean subtract and divide are the main actions. And player, this player function again takes the state. I'm gonna decouple the state into player and number and just return the player. That's how I know whose turn it is. And then we need to define the successor function. The successor function takes a state and an action and tells us what state you're going to end up at. So again a state. I'm going to decouple that into a player and a number. And then the actions I can take are two things. I can either subtract 1 or I can divide by 2. So if I'm subtracting then I'm going to return a new state which is minus player, cause now it's minus 1's turn or plus 1's- like, it's minus whoever's turn it is now. And then I'm gonna do number minus 1. If the action is divide, we're gonna return the new player which is minus player, and then number divided by 2. Okay? That is it. So, so we just defined this game, okay. Yeah. All right. So, so that was my game. We're gonna play this game in a little bit. But quickly, before playing it, let's talk about what is a solution to a game. Like what are we trying to do in a game. So if you remember MDPs, the solution there was the policy. So a policy was a function of state. It would return the action that you need to take in that state. So similar to MDPs here we have policies. But, but, the thing is I have two players. So policy should depend on the player too. So I have Pi of P which is the policy of player P. And I can define it similar to before. It can be a policy as a function of a state and it can return just an action. And this would be a deterministic policy. Like, deterministically, if I'm in a state, the policy is going to tell me what action to take, okay. We can also define stochastic policies.
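Here is the halving game assembled into one self-contained sketch, following the definitions from the lecture: the state is a pair (player, number), players are +1 (the agent) and -1 (the opponent), and I've assumed integer division for the divide action:

```python
import math

class HalvingGame:
    def __init__(self, N):
        self.N = N

    def startState(self):
        # player +1 starts, with the full number N
        return (+1, self.N)

    def isEnd(self, state):
        player, number = state
        return number == 0

    def utility(self, state):
        player, number = state
        assert number == 0
        # the player left with 0 wins: +inf if that's the agent (+1), -inf otherwise
        return player * math.inf

    def actions(self, state):
        return ['-', '/']  # subtract 1, or divide by 2

    def player(self, state):
        player, number = state
        return player

    def succ(self, state, action):
        player, number = state
        if action == '-':
            return (-player, number - 1)    # turn passes to the other player
        elif action == '/':
            return (-player, number // 2)

game = HalvingGame(N=15)
print(game.succ(game.startState(), '-'))  # prints (-1, 14)
print(game.succ(game.startState(), '/'))  # prints (-1, 7)
```

Note how succ negates the player on every move: that is what encodes turn-taking, and it is why the state has to carry whose turn it is.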
So what stochastic policies would do is take a state and an action and return a number between 0 and 1, which is the probability of taking that action. So policy Pi of a state and action basically returns the probability of player P taking action A in state S. So if you remember the bucket example, maybe half the time I would pick the number on the right and half the time I would pick the number on the left. That would be a stochastic policy, right? I'm not deterministically telling you what the action is; I'm coming up with a stochastic way of telling you what policy I'm following, okay? So we have deterministic policies and stochastic policies. In our game we could follow either one of them. Under what case would you want a stochastic policy versus a deterministic policy? Uh, can you speak up? Yeah. Under what case would you want a stochastic policy versus a deterministic policy? We'll cover that a little bit more next time, depending on what games you are in. Stochastic policies give us some properties and deterministic policies give us some other properties. Right now we're just defining them as things that could exist. And we could think our opponent is acting deterministically if we know exactly what they are doing. Sometimes I have no idea; maybe I've learned it somehow and there's some randomness there, and then I'm going to use some stochastic policy for how my opponent is going to play against me. But what we get out of stochastic versus deterministic policies, we'll cover a little bit more next time. Okay. All right. So now that we know that it's the policy that we want to get, let's try to write up a policy for this game. And I'm gonna define a human policy.
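In symbols, the two kinds of policies just described can be written as follows (the notation matches the lecture's Pi of P):

```latex
% Deterministic policy: maps a state to a single action
\pi_p(s) = a

% Stochastic policy: maps a state-action pair to a probability
\pi_p(s, a) = P(\text{player } p \text{ takes action } a \text{ in state } s),
\qquad \pi_p(s, a) \in [0, 1], \qquad \sum_{a} \pi_p(s, a) = 1
```

A deterministic policy is the special case where one action gets probability 1 and every other action gets 0.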
And what I mean by that is this is going to come from the human. That means one or two of you guys. So I need two volunteers for this, but let's quickly write this up first. So what is a human policy? It's just going to get the input from the keyboard. So what I'm going to type up here is: get the action from the keyboard, and that is going to be the action that we are picking. Remember the actions are either divide or subtract 1. And if the action is valid, then return that action. That sounds like a good policy. Okay. So that is a human policy. So now what I wanna do is have this game where they're actually playing against each other. So I want to have policies for my agent; my agent is plus 1, and that's going to be a human policy. And for my opponent, I'm gonna say my opponent is also a human policy. So I just want two humans to play against each other. Okay. And the game is, let's say, we are starting with 15. So our number that we're starting with is 15. Okay? All right, so that looks right to me. So how do we ensure that we are progressing in the game? If you're not in an end state, you want to progress. So let's print a bunch of things here. Let's print out the state. Okay. Let's get the player out of the state, because again the state encodes a player. Let's get the policy, because we have defined these policies for both of the players, so we can get the policy for whoever is playing right now. And then the action comes from the policy in that state. And then the new state you're going to end up at is just the successor of the current state and action. So I'm just progressing. This while loop here just figures out what state we are in, what policy we are following, and where we are going to end up, and that's the successor function. Okay. And then at the very end I'm just going to print out the utility.
So that's either plus infinity or minus infinity. And that sounds good. So, all right. So let's actually- All right. So who wants to play this? Okay, that's one person. You're the agent, you're player plus 1. Opponent is three people [LAUGHTER]. I think you were first. By [inaudible] yeah. Okay, so you're minus 1. All right, so let's, uh, play this game. Is this large enough? Yeah. Okay. All right, so player plus 1: we are at number 15. Do you wanna, uh, decrement? Okay. So minus 1. So we are at player minus 1, we're at 14. What do you wanna do? Divide. Divide. Okay. You have a policy [OVERLAPPING] [LAUGHTER] [BACKGROUND] Minus 1. Divide. Divide. [LAUGHTER] [LAUGHTER] Yeah, I don't really, yeah. So you kind of get the point, right? So wait, did I make you lose now? [LAUGHTER] Sorry. My bad. But you get the utility at the end, and basically you can see this interface. I was going to try another pair, but we don't have that much time; the code is online, so if you wanna play with it, just play with it. We will have one other version playing with an automated policy later. Um, all right. So we're back here. Let me close this. Um, all right. So we just saw how we can have human policies playing against each other. And again, for a policy, you give it a state and an action and it gives you a probability, or you give it a state and it gives you an action. So a deterministic policy is just an instance of a stochastic policy, right? If you have a deterministic policy, you can treat it as a stochastic policy where with probability 1 you're picking an action. So, all right. So now we wanna talk about how we evaluate a game. Let's say that someone comes in and gives me the policy of an agent and an opponent, and I just want to know how good that was. And again, if you remember the MDP lecture, we started with policy evaluation.
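The while loop just described can be sketched as below. The lecture drives both players from the keyboard; here two scripted deterministic policies stand in (my substitution) so the loop runs non-interactively, and the game is restated compactly on (player, number) tuples so the snippet is self-contained.

```python
import math

# The halving game, written compactly; this mirrors the lecture's formalization.
def startState(N):      return (+1, N)
def isEnd(state):       return state[1] == 0
def utility(state):     return state[0] * math.inf   # player left with 0 wins
def succ(state, action):
    player, number = state
    return (-player, number - 1) if action == '-' else (-player, number // 2)

# Scripted stand-ins for the lecture's keyboard-driven human policies.
def alwaysSubtract(state): return '-'
def alwaysDivide(state):   return '/'

def playGame(N, policies):
    """Advance the game until an end state using each player's policy,
    then return the final utility (the lecture's while loop)."""
    state = startState(N)
    while not isEnd(state):
        player = state[0]                 # the state encodes whose turn it is
        action = policies[player](state)  # ask that player's policy
        state = succ(state, action)       # progress via the successor function
    return utility(state)

result = playGame(15, {+1: alwaysSubtract, -1: alwaysDivide})
print(result)  # → -inf (the opponent is left with 0 and wins)
```

Swapping the two policies — agent divides, opponent subtracts — reaches (+1, 0) instead, so the agent wins and the utility flips to +inf.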
So in the MDP lecture, we started with this idea that someone gives me the policy and we just want to evaluate how good that is, and we're doing something exactly analogous here. Someone comes in and tells me that my agent is going to pick bucket A; that is what my agent is going to do all the time. And someone comes in and says, "Well, my opponent is going to act stochastically and, with probability one-half, give me one of those numbers." Okay? So these are the two policies that we are going to have. So the question is: how good is this? So going back to the game tree, what is really happening is my agent is going to pick this one, right? Because he's going to pick bucket A. So with probability one we end up here, and with probability zero we end up in any of these other buckets. And then my opponent is going to stochastically pick either minus 50 or 50. Okay? So if my opponent is picking minus 50 or 50, then the value of this node is just the expectation of that, which is 0. 50% of the time it's minus 50, 50% of the time it's 50, so the value of this node is 0. And then if my agent is picking A, the value of this node is going to be 0. Okay? So you can see how the value is going to propagate up from the utility. We had the utilities at the leaf nodes, but we can actually compute a value for each one of these nodes if I know what the policies are. If I know who's following which policy, I can compute these values and go up the tree. Okay? And so in this case, I can say the value of the start state, if I'm evaluating this particular policy, is going to be equal to 0. Okay? All right. So someone gave me the policy, and I evaluated the value at the start state. So in general, as I was just saying earlier, this is similar to policy evaluation.
This is similar to the case where someone gives me the policies and I evaluate how good the situation is. And you can write a recurrence to actually compute that. So I'm going to write the recurrence here maybe. So we want to compute this value, and this value is evaluating a given policy, and it's a function of state. Well, what is that going to be equal to? It's going to be equal to utility of S if you're in an end state. Otherwise, I have access to the policy of my opponent and the policy of my agent, so I can just do an expected sum over all possible actions. If Player of S is the agent, I'm looking at the policy of the agent — let's say it's a stochastic policy — times V eval of the successor state, Succ of S and A. And this is if my player is the agent; I'm just gonna write Player of S is equal to agent. What happens if my player is the opponent? I'm gonna do the same thing. I have access to the policy of the opponent — someone gave this to me — so I'm again going to do a sum over all possible actions of the policy of the opponent of state and action, times the value of the successor state, Succ of S and A. And this is the case where my player is the opponent. So this is the recurrence that we are going to write, and it's kind of intuitive. We have seen this in search too: you start with the utilities at the leaf nodes and you just push that back up based on what your policies are telling you, like which edges of the tree you are taking with what probability. Okay? Does this make sense? All right. Okay. So that was evaluating the game. But what if now I want to solve for what the agent should do? Like, I'm the agent, I care about figuring out what my Pi agent is. I don't know what my Pi agent is.
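Written out, the policy-evaluation recurrence just described on the board is:

```latex
V_{\text{eval}}(s) =
\begin{cases}
\text{Utility}(s) & \text{if } \text{IsEnd}(s) \\[4pt]
\displaystyle\sum_{a \in \text{Actions}(s)} \pi_{\text{agent}}(s, a)\, V_{\text{eval}}(\text{Succ}(s, a)) & \text{if } \text{Player}(s) = \text{agent} \\[4pt]
\displaystyle\sum_{a \in \text{Actions}(s)} \pi_{\text{opp}}(s, a)\, V_{\text{eval}}(\text{Succ}(s, a)) & \text{if } \text{Player}(s) = \text{opp}
\end{cases}
```

Both non-terminal cases are expectations, because both policies are given; nothing is being optimized yet.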
I need to figure out what sort of policy I should be following. And that takes us to this idea of expectimax, which is basically: if I'm in a scenario where I know what my opponent does — so I'm still assuming I know what my opponent does — what would be the best thing that I should be doing as an agent? Okay? What would be the best thing I should do? Like in the bucket example, if you knew the opponent was acting probabilistically, what would you do? Pick the action that gives you the maximum value. So you'd pick the action that gives you the maximum value, because you're trying to maximize your own value. So if that is the case, then this recurrence needs to change, right? The way it changes is, I'm going to call this new value the value of the expectimax policy. Okay? So I'm going to write everything on top of this. This value eval — I'm not evaluating anything anymore; I want to actually figure out what my agent should do — I'm gonna call it expectimax. And since I know the policy of my opponent, I'm not changing anything in the opponent case; I'm just going to compute that sum. But now I want to figure out what the agent should do, and what should the agent do? Well, the agent should do the thing that maximizes this value. So I'm going to erase this sum with the policy, because I don't have that policy, and instead the agent takes the max of this value over all possible actions. So this should remind you of value iteration. If you remember value iteration from the MDP lecture, we weren't evaluating things, right? We were trying to maximize our value. And that's analogous to what we are doing here: we're trying to figure out the policy the agent should take that maximizes the value, under the scenario that I know what the opponent does.
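Making that change to the agent case — max over actions instead of an expectation under a given agent policy — gives the expectimax recurrence:

```latex
V_{\text{exptmax}}(s) =
\begin{cases}
\text{Utility}(s) & \text{if } \text{IsEnd}(s) \\[4pt]
\displaystyle\max_{a \in \text{Actions}(s)} V_{\text{exptmax}}(\text{Succ}(s, a)) & \text{if } \text{Player}(s) = \text{agent} \\[4pt]
\displaystyle\sum_{a \in \text{Actions}(s)} \pi_{\text{opp}}(s, a)\, V_{\text{exptmax}}(\text{Succ}(s, a)) & \text{if } \text{Player}(s) = \text{opp}
\end{cases}
```

Only the agent's case changed from the evaluation recurrence; the opponent's case still uses the known policy Pi opp.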
So I still kind of know what the opponent does. So going back to this example, let's say I know my opponent is acting stochastically. What should I do? If my opponent is acting stochastically with probability one-half, then the values of each one of these buckets are going to be 0, 2 and 5. And I'm trying to maximize my own values, so I'm gonna pick the one that gives me 5. And that's shown with this upward triangle: I'm trying to maximize. So I'm gonna pick bucket C, because I'm maximizing under this knowledge that the opponent is acting stochastically. Okay? And then we're calling this the value of the expectimax policy, and the value of the expectimax policy from the start state is equal to 5, right? Because that's evaluating the thing I'm going to get. Question back there? [inaudible] Yes. This is assuming I know my opponent's policy, and I'm maximizing my own value knowing that my opponent is following this policy and what the opponent would do in expectation. Okay? All right. And then this is the recurrence that we would get; we would just update the recurrence. So if the agent is playing, then we maximize the value of expectimax. Okay? All right. So, okay, in general I don't know the policy of my opponent, right? In general, I don't know this Pi opp. So if that is the case, then what should we do? One thing that we could do is assume the worst case. You could say: oh, the opponent is trying to get me, they're going to play the worst-case scenario, they're trying to minimize my value. And that's the fair thing to do. We are going to talk about whether that is always the best thing to do or not a little bit later in the lecture. But for now, if I know nothing about my opponent, I can just assume my opponent is acting adversarially against me.
So that introduces this idea of minimax, as opposed to the expectimax that we just talked about. So what would minimax do? In the case of a minimax policy, what I'm assuming is: I am this agent trying to maximize my own value, and I'm assuming my opponent is acting adversarially. My opponent is really trying to minimize my value. And what that means is from this bucket I'm gonna get minus 50, from this one I'm gonna get 1, from this one I'm gonna get minus 5. And under that assumption, well, I'm going to pick the second bucket, because that gives me the highest value. So that is a minimax policy. So how would I change my recurrence if I were to play minimax? I'm going to call it V of minimax of a state. Well, the recurrence is going to be over V of minimax, so I'm gonna change that. If the agent is playing, the agent is still trying to maximize the value, so that is all good. What if the opponent is playing? The opponent is going to minimize, right? I don't have access to Pi opp. So what I'm gonna do is remove this sum and say: well, the opponent is going to take the action that minimizes the value of the successor of S and A. Okay? And this is how you would compute the value of a minimax policy. Is this assuming that the adversarial agent consistently tries to minimize the utility of the agent? Yes. What happens when, um, the adversarial agent doesn't always go with that selection but also acts stochastically? Yes, so that's a good question. What happens if the adversarial agent is not always adversarial? In that case, you have another stochastic policy that defines what the opponent is doing. And if you have access to that, you can do something similar to expectimax. If you don't have access to that, maybe you would want to assume the worst case, that they're always trying to minimize.
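Replacing the opponent's expectation with a min gives the minimax recurrence:

```latex
V_{\text{minmax}}(s) =
\begin{cases}
\text{Utility}(s) & \text{if } \text{IsEnd}(s) \\[4pt]
\displaystyle\max_{a \in \text{Actions}(s)} V_{\text{minmax}}(\text{Succ}(s, a)) & \text{if } \text{Player}(s) = \text{agent} \\[4pt]
\displaystyle\min_{a \in \text{Actions}(s)} V_{\text{minmax}}(\text{Succ}(s, a)) & \text{if } \text{Player}(s) = \text{opp}
\end{cases}
```

No opponent policy appears anywhere: the min is the worst-case stand-in for the unknown Pi opp.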
But that's some prior knowledge that you have that allows you to act better, or maybe evaluate the value better for wherever you are. We'll talk about evaluation functions a little bit later in the lecture, and maybe you'll look back and use that to form your evaluation function, okay? All right. So here the value of minimax from the start state is going to be 1, right? Does everyone see that? I'm assuming my opponent is acting adversarially, so we have minus 50, 1 and minus 5. If I am maximizing, then the best thing I can get is 1. And that's how we compute V of minimax, okay? And there is really no analogy to this in the MDP setting, because in the MDP setting we don't really have this game; we don't have this opponent that's playing against us. And this is the recurrence that you get, which is what we already have on the board, right? Okay. So what would the policy be? The policy is just going to be the argmax of this V of minimax. So if you want to know what the policy of your agent should be — that's Pi max — it's the argmax over actions of V of minimax of the successor of that state. And if you want to know the policy of your opponent at state S, well, that's the argmin over actions of V of minimax, which is intuitive, right? So that way you can actually figure out what the actual action should be, okay? All right. So let's go back to this example, the halving game. What we wanna do is actually code up what a minimax policy would do in this setting, and maybe we can play against a minimax policy after that, okay? So what would a minimax policy do? It's a policy, so it's going to be a function of state, so let's give it a state. And we're going to just write this recursion that we have on the board. So we're recursing over the state. If you're in an end state, then what are we returning? Just the utility, okay?
So we're returning the utility of that state, and there's no action there. And then if you're not in an end state, then you are either maximizing or minimizing over a set of choices. So let's actually create those choices, so we can just call max and min on them. For the choices, we're going to iterate over all the actions that we have. And what is each choice going to be exactly? Well, it's a recursion over the successor state. So we recurse over game.successor of state and action. And I'm going to return the action here too, because I just want to get the policy later. And this recursive function returns a value and an action, so I just want to get the value from the first one and the action from the second one. Okay. So if player is plus 1, that's the agent, and the agent should maximize over the choices. And if player is minus 1, then that's the opponent, and the opponent should minimize over these choices. And that's pretty much the recursion that we have on the board; that's our recursive function, okay? So we're going to recurse over our state, and that gives us a value and it also gives us an action. So let's just print things out so you can refer to them. So minimax gives us an action, and it tells us this is the value that you can get [NOISE]. All right. And then it's a policy, so let's just return the action. Okay. So now what I'm gonna do is say player plus 1, the agent, is still a human policy, and it's playing against a minimax policy. So, all right. So who wants to play with this? It's a little scarier to play against the minimax policy [LAUGHTER]. Okay. All right. So let's do this. Python. All right. So you are the agent, you're player plus 1, you're starting from 15. What do you want to do? [BACKGROUND]. So you just lost the game [LAUGHTER]. So why do I know you lost the game?
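A runnable sketch of that recursion. The halving game is restated inline on (player, number) tuples so the snippet stands alone; the recursion itself follows the board recurrence, returning (value, action) pairs as the lecture describes.

```python
import math

# Halving game on states (player, number); players are +1 (agent), -1 (opponent).
def isEnd(state):       return state[1] == 0
def utility(state):     return state[0] * math.inf   # player left with 0 wins
def actions(state):     return ['-', '/']
def succ(state, action):
    player, number = state
    return (-player, number - 1) if action == '-' else (-player, number // 2)

def recurse(state):
    """Return (minimax value, best action) for the player to move."""
    if isEnd(state):
        return (utility(state), None)   # end state: just the utility, no action
    # One (value, action) choice per available action.
    choices = [(recurse(succ(state, action))[0], action)
               for action in actions(state)]
    if state[0] == +1:
        return max(choices)             # agent maximizes over the choices
    else:
        return min(choices)             # opponent minimizes over the choices

value, action = recurse((+1, 15))
print(value, action)
```

From 15 the first player can actually force a win (which is why the lecturer hints you should "alternate" between subtracting and dividing), whereas from 14 the player to move is lost against perfect play.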
Now it's player minus 1 playing, and you are at 7. And the minimax policy took action minus. So we're at 6. And then the value of the game is minus infinity. So you're playing against a minimax policy and you're already getting minus infinity, so you just lost the game. Anyone want to try this again? [LAUGHTER] You want to try it again maybe. [BACKGROUND] Subtract. [LAUGHTER] Okay. So you can win, right? The value is infinity right now. And then yeah, the minimax policy also did a minus, so we're at 13 right now. It's your turn, you're at 13 [BACKGROUND]. You just lost the game again [LAUGHTER]. So yeah, minus infinity. Yeah, actually you need to alternate between them; I think that is the best policy. But play with this to get a sense of how this runs. The code is online, so feel free to play with it and figure out what the best policy to use is. All right. So that was a minimax policy, and this is the recurrence that we get for a minimax policy. Now what I wanna do is spend a little bit of time talking about some properties of this minimax policy. We have talked about two types of policies so far, right? We have talked about expectimax, which is basically saying, "I as an agent am trying to maximize, but I know what my opponent is going to do. So I'm going to assume my opponent does whatever it does, and then I'm going to maximize based on that." So for example, I'm going to refer to that as Pi of expectimax — and everything in red is for the agent, everything in blue is for the opponent. So I'm gonna say the agent is following this policy which says, "I'm going to maximize assuming my opponent is doing whatever," and here I'm calling the opponent policy Pi 7. It could be anything, but I'm calling it Pi 7.
So let's say the opponent is playing Pi 7 and I'm going to maximize based on that; the value we just talked about is the value of expectimax. The other value we just talked about is the value of minimax, which says, "I am the agent. I'm going to maximize assuming the opponent is going to minimize." And then the opponent actually is going to minimize and is going to follow Pi min. Okay. So these are the two values we have talked about so far. I want to talk a little bit about the properties of this. But before that, let me- Is there a way to kind of mix the two together? Like, in expectimax you have a probability distribution over the actions, right? Why don't we just take the action that minimizes our reward, like Pi min would, and give it a higher weight in expectimax? Um- [NOISE] I didn't fully follow what policy you were referring to, actually. Are you coming up with a new policy that you're saying would be better, somewhere between expectimax and minimax in some sense? So this table might kind of address that, because it's considering four different cases, not just the two. So this might actually refer to what you're proposing. Let's go through this first, and then maybe revisit it if it doesn't answer that. All right, so I want to talk about the setting. This table is actually not that confusing, but it can get confusing, so do pay attention to this part. Um, all right, so maybe I'll write over there. I'm gonna use red for agent. Where is my blue? On the floor? Hanging on the left. Left? Your right. My right, [LAUGHTER] okay, all right, [LAUGHTER] okay. And then I'm going to use blue for the opponent policy. Okay.
So then for agents, we're going to have Pi max. An agent could play Pi max. What does that mean again? I'm going to maximize assuming you're going to minimize. An agent could also play Pi expectimax of the policy 7 — I'm gonna put 7 here — which means I am going to maximize assuming you're going to follow this Pi 7. So those are the things the agent can do. [NOISE] Okay? And then there are things that my opponent can do; I'm going to write that here. My opponent can follow Pi min, which is: I'm just going to minimize. Or my opponent could follow some other policy Pi 7. Let's say Pi 7 in the bucket example is just acting stochastically: half the time pick one number, half the time pick the other number. Okay? So that is what we have. So I'm going to draw my tree, so we can go over examples with it too. So this was the bucket example: we had minus 50 and 50 in bucket A, 1 and 3 in bucket B, minus 5 and 15 in bucket C. Okay? So this was my bucket example; I'm going to talk about that. All right. So I'm gonna talk about a bunch of properties of V of Pi max and Pi min, which is what we have been referring to as the minimax value. Okay? So I want to talk about this a little bit. Okay? So the first property that we have is that V of Pi max and Pi min — actually, let me go back to the next slide — is going to be an upper bound on the value of any other policy for the agent, any other Pi of expectimax, assuming that my opponent is playing as a minimizer. Okay. So what I'm writing here is that this value is an upper bound on the value if my agent decides to do anything else, under the assumption that my opponent is a minimizer. So my opponent is really trying to get me. If my opponent is really trying to get me, then the best thing I can do is to maximize. Okay?
So that's kind of intuitive, right? That's an upper bound. Let's look at the example. So what is V of Pi max and Pi min? We just talked about that, right? If this guy is a minimizer, we're gonna get minus 50 here, 1 here, minus 5 here. If this guy is a maximizer, what is the value I'm gonna get? You'll get 1, right? I'm gonna go down here and then I'm gonna get 1. So V of Pi max and Pi min is just equal to 1. That is this value. Okay? What this is saying is that this is going to be greater than, say, the setting where my agent is following expectimax and my opponent is still doing Pi min. So what would this value correspond to? This is a value which says: well, I'm going to take an action assuming my opponent is acting stochastically. If my opponent is acting stochastically, I'm gonna get 0 here, 2 here, and 5 here. If I'm assuming that and I'm trying to maximize my own value, which route do I go? I'm gonna go this route. But it turns out that my opponent was not doing that; my opponent was actually a minimizer. So if my opponent was actually a minimizer and I went this route, my opponent is going to give me minus 5. So the value I'm going to end up getting is minus 5. This is equal to minus 5. Okay? So, so far I've shown that this guy is greater than this guy, okay? All right. So that's the first property. The first property is: if my opponent is terrible and is trying to get me, the best thing I can do is to maximize; I shouldn't do anything else. Okay? The second property is that this V of Pi max and Pi min — again the same V — is now a lower bound in the setting where your agent is maximizing assuming your opponent is minimizing, but your opponent was actually not minimizing; your opponent was following Pi 7.
So what this says is: if you're trying to maximize assuming your opponent is always minimizing, then you come up with a lower bound, and if your opponent ends up doing something else, you always do at least as well as this lower bound. Okay? So what is this V equal to? We just showed that it is 1, right? That is this value. Okay? What does this correspond to? This is the value of Pi max, which is: I am going to assume you are trying to get me. If I'm going to assume you are trying to get me, I'm gonna go down this route, because that is the thing that gives me the highest value. But you are not trying to get me; you are following Pi 7. If you're following Pi 7, you're just going to give me 1 half the time and 3 half the time, and that corresponds to 2, so I'm going to get value 2 instead of value 1. So this is actually equal to 2 in this case. And this corresponds to this value in the table, which is again: the agent is maximizing assuming the opponent is a minimizer, but the opponent was not a minimizer; the opponent was just following Pi 7. And this is just equal to 2. Okay. So, so far the things I've shown are actually very intuitive. They seem a little complicated, but they're very intuitive. What I've shown is that this value of minimax is an upper bound if my opponent is a terrible opponent trying to get me, because then the best thing I can do is maximize. I've also shown it's a lower bound if my opponent is not as bad. So that's what I've shown so far. A question. So here the opponent's policy is completely hidden to the agent? Yeah. So here- Yeah, the agent actually doesn't see where the opponent goes, right? Even in the expectimax case, it thinks the opponent is going to follow Pi 7, but maybe the opponent follows Pi 7, maybe not.
Right, so when we talk about expectimax and minimax, it's always the case that the agent doesn't actually see what the opponent does. But the agent can think about what the opponent does, okay? And I'm going to talk about one more property. This last property basically says — and it actually goes back to your question — if you know something about your opponent, then you shouldn't play the minimax policy. You should actually do the thing that uses that knowledge of what your opponent does. So that basically says: V of Pi max and some Pi of opponent — you know something about the opponent, you know that the opponent is playing Pi 7 — is going to be less than or equal to the case where you are following Pi of expectimax of 7 and the opponent actually follows Pi 7. Okay. So what is this last inequality saying? Well, it is saying that in the case where you're trying to maximize and you think your opponent is minimizing, but your opponent is actually not minimizing, the value of that is going to be less than the case where you're maximizing under some knowledge of your opponent's policy and your opponent actually ended up following that policy. Okay? So the first term is always the agent, the second term is always the opponent, right? So this first value we have already computed; that's equal to 2. What is this other value saying? It is saying you are going to maximize assuming your opponent is stochastic. If I'm assuming my opponent is stochastic, then I'm assuming this is 0, this is 2, this is 5, right? I'm trying to maximize, so which route should I go? I should go this route, because that gives me 5. So this is the agent thinking the opponent is going to be stochastic, thinking it's going to get 5. And it gets there, and the opponent actually ends up following Pi 7, which is that stochastic thing.
So we are actually going to get 5. So this guy is equal to 5. And this is the last inequality that we have, which is: V of Pi expectimax of 7 and Pi 7 is greater than or equal to V of Pi max and Pi 7. We just showed this is equal to 5 for this example. Okay. All right. Question. [inaudible] The actions of the opponents always whether or not the [inaudible] [NOISE]. Uh, so if you know something about the stochasticity, that works too. Like here, I knew that the opponent was following the stochastic policy of one-half, one-half. I might instead have known that the opponent is following a deterministic policy and always picks the left one, and I could have followed the same kind of expectimax policy under that knowledge. It could be anything else, but the whole idea of expectimax is: I have some knowledge of what the policy of the opponent is — it could be a stochastic policy, it could be a deterministic policy — and under that, how would I maximize? Does that mean that, transitively, the bottom right is greater than the bottom left always? Yeah. So the question is, do we have- Yeah. So we have this inequality, so transitively this guy is always greater than this guy. And that kind of makes sense, right? This last one is basically saying: if you're following expectimax and you know something about your opponent, and your opponent actually ended up doing that, then your value should be greater than pretty much anything, right? Because you knew something about the opponent, and you played having that knowledge. Yes. When you say knowing something about the opponent, is that just knowing that it's acting stochastically, or knowing what it's going to take? [NOISE] It's knowing what they're going to take, right? Like here, I knew what the opponent would do.
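Putting the three properties together, with the values just computed on the bucket tree:

```latex
% Property 1: against a minimizer, maximizing is optimal (upper bound)
V(\pi_{\max}, \pi_{\min}) \;\ge\; V(\pi_{\text{exptmax}(7)}, \pi_{\min})
\qquad (1 \ge -5)

% Property 2: if the opponent deviates from minimizing, the
% minimax value is a lower bound on what the maximizer gets
V(\pi_{\max}, \pi_{7}) \;\ge\; V(\pi_{\max}, \pi_{\min})
\qquad (2 \ge 1)

% Property 3: exploiting a known opponent policy does at least as well
V(\pi_{\text{exptmax}(7)}, \pi_{7}) \;\ge\; V(\pi_{\max}, \pi_{7})
\qquad (5 \ge 2)
```

Chaining properties 2 and 3 gives the transitive relation raised in the question: the bottom-right entry of the table is at least the bottom-left entry.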
I knew that half the time they're going to take this one and half the time they're going to take the other one, and then I used that knowledge, right? Yeah. So you know exactly this? [OVERLAPPING] Yes. Yeah, the expectimax. Is the expectimax policy the one you get given that your opponent is following the Pi min policy? The expectimax policy is this policy here, where we have a sum: it assumes your opponent is following Pi opponent, assumes it has access to Pi opponent, and so it ends up computing this sum over here. Yeah. And if Pi opponent is Pi min? I see what you're saying. You're asking: if Pi opponent is actually Pi min, do they end up being equal to each other in some sense? Yes. If you know your opponent is acting as a minimizer, expectimax just becomes minimax. All right. So I'm going to move ahead a little bit. This is what we have already talked about. Okay. So a few other things about modifying this game. We have talked about this game and about its properties. There is a simple modification one can make, which is bringing nature in. There was a question earlier, which was: is there any chance here? And yes, you can actually bring chance in. So let's say you have the same game as before: you're choosing one of the three bins. And then, after choosing one of the three bins, you flip a coin, and if heads comes up, you move one bin to the left, with wraparound. So what this means is: 50% of the time, tails comes up and you're not changing anything, you have the original setup; 50% of the time you get heads, and in those settings you end up with a neighboring bin instead of your original bin, okay?
So you're adding this notion of chance here, and it's kind of acting as a new player, so it's not actually making things that much more complicated. What happens is that, in some sense, we have a policy for the coin, which is nature here: half the time it gives 0 and I don't change anything, and half the time I just get the neighboring bin instead of my chosen bin. And then I get this new tree, where I have a whole new level where chance plays. So now we have max nodes, we have min nodes, and we also have these chance nodes. And the chance nodes, sometimes they take me to the original bucket, and 50% of the time they take me to a neighboring bucket, okay? But the whole story stays the same; nothing changes. You can still compute value functions, you can still push the value functions up the tree. It's the same sort of recurrence. Nothing fundamental changes; it just feels like there are three things playing now, okay? So this is actually called expectiminimax. The expectiminimax value here, in this case for example, is minus 2, because there is a min node for the opponent, there is an expectation node for what nature does, and then there is a max node for what the agent should do. That's why it's called expectiminimax. And then you can compute the value in the same way. So when the game is played out, there are two players: I pick a bin, then you flip a coin and shift it left or not, and then the opponent gets to pick the number? Yes, well, not you, the opponent. So there are still two players, plus the coin as a third thing. Yes. [inaudible] All right. So the way to formalize this is: you have players, so you have an agent, an opponent, and the coin, and then the recurrence changes a little bit.
So what happens is this: the recurrence we had for minimax was just the max and the min, and it returned the utility if you're in an end state. Now, if it is the coin's turn, we take an expected sum weighted by the policy of the coin, which is exactly what expectiminimax does. So we just have a new term for when the coin plays. Everything here follows naturally from what we were expecting, okay? All right, so the summary so far: we've been talking about max nodes, chance nodes, for when you have a coin in there, and also min nodes. And basically we've been composing these sorts of nodes together, creating a minimax game or an expectimax game. And for the value function, you just do the usual recurrence we have been doing in this class, from the utility, to come up with an expected utility value for all the nodes that we have. There might be other scenarios you want to think about, for example for your projects; in general there are other variations of games. What if you are playing with multiple opponents? So far we have talked about a two-player setting with one agent and one opponent, but if you have multiple opponents, you can think about how the tree changes in those settings. Or the turn-taking aspect: is the game simultaneous versus turn-taking? Or you can imagine settings where some actions give you an extra turn, so you take two turns and then the next person takes a turn. You should think about some of these; some of them come up in the homework. So think about variations of games in general.
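That recurrence, with the extra case for the coin's turn, translates directly into a short recursive function. A minimal sketch, where the tree encoding (tagged tuples for internal nodes, plain numbers for end-state utilities) is my own assumption, not the slide's notation:

```python
# Each internal node is (kind, children) with kind in {'max', 'min', 'chance'}.
# Chance children carry probabilities; leaves are utilities.
def expectiminimax(node):
    if isinstance(node, (int, float)):       # IsEnd(s): return Utility(s)
        return node
    kind, children = node
    if kind == 'max':                        # agent's turn: maximize
        return max(expectiminimax(c) for c in children)
    if kind == 'min':                        # opponent's turn: minimize
        return min(expectiminimax(c) for c in children)
    if kind == 'chance':                     # coin's turn: expectation
        return sum(p * expectiminimax(c) for p, c in children)
    raise ValueError(kind)

# A fair coin sits between the agent's choice and the opponent's min nodes:
tree = ('max', [
    ('chance', [(0.5, ('min', [3, 5])), (0.5, ('min', [-2, 10]))]),
    ('chance', [(0.5, ('min', [2, 4])), (0.5, ('min', [1, 2]))]),
])
print(expectiminimax(tree))  # max(0.5*3 + 0.5*(-2), 0.5*2 + 0.5*1) = 1.5
```

The three branches of the `if` are the three terms of the recurrence: nothing fundamental changes, there is just one more kind of player.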
They are kind of fun. Now, to talk a little bit about the computational aspects of this: it's pretty bad. [LAUGHTER] We talked about a game tree, which is similar to tree search; we are taking a tree search approach. If you remember tree search, with a branching factor of b and depth d, the time is exponential, on the order of b to the 2d in this case. Why 2d? Because the agent plays and then the opponent plays; that's how I'm counting it, so for depth d you have 2d plies. Does that make sense? All right. And in terms of space it's order of d, but in terms of time it's exponential, and that's pretty bad. For a game of chess, for example, the branching factor is around 35 and the depth is around 50. If you compute b to the 2d, it's on the order of the number of atoms in the universe; that's not doable, we are not able to use any of these methods directly. So how do we make things faster? That's what we should be talking about. There are two approaches we cover in this class. The first approach is using an evaluation function: you use domain-specific knowledge about the game to define features of the game, in order to approximate the value function at a particular state. I'm going to talk about that in a bit. The other approach is simple and kind of nice, and it's called alpha-beta pruning. The alpha-beta pruning approach basically gets rid of part of the tree if it realizes you don't need to go down that subtree. So it's a pruning approach that doesn't explore all of the tree, only parts of it.
So we're going to talk about both of them. All right, evaluation functions. The breadth and depth of the game can be really large; that's not great. One approach to the problem is to limit the depth. Instead of exploring everything in the tree, you only go down to some particular depth, and when you get there, you call an evaluation function. If you were to search the full tree, this was the recursion we had: with a minimax approach, you go over all the states and actions and traverse the whole tree. With a limited-depth tree search, you carry a depth d and decrement it each time you go down past an agent and opponent level, and at some point d becomes 0. At that point, instead of recursing further, you call an evaluation function on the state you've reached, okay? And this evaluation function plays almost the same role as future costs when we were talking about search problems. If you knew the value exactly, you'd be done; but you don't, because knowing it exactly would mean solving the whole tree search problem. In general, though, you can have a weak estimate of what the future value would be. So an evaluation function Eval of s is a weak estimate of V minimax of s, a weak estimate of your value function, okay? The analogy is to future costs in search problems. So how do we come up with an evaluation function?
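The depth-limited recursion can be sketched as below. Note the simplifications I'm assuming here: the tree is a nested list, the depth counter decrements per ply rather than per agent-opponent pair, and the evaluation function is a crude stand-in, not a real domain-specific one:

```python
def limited_minimax(node, depth, maximizing, evaluate):
    if isinstance(node, (int, float)):
        return node                        # end state: exact utility
    if depth == 0:
        return evaluate(node)              # d = 0: fall back on Eval(s)
    vals = [limited_minimax(c, depth - 1, not maximizing, evaluate)
            for c in node]
    return max(vals) if maximizing else min(vals)

def flat_leaves(node):
    if isinstance(node, (int, float)):
        return [node]
    return [x for c in node for x in flat_leaves(c)]

def crude_eval(node):
    """A toy Eval: average of the leaves below. A weak estimate of V_minimax,
    standing in for a real feature-based evaluation function."""
    leaves = flat_leaves(node)
    return sum(leaves) / len(leaves)

tree = [[3, 5], [2, 10]]
print(limited_minimax(tree, 2, True, crude_eval))  # full search: max(min(3,5), min(2,10)) = 3
print(limited_minimax(tree, 1, True, crude_eval))  # cut off early: max(4.0, 6.0) = 6.0
```

The second call shows the price of cutting off: the answer depends entirely on how good `evaluate` is at the frontier.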
We do it in a similar manner to what we visited in the learning lecture: we come up with features, and weights for those features. If I'm playing chess, the way we actually play is that we think about a set of actions we can take and where we'd end up, and based on where we end up, we evaluate how good that board is. We have some notion of features, of how good that board would be from that point on, and that lets us decide which action to pick; when we play chess, that's kind of what we do. We pick a couple of actions and see how the board would look after taking them. An evaluation function does the same thing: it tries to figure out which things we should care about in a specific game, in this case chess, and assigns values to them. So it might be things like the number of pieces we have, the mobility of those pieces, whether our king is safe, or whether we have central control. For the pieces, for example, we can look at the difference between the number of pieces we have and the number our opponent has: the number of kings I have versus the number of kings my opponent has. That seems like a really important feature, because if I don't have a king and my opponent has one, then [LAUGHTER] I've lost the game. So you might put a really large weight on that, and you might also care about differences in the number of pawns, or queens, and the other types of pieces on the board. That lets you think about how good the board is. Or the number of legal moves you have versus the number your opponent has, which gives you some notion of the mobility of that state. Okay. All right.
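Those hand-picked features combine into a linear evaluation function, Eval(s) = w · phi(s). Here is a tiny sketch: the feature names follow the lecture's examples, but the feature values and weights below are made-up illustrations, not tuned numbers.

```python
def chess_eval(features, weights):
    # Eval(s) = sum_i w_i * phi_i(s): a weak, domain-knowledge estimate of V_minimax(s)
    return sum(weights[name] * value for name, value in features.items())

# phi(s): differences between my counts and my opponent's, plus mobility.
features = {
    'king_diff':  0,     # K(s) - K'(s); losing your king loses the game
    'queen_diff': 1,     # Q(s) - Q'(s)
    'pawn_diff': -2,     # P(s) - P'(s)
    'mobility':   5,     # my legal moves minus my opponent's
}
# A huge weight on the king encodes "no king means the game is lost".
weights = {'king_diff': 10000, 'queen_diff': 9, 'pawn_diff': 1, 'mobility': 0.1}

print(chess_eval(features, weights))  # 0 + 9 - 2 + 0.5 = 7.5
```

Where the weights come from is the learning question the lecture defers to next time; here they are just hand-set.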
So the summary so far: plain search is pretty bad, order of b to the 2d, and an evaluation function tries to estimate V minimax using some domain knowledge. And unlike A star, we don't actually have any guarantees on the error of these sorts of approximations. But it's an approximation, people use it, and it's pretty good. We'll come back to it next time, when we think about what sort of weights we should pick for each of these features. So you should think "learning" when you think about where the weights come from. All right. Now I want to spend a bit of time on alpha-beta pruning, because this is important. The concept of alpha-beta pruning is also pretty simple, but it's one of those things where you really have to pay attention to get what is happening. All right. So let's say you want to choose between some bucket A and bucket B, and you want the maximum value, and you know that the values in A fall in the range 3 to 5 and the values in B fall in the range 5 to 10. So the ranges don't really intersect. In that case, if you're picking a maximum, you don't care about the rest of bucket A at all: you already know B gets you 5 or above, you're happy with B, you shouldn't even look at A. So the underlying concept of alpha-beta pruning is maintaining a lower bound and an upper bound on values, and if the intervals don't overlap, dropping the part of the subtree you don't need to work on, because there is no overlap between them. Okay.
Here's an example. Say we have these max nodes and min nodes, and you go down and see a 3; this is a min node, so you're going to get 3 here. When I get to the max node above it, what I know is that the max node is going to get 3 or higher. That's something I know without even looking at the subtree on the other side: this max node should get 3 or higher, right? Does everybody agree with that? Okay. Then when I go down to this min node and I see a 2, I know this min node is going to get a value that's less than or equal to 2. "Less than or equal to 2" has no overlap with "greater than or equal to 3", so I should not worry about that subtree. Does everyone see that? Maybe let me draw it out here. That's the whole concept of what happens in alpha-beta pruning. I have this max node; the leaves here were 3 and 5. I found that the min node below is 3, and this is a max node, so whatever it gets is going to be greater than or equal to 3, because it has already seen the 3 and it's not going to take any value less than that. So we know whatever value we get at this max node is 3 or higher. Okay. Then I go down here and I see a 2; it's a min node, so whatever it gets is less than or equal to 2. And "less than or equal to 2" has no overlap with "3 or greater". So I can completely ignore that side of the tree: I don't need to know what's happening down there, I don't even need to look at it. Okay. Because that value will be less than or equal to 2. Yes? All right, shouldn't we get a value greater than or equal to the 8 there? Sorry.
[inaudible] It's a minimum, it's a min node, right? So it's going to be less than or equal to that. It's a min node, so even if I see a 10 or a 20 down there, I'm not going to pick it; the node's value is 2 or lower. So whether there's a 10, or a 100, or whatever subtree is down there, we're not going to look at it. That is the whole concept. All right, let me actually go to this slide; I think this will help. The key idea of alpha-beta pruning is this: the optimal path leads to some leaf node with some utility, and that utility is the thing that gets pushed up. The interesting thing is that along the optimal path, the values of all the nodes are equal to each other; they're all the utility that gets pushed all the way up to the top. Because of that, along a candidate optimal path, the interval bounds must keep intersecting: if this were the optimal path, the value at this node would have to be the same as the value at that node, and so on. So if the intervals don't overlap, there's no way those nodes have the same value, and no way for that path to be the optimal path. That's the reason the pruning works: on the optimal path you have the same value throughout. Okay. So how do we actually do this? The way we do it is we keep a lower bound on max nodes, which I'm going to call a_s. So we have a_s, a lower bound on the max nodes, and we keep track of that. We also keep track of b_s, which is an upper bound on the min nodes. Okay.
And then if the intervals don't overlap, we just drop that subtree; if they do overlap, we just keep updating a_s and b_s. Okay. Here's an example. Say we start at this top node, and somehow we have found out that its value should be greater than or equal to 6. That is my a_s value: a_s equals 6, a lower bound on my max node. I know the optimal value is going to be something greater than or equal to 6. Okay. Then somehow we get to this min node, and we realize it should be less than or equal to 8. So b_s equals 8: we have an upper bound on the min node, and that tells us the value on the optimal path is going to be less than or equal to 8. The two bounds still overlap, so we're all good. So far so good. Then somehow I find out that the next node is greater than or equal to 3. That should be fine too: I'm going to call these nodes S1, S2, and S3, and a_S3 equals 3, because I know I need to be greater than or equal to 3. But 6 already does the job; I don't need to worry about that 3. So that's all good so far. Then for the last node, I'm at this min node, and I realize that b_S4 equals 5. What that tells me is that the value should be less than or equal to 5, so I update "less than or equal to 8" to "less than or equal to 5". And now the intervals don't overlap. What that tells me is that this path is not going to be the optimal path, because there is no overlap; we're not going to find one single number that is the utility all along it.
And what that tells me is that I can ignore that whole subtree, because it's not going to contain my optimal path; I can just get rid of it, okay. Yes? Do we also ignore the 3 if the beta is equal to the alpha, if we already have something else; is that not the same thing? Yeah, so we're ignoring the 3 in a different way. We're ignoring the value 3 because it's already encoded in the 6, but we're ignoring the subtree under the 5 in the sense of not exploring it at all. With the 3 we still had an overlap with the beta, so I still needed to explore things after it. What you're looking at is the overlap between the upper bound from the min nodes and the lower bound from the max nodes; that's the interval you're making sure still has values in it. And if the two bounds are exactly equal; do you just ignore the subtree anyway, because you already have something else that's optimal? Yeah, I think so. You want non-trivial intervals, basically: if it's the same value, you don't have a non-trivial interval. And, yes, question? I was wondering how we got the 6 and 8 and 3. Oh, this is just an example; imagine we found them somehow. We will talk about an example where we actually compute them, but for now just assume we have found these. Yes? On the top example, I don't understand why 3 is an upper bound, or 2 is a lower bound. So the actual values are coming from somewhere I'm not showing in full here. [OVERLAPPING] [inaudible] Oh, the one at the top? Okay, sorry. Yeah, the one at the top, right?
So this is a min node, and this is a max node, right? At my min node, I found that the minimum between 3 and 5 is 3. The max node is maximizing between 3 and a bunch of other things; that's what it's supposed to do, right? So if it's maximizing between 3 and a bunch of other things, it's at least going to be 3. It's not going to be 2, there's no way for it to be 2, and it's not going to be 0, because it's taking the maximum of 3 and something else. That's why I'm saying: whatever value I get at this max node is going to be greater than or equal to 3. Does that make sense? Now I come down here and I see this 2; this is a min node. So the value here is going to be the minimum between 2 and whatever is down this subtree. So it's going to be, I'm very bad with "at least" and "at most", [LAUGHTER] let me just say it this way: it's going to be 2 or lower. I'm either going to get 2, or 1, or 0, and so on, and that's the value that gets pushed up here. So if I'm maximizing between 3 and something that is 2 or lower, then the 3 is enough. And I can figure that out just from these intervals and not look at that side of the tree. Once I've seen these two, there is no non-trivial overlap between a value that's greater than 3 and a value that's less than 2, so I can just not worry about the stuff down there. Okay. All right. One more implementation note: we talked about these a values and b values. You can instead keep track of only one value on each side, an alpha value and a beta value. Let me write it out, and get it right.
Alpha of s is the max of a_s' over the max nodes s' above s on the path. What this basically says is: remember when we saw the 3, we said, well, that's already included, we already knew that. It's the same idea. Alpha of s is just going to be one value; in this case it's just 6, because when I see the 3, I don't really care about it: I already know I'm greater than or equal to 6, and knowing I'm greater than or equal to 3 adds nothing. So we keep track of one value, alpha of s, and here alpha of s is just 6. And similarly for beta: we keep track of beta of s, which is just the minimum of the b_s' values. What I'm writing here follows the ordering of the nodes we've seen. So beta of s is 5. And then you look at the interval from alpha of s upward and from beta of s downward, and if those intervals have no non-trivial intersection, you can prune that part of the tree. Okay. So this is more of an implementation detail: instead of keeping track of all these a_s's and b_s's, just keep one number on each side, one alpha and one beta. Okay. All right. Let's look at one other example; I'm going to do this one real quick. You start from some top node and go to this first node, which is a min node between 9 and 7. It's a min node, so I get 7. I realize this max node is going to be something that's at least 7, greater than or equal to 7. So my alpha of s is 7 right now: whatever value this top node gets, it's going to be 7 or higher, okay? Now I come down here, I'm at a min node, and I see a 6.
I go here, it's a min node, so whatever we get here is going to be less than or equal to 6. That tells me my beta is 6: whatever I'm getting at that min node is 6 or lower. That has no intersection with my alpha, so I can just skip this whole branch; I don't need to go over any of the other things down there, I can ignore the entire branch. Okay. All right. Now I go back up and come down to the next min node. Remember, we compute these beta values based on the nodes we have seen previously, and I'm done with that branch, so I get a new beta now. Here I have a min between 8 and 3. I see the 8 first; it's a min node, so its value is less than or equal to 8, and my new beta is 8. My alpha is still 7, because that belongs to my top node. So the value is 8 or lower, and we do have an overlapping interval, 7 to 8; everything is good, so I actually need to go and see what the next value is. That value is 3, so I get exactly 3 here, and that updates my beta from 8 to 3. We've already explored that part of the tree anyway, but with 3 there is no overlap anymore; if there were a bunch of things below the 3, I wouldn't need to explore them, but here there aren't. And then we find that our optimal value is 7, so we just return 7, okay. And we never explored that giant middle part of the tree. Okay. One more slide, and then one quick idea. [LAUGHTER] All right. The one remaining thing I want to mention about pruning is that the order of things actually matters.
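Putting the alpha and beta bookkeeping together, a standard alpha-beta search over a small nested-list tree might look like the sketch below. This is my own rendering, not the slide's exact pseudocode; the `seen` list is only there to show which leaves actually get evaluated.

```python
import math

def alpha_beta(node, maximizing, alpha=-math.inf, beta=math.inf, seen=None):
    if isinstance(node, (int, float)):
        if seen is not None:
            seen.append(node)            # record the leaves we actually look at
        return node
    if maximizing:
        v = -math.inf
        for child in node:
            v = max(v, alpha_beta(child, False, alpha, beta, seen))
            alpha = max(alpha, v)        # tighten the lower bound at max nodes
            if beta <= alpha:            # intervals no longer overlap: prune
                break
        return v
    else:
        v = math.inf
        for child in node:
            v = min(v, alpha_beta(child, True, alpha, beta, seen))
            beta = min(beta, v)          # tighten the upper bound at min nodes
            if beta <= alpha:
                break
        return v

# The first pruning example: the left min node settles at 3, and once we
# see the 2 on the right, the 10 never needs to be explored.
seen = []
print(alpha_beta([[3, 5], [2, 10]], True, seen=seen))  # 3
print(seen)                                            # [3, 5, 2]
```

The `beta <= alpha` test is exactly the "no non-trivial interval" check from the lecture, just carried as two scalars instead of all the a_s and b_s values.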
When we look at this example, remember we didn't explore anything past the 10, because we already knew the value needed to be greater than or equal to 3. These are my buckets. If I swap the buckets, moving the 2-10 bucket to one side and the 3-5 bucket to the other, I wouldn't be able to do that; I'd actually need to explore the whole tree, because my alphas and betas wouldn't have the same properties. So the order in which you put things in the tree actually matters, and you should care about it. In the worst case, the ordering is terrible and we go over the full tree: that's order of b to the 2d. With the best ordering, you don't explore half of the work: the best ordering gives order of b to the d. So if you had a tree where you could afford to explore to depth 10, with the best ordering you can actually explore to depth 20; that's a huge improvement. And a random ordering turns out to be pretty okay too: it gives order of b to the 2 times 3/4 times d. So even a random ordering is better than the worst case. Then, how do you figure out a good ordering? Well, we have the evaluation function. You compute the evaluation function, and for max nodes you order the successors by decreasing evaluation function, and for min nodes you order the successors by increasing evaluation function. That lets you prune as much as possible. All right. So with that, I'll see you guys next lecture, talking about TD learning.
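As a closing check of that ordering claim, the sketch below counts how many leaves actually get evaluated under the two bucket orderings from the example. It is self-contained (its own small alpha-beta search over nested lists, my encoding rather than the slides'):

```python
import math

def search(node, maximizing, alpha, beta, counter):
    if isinstance(node, (int, float)):
        counter[0] += 1                  # one more leaf actually evaluated
        return node
    v = -math.inf if maximizing else math.inf
    for child in node:
        cv = search(child, not maximizing, alpha, beta, counter)
        if maximizing:
            v, alpha = max(v, cv), max(alpha, cv)
        else:
            v, beta = min(v, cv), min(beta, cv)
        if beta <= alpha:                # no overlap left: prune the rest
            break
    return v

def leaves_explored(tree):
    counter = [0]
    search(tree, True, -math.inf, math.inf, counter)
    return counter[0]

good_order = [[3, 5], [2, 10]]   # strong bucket first: the 10 gets pruned
bad_order  = [[2, 10], [3, 5]]   # weak bucket first: every leaf is visited
print(leaves_explored(good_order), leaves_explored(bad_order))  # 3 4
```

Same tree, same answer, but the good ordering skips a leaf; on deep trees that difference compounds into the b^d versus b^2d gap described above.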
Stanford CS221: Artificial Intelligence: Principles and Techniques, Autumn 2019
Bayesian Networks 1: Inference

All right. Let's get started. Long time no see. I'm excited to be back and tell you guys about Bayesian networks. Before we dive in, I wanted to do a few announcements first. There are four things that should be on your radar. The scheduling homework is due tomorrow; hopefully you're all well aware of that. The car assignment is released today, and it'll be due next Tuesday. There are some conceptual challenges there, especially if you're not up to speed on your probability; the section on Thursday will really help you go over that, so please come to that. Then there's the final project: hopefully you've all received your feedback on your proposal and are actively making changes. To make sure you're making progress, there's a progress report due next Tuesday. The guidelines are all on the website, but to re-emphasize: especially if you didn't manage to get a baseline or oracle, we really expect you to have that now. We also expect some sort of preliminary results, with some implementation of your actual procedure, algorithm, or model, and definitely some description of what that is; be as concrete as possible. And finally, the exam is in about two weeks. I would start looking at that. Actually, the best way to prepare for the exam is to look at the old exam problems, because there is a certain style you have to get used to when taking the exam. I know this is a busy time and there's a lot going on, but hopefully you'll manage. Yeah? Is the progress report due Tuesday or Thursday? It's Tuesday, I believe, but I could be wrong.
[inaudible] Uh, Tuesday, but [inaudible]. Yeah, let's say it's Tuesday; whatever the website says. Oh, the website says Thursday? Okay, well, then we'll defer to the website on that. Okay. The next agenda item is the Pac-Man competition. Many of you worked hard to submit various entries into this competition. In the end, only three could make it to the top three. So here are the winners of the Pac-Man competition; some are out of town, but if you're in the audience, maybe you could come down. Let's give them a round of applause. [APPLAUSE] And we have these Pac-Man-themed prize cups [LAUGHTER] filled with candy, in case you didn't get enough for Halloween. So there you go. Congratulations. Thank you. Do you guys want to say a little bit about what your secret sauce was? Sure. It's called Pac-Man, and actually the fourth submission was the stupidest of all, and the third one was super messy. I had all of these features extracted from the food, the capsule, the hunting ghost, and the scared ghost, and all that stuff. But it actually turned out to be not as useful as a very simple method, which is similar to how everybody plays Pac-Man: if there's a scared ghost, go chase it; if there isn't, go for the capsule; or else look for the food and dodge the hunting ghost. And also, I changed the distance [inaudible] algorithm, because the Manhattan distance is different from the [inaudible] algorithm. So the lesson is: keep it simple. Okay. [inaudible] I went through a lot of variations, and I think mine ended up being best due to being a very simple model [inaudible] the policy; you want to keep track of the ghosts. Every once in a while, the scared ghost is in play.
And after eating all the capsules, [inaudible] for tracking down the ghosts I used a search, which I sped up with dynamic programming [inaudible] transitions, so it always catches them. Great. [inaudible] Yeah. Okay, well, great. Congrats again. [LAUGHTER] [APPLAUSE] All right, so keep it simple, I guess, is a good lesson. Okay, so back to our regular programming. Last week we started talking about factor graphs. Just a quick review of what factor graphs are. Factor graphs consist of a set of variables. These variables could denote colors of provinces of Australia, or locations of objects at different time steps. Factor graphs also include a set of factors, which depend on certain subsets of the variables, and these factors are meant to specify preferences or constraints on what values are good for those variables to take on. And the weight of an assignment is simply the product of all of the factors. So there's this theme that comes up in this class, which I call specify locally and optimize globally. It's very easy to think about how two variables might interact and what you want to happen locally, and those things are defined in terms of the factors. But what you care about is some globally optimal solution: the weight is a global function of the assignment to all the variables. And last time, we talked about various algorithms for finding the maximum weight assignment, including backtracking search, beam search, Gibbs sampling, and so on. Okay. So one example we looked at was object tracking. In this example, we have a set of variables corresponding to the location of an unobserved object at each time step i, and we looked at two types of factors that capture where this object might be.
There are transition factors, which capture the intuition that across two successive time steps the object shouldn't move much: it can't teleport, it has to remain close. And observation factors then incorporate the information from the sensors: at each time step, there's going to be some factor that encourages the position to be similar to what the sensor reading was. Sensor readings are noisy, so it's not a hard constraint, but a soft constraint. And last time we saw this demo where you define the factor graph and click Run, and you see all the factors, which are represented in these tables. When you multiply everything together, you get, for every joint assignment to all the variables, some number that corresponds to how good that assignment is. And if you look at the maximum weight assignment, that's the answer you would return. Okay? So far so good, and with this framework you can already do a lot: you can define a bunch of factors, and you can run all the algorithms that we looked at last week. But what do these factors mean, and how do you come up with them? Intuitively, you can define these factors by just putting in a 2 if you like an assignment and a 1 if you don't. But philosophically, maybe you should be a little bit bothered by this, because these factors are kind of arbitrary in some sense. So the goal of this lecture and the next two will be to give more meaning to the factors, and we're going to talk about Bayesian networks as a way to do that. In one sentence, Bayesian networks are factor graphs plus probability. Taking a step back: where have we been in this course? This course has been a lot about designing new modeling frameworks. We looked at state-based models, which resulted in search problems, MDPs, and games.
And these were useful tools for solving a lot of problems already. But then, starting last week, we looked at cases where maybe the order of actions doesn't matter so much, and it's more natural to think about a set of variables for which you want to find some assignment, where any order is permitted. You can think about that as stepping up in abstraction, kind of going from assembly to C++. And in this lecture, we're going to talk about Bayesian networks, which you can loosely analogize as going from C++ to Python. It gives you a more high-level language to think about modeling; it's just another tool in your toolkit. Okay, so let's start with the basics: a quick review of probability. Usually you first see probability with outcome spaces; I'm going to jump directly to random variables, assuming that you have basic CS109 knowledge. So the random variables in this example are sunshine and rain. They're variables whose values are unknown. Furthermore, there's a probability distribution over all the random variables that captures how they might interact. This is called a joint distribution. We write this blackboard P of the two random variables, S and R, and it's this entire table, which specifies for every possible assignment to all the variables a single number, which is its probability. The probability that it's sunny and it's not rainy is 0.7, for example. Now, I want to distinguish two things. One is that we're going to use uppercase letters to denote random variables, and lowercase letters to denote the values that the random variables can take. In addition, I want to point out that when I write P of S equals s, R equals r, that expression represents a single number, which is a probability, for example 0.7.
Whereas if I write P of S and R, that expression denotes a whole distribution, which is the table. I know these are minor notational differences, but I think it will avoid a lot of confusion if you pay attention to them. So from the joint distribution, you can use the laws of probability to derive several quantities. One quantity is called the marginal distribution. In a marginal distribution, you pick a subset of the variables that you care about, called the query variables, and you induce a distribution over them. In this case I've picked S, and what I'm saying is I only care about the probability of S. I don't care about R, but R still has influence on S, so I need to take R into account somehow. The way I do this is I look at all possible values that S can take on. Start with S equals 0: I look over to the joint distribution and find all the rows that match S equals 0, which is the first two rows, and I take those probabilities and sum them up: 0.2 plus 0.08 is 0.28. Similarly, for S equals 1, I look at all the rows that match S equals 1, which is the last two rows, and that gives me 0.72. What I'm doing here is called marginalizing out R: because I don't care about R, I'm interested in the marginal distribution over S. Another concept which is going to be really important is the conditional distribution. The conditional distribution arises when you have some evidence. Let's say I observe that it's raining, so R equals 1. I write P of S given R equals 1 to say that I'm interested in the distribution over S, given that it's raining. To compute this, I look at the condition R equals 1 and simply select all the rows which match it: the second and the fourth rows. So now these are numbers, but not yet probabilities.
They don't sum to 1, right? Because it's only a subset of the rows. But what I'm going to do is make them sum to 1 by normalizing. Normalizing means taking the relevant numbers, 0.08 and 0.02, adding them up, and dividing by that sum. So I'm dividing by 0.1, which gives me the normalized distribution 0.8 and 0.2. Okay? These two concepts are going to be really important, and if you remember from last week, we talked about marginalization and conditioning as operations on factor graphs; later in this lecture I'll connect these concepts. Okay, any questions about basic probability so far? Hopefully this is all review. Okay, let's move on. So suppose I have a joint distribution over some set of variables. In this example: whether it's sunny, whether it's raining, whether there's traffic, and whether it's the autumn season. The way to think about this is as a probabilistic database. For every possible assignment, I have a number that is between 0 and 1. You can think of it as an oracle, a source of truth: I don't know what any of these variables is, but I know how they behave, just like I don't know what the outcome of a coin flip is going to be, but I know that it's half and half, heads and tails. So the main thing that we're going to do with a joint distribution is perform probabilistic inference. This is an important thing to understand, because we're going to spend the whole time doing probabilistic inference, so it's good to know what it is. The way to think about probabilistic inference is that you observe some evidence. You wake up and you see, okay, it's autumn, and it's the Bay Area so there's traffic outside. So you're conditioning on some evidence, T equals 1 and A equals 1. That's what you know. And what you'd like to find out, querying this oracle, is whether it's raining.
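The marginalization and conditioning steps just described can be sketched in a few lines of Python. The joint table and the numbers match the lecture; the helper functions are just one possible implementation.

```python
# Joint distribution P(S, R) from the lecture's table:
# keys are assignments (s, r), values are probabilities.
joint = {(0, 0): 0.20, (0, 1): 0.08,
         (1, 0): 0.70, (1, 1): 0.02}

def marginal_S(joint):
    """Marginalize out R: sum the rows that match each value of S."""
    p = {}
    for (s, r), prob in joint.items():
        p[s] = p.get(s, 0.0) + prob
    return p

def conditional_S(joint, r_obs):
    """Condition on R = r_obs: select matching rows, then normalize."""
    selected = {s: prob for (s, r), prob in joint.items() if r == r_obs}
    z = sum(selected.values())          # here 0.08 + 0.02 = 0.1
    return {s: prob / z for s, prob in selected.items()}

print(marginal_S(joint))        # ≈ {0: 0.28, 1: 0.72}
print(conditional_S(joint, 1))  # ≈ {0: 0.8, 1: 0.2}
```

Note that nothing here is special to two variables; the same select-sum-normalize pattern works for any joint table, which is what the general inference query below formalizes.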
So you're interested in some set of query variables. The general form of a probabilistic inference task is: the probability of some set of query variables, conditioned on some set of evidence variables which are set to particular values. And notice that there are some variables which are not mentioned in this query, such as S; those are the ones that are marginalized out. So you can think about this query as combining both the marginalization and the conditioning from the previous slide. This, without loss of generality, captures everything that we seek to do with a distribution for the purposes of this class. Okay. So at this point, you can actually just do probabilistic inference, right? If I give you a joint distribution, this huge table with the probabilities of all the assignments, you can go and compute anything you want. But there's a slight problem here, which is that if you have N variables, and suppose each variable takes on two values, how many rows in the table are there? Anyone? 2 to the N, right? That's exponential; that's a lot. If N is 100, then that's, I don't know, a lot. So clearly we can't do this naively. The first challenge is: how do you even write down this joint distribution compactly? I don't want to write down 2 to the N numbers. Bayesian networks are going to allow us to define the joint distribution using the language of factor graphs. This is really cool, because now I have a very compact way of specifying what is, implicitly, something very, very large. The second challenge is algorithmic: how do you do inference? We want to perform probabilistic inference, answering queries like this. How do we do this efficiently? Again, you don't want to have to go through 2 to the N possibilities, because that would be really, really slow.
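As a concrete, if naive, baseline: the general query P(query | evidence) can be answered directly from a joint table by brute force, by keeping the rows consistent with the evidence, summing out every variable that is neither query nor evidence, and normalizing. This is exactly the exponential-time approach the lecture is warning against; the function below is just an illustration, and its calling convention (variable names paired with tuple positions) is an assumption of this sketch.

```python
def query(joint, names, query_var, evidence):
    """P(query_var | evidence) by brute force over a joint table.

    joint: dict mapping full assignments (tuples ordered as `names`)
    to probabilities. evidence: dict like {'T': 1, 'A': 1}. Every
    variable that is neither query nor evidence is marginalized out.
    """
    answer = {}
    for assignment, prob in joint.items():
        values = dict(zip(names, assignment))
        if any(values[v] != x for v, x in evidence.items()):
            continue                           # inconsistent with evidence
        q = values[query_var]
        answer[q] = answer.get(q, 0.0) + prob  # marginalize the rest
    z = sum(answer.values())                   # normalize (condition)
    return {q: p / z for q, p in answer.items()}

# On the two-variable table from before: no evidence gives the
# marginal, and conditioning on R = 1 gives the earlier result.
joint_SR = {(0, 0): 0.20, (0, 1): 0.08, (1, 0): 0.70, (1, 1): 0.02}
print(query(joint_SR, ['S', 'R'], 'S', {}))        # ≈ {0: 0.28, 1: 0.72}
print(query(joint_SR, ['S', 'R'], 'S', {'R': 1}))  # ≈ {0: 0.8, 1: 0.2}
```

The loop visits every row, so for N binary variables it does 2 to the N iterations; the algorithms mentioned next exist precisely to avoid this.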
And we'll see that variable elimination, Gibbs sampling, and particle filtering, which is the probabilistic analog of beam search, all the algorithms that we talked about last week, are actually going to come into play. We're just going to talk about the probabilistic analog of these, as opposed to finding the maximum weight assignment. Okay. All right. So now let's try to motivate why we need Bayesian networks with the following example. Here's the setting: earthquakes and burglaries are things in the world; they are bad things. But suppose that they're independent; that kind of makes sense. Now, in your house you've installed an alarm system, which is going to detect both earthquakes and burglaries. So one day you wake up, and you hear the alarm go off. Okay, so you should be alarmed. But then you turn on the radio and you hear that there's actually an earthquake. How does that affect your beliefs about whether there was a burglary or not? There are three options: does it increase the probability of a burglary, does it decrease the probability of a burglary, or does it not change anything at all? So how many of you think that hearing the news about the earthquake on the radio increases the probability of a burglary? A few say it increases. How many of you say it decreases? Many of you say it decreases. How many say it doesn't change? Almost as many say it doesn't change. Okay, that's interesting. We'll answer this question, but keep thinking about it in the back of your head. One thing I'll say is that you shouldn't expect to necessarily find the right answer here just by intuiting things. One of the points of codifying things in a Bayesian network is that you don't leave anything up to vagueness.
There's actually a correct answer that we can derive. Okay, so let me talk about how to go about modeling this as a Bayesian network. With this concrete example, there are four steps. The first step is defining what the variables are. So what are the variables here? Yeah: burglary. Okay, so there is a burglary. Earthquake. Earthquake. And alarm. And alarm. Okay, great. These are the three things we don't know about that are mentioned. The second step is you draw some edges. These are going to be directed edges that correspond to notions of influence, and, if you want, causality. But causality is a more philosophical thing which we don't really need for this class, so I'll use the word anyway. So what causes what? Does the alarm cause burglaries? No, I think it's the other way around, right? Burglaries cause the alarm, and similarly earthquakes cause the alarm. And these two aren't connected; I said they're independent, so let's just leave that out. Okay. So now I have a directed acyclic graph that shows how all the variables are related, somehow. The third step is to define local conditional distributions. Now I'm going to go one step further and say what the probabilities of these variables are, because in the end, remember, I want to define a joint distribution over all the variables. So I'm going to define a local conditional distribution for each of these variables. Here I have p of b, p of e, and p of a given b and e. In general, a local conditional distribution is p of whatever that variable is, given its parents. The parents are the variables that directly point into it. So the parents of A are B and E; E has no parents, and B has no parents. Okay. So in particular, let me flesh this out a little bit more.
So what is p of b? p of b is a table that specifies only what's going on in this region of the space. So I have B, and P of B, and I just fill this out. What are the possible values of B? 0 and 1. So let's say the probability of a burglary is Epsilon. Epsilon generally denotes a small number, which you hope is the case here. And this must be 1 minus Epsilon, because it has to sum to 1. For simplicity, let's say that the probability of an earthquake is also Epsilon and 1 minus Epsilon. And then this one's a little bit more complicated. I'm going to write the parents B, E, and the variable itself, A, and look at the probability of A given B and E. Now I'm going to list out all eight possible combinations here: 0 0 0, 0 0 1, 0 1 0, 0 1 1, 1 0 0, 1 0 1, 1 1 0, 1 1 1. Okay? For each of these, I need to specify the probability. So, 0 0 0. And I should say that this alarm system you bought is really good: it detects earthquakes and burglaries perfectly. So if there's no burglary and no earthquake, then the probability of the alarm not going off should be 1, right? It's perfect. And this is the failure case, which is 0, because if there's no burglary and no earthquake, the alarm shouldn't be going off. I'm not going to bother you with all the details; you can fill in the rest. If there is a burglary or an earthquake, the alarm goes off for sure, so this should be a 1, this should be a 0, and this should be a 1. Something like that? Okay? So now I've defined the local conditional distributions. Remember, I'm not defining the joint distribution yet; I'm just zooming in on a particular variable and how it relates to its parents.
And you can think about it like this: you could have a million nodes, and each local distribution might only be touching a very small part. Okay, so finally, the fourth step is to define the joint distribution. This is the thing we're all after, right? What is the joint distribution over all three variables here? The joint distribution is going to be written with the blackboard P: P of B equals b, E equals e, A equals a, so random variables equal particular possible values, and this is defined to be the product of all the local conditional distributions: p of b, p of e, and p of a given b and e. Okay? So let me reveal the slide, which hopefully has the same content. One thing I'll point out is that there's a difference between the small p's and the big P. The small p's are local conditional distributions. These are things that you just define; there's no right or wrong there. You just define them, and they're just true. And then there's this big P, which is the joint distribution, which again is defined to be the product. From this joint distribution, you're going to read out things like marginals and conditionals, which might look like some of these local distributions, but for right now, think about them as distinct objects. Yeah, question? Can we find [inaudible]? So the question is, are we assuming B and E are independent here? Let's see, how do I answer that? Yes, in this model B and E are independent, and I'll show you a little further on how we can see that more clearly. Okay, so these are Bayesian networks. So what's the connection between this and factor graphs? Well, if you squint a little bit, you see that the right-hand side here is a product of things, and the left-hand side is this joint, global thing. So what does this look like?
It looks like weight equals product of factors, right? So let's go with that analogy, and it's actually much deeper than just an analogy. Let's draw this as an equivalent factor graph. For every Bayesian network, we can draw it as a factor graph. So here we have B, E, and A. Now, it's really important to note how the factors arise. There's a local conditional distribution, remember, for every variable, and that is a factor. So for every variable, there is a factor. It's tempting to look at these edges and draw factors on them, but that's wrong. Okay? Remember: one factor per variable. So this variable has a factor, which is p of b. This variable has a factor, which is p of e. And this variable has a factor, and what does it depend on, what is its scope? B and E and A, right? Again, a common mistake is to put two factors here, because it's really tempting. One way to think about it is that the parents are married and connected; that's why the parents get connected. I'm not making this up, but some people call this process moralization. Yeah? Can you use this to compute the probability of alarm given just an earthquake, or the probability of alarm given just a burglary? Yeah, so the question is, can you use this to compute the probability of alarm given earthquake alone or burglary alone? The answer is you can compute whatever you want, and I'll show you how to do that. Okay. So a single factor connects all the parents, one factor per variable, okay? Got it? All right. So the joint distribution over all the variables, remember, is the product of all the local conditional distributions, and just for reference, this is what it is. And now you can go and answer questions about this.
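The four steps can be made concrete in code. Here is a sketch of the alarm network in Python: the small p's are tables you simply write down (with the perfect alarm from the board), and the big P is defined as their product. The value of Epsilon is arbitrary here; 0.05 is the value used in the demo.

```python
eps = 0.05  # Epsilon: prior probability of a burglary / an earthquake

# Step 3: local conditional distributions. You just define these.
p_b = {1: eps, 0: 1 - eps}
p_e = {1: eps, 0: 1 - eps}
# Perfect alarm: a = 1 exactly when b or e is 1 (the eight-row table).
p_a = {(b, e, a): 1.0 if a == (b or e) else 0.0
       for b in (0, 1) for e in (0, 1) for a in (0, 1)}

# Step 4: the joint distribution is the product of the local
# conditional distributions.
def P(b, e, a):
    return p_b[b] * p_e[e] * p_a[(b, e, a)]

total = sum(P(b, e, a)
            for b in (0, 1) for e in (0, 1) for a in (0, 1))
print(total)        # 1.0 (up to floating point): it is a distribution
print(P(1, 0, 1))   # ≈ eps * (1 - eps) = 0.0475
```

The fact that the total is automatically 1 is not an accident; it is exactly the property of local conditional distributions discussed a bit later.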
This is kind of the fun part. I'm not going to go through the details of how this is done, but I'll show you the interface, what you would expect. So again, this is the definition of the alarm network using the same machinery as a factor graph, because it is a factor graph. First we're going to ask: what is the probability of B? That asks, in the absence of any information, is there a burglary or not? What do you think that should be? Epsilon here is 0.05. I think I heard 0.05; someone said that, okay? And indeed, the probability of a burglary is 0.05, which should be intuitive. Now suppose the alarm went off. What's the probability of a burglary now, P of B given A equals 1? Does it go up or down? It should go up, if your alarm is working. And indeed, we see that the probability of a burglary given alarm equals 1 is 0.51. Okay? And now the moment of truth: what happens if we condition on the fact that there's also an earthquake? Let's do this, and you get 0.05. So many of you were correct when you said that the probability of a burglary goes down. Intuitively, this makes sense from a phenomenon called explaining away. Explaining away happens when you have structures that look like this: suppose you have two causes positively influencing an effect. By positive influence, I mean that if you flip B from 0 to 1, then the probability of A goes up. And explaining away says that conditioning on one cause reduces the probability of the other one. Okay? At some level this makes sense, because A is driven by either B or E, and I don't know which one it is if I just heard the alarm go off. But each of these has very small probability.
So the moment I can see that one of the causes is true and explains the effect, I can revert back to my prior belief about the other one. Okay? Humans do this all the time when reasoning: when you're thinking about what the cause is and you find one cause, you discount all the other ones. Now, the thing that's kind of interesting here is that I did say that B and E are independent, which is also true. So this might have led people to think, well, it shouldn't change, because they're independent; why should the probability change? But the key thing is that when you condition on A, you actually change the independence structure of the model. This is why writing things down really precisely is helpful, to reconcile these seemingly contradictory intuitions that you might have. Okay, any questions about this? All right, let's move on. So we've talked about the alarm network. This is your first example of a small Bayesian network; hopefully you have an idea of the intuition behind it, and now I'm going to generalize it. The generalization shouldn't be surprising. In general, I have n random variables, usually denoted X_1 through X_n. A Bayesian network is a directed acyclic graph over these variables, and it defines a joint distribution over all the variables, P of X_1 through X_n. This is defined as a product of local conditional distributions, one for each node: the product over all n nodes of p of X_i given X parents of i, where this notation just means the values assigned to the parents of i. Okay, so this is a very general framework, just like factor graphs are a very general framework. But the key difference from factor graphs is that these factors aren't arbitrary: they are local conditional distributions. And what does that mean? It means all the factors satisfy this property.
If you pick the factor for the i-th node, p of X_i given its parents, and you sum over all the possible values that X_i can take on, you get 1. That's what it means to be a probability distribution, and this is true for every setting of the parents. This property has two implications, which I'll discuss: consistency of sub-Bayesian networks, and consistency of local conditional distributions. These properties are going to allow us to really take advantage of the probabilistic structure when we're doing inference. Okay, so the first one: suppose I have this alarm network, and suppose I'm interested in the marginal distribution of only B and E. I don't care about A. Remember, this is the joint distribution, and by the laws of probability, I can derive the marginal distribution. Now, the question is, what does this marginal distribution have to do with the Bayesian network, the graph here? Let's go through some algebra to find out. This is a sum over A, and by definition, the joint is just the product of all the local conditional distributions, as we just discussed. Now I notice that p of b and p of e don't depend on A, which means I can pull them out and push the summation in. That's just algebraic manipulation. And what is the remaining value? It's just 1, because of the previous slide, so I can drop it. Now I have p of b times p of e. And lo and behold, what is this? If you had just gone and defined a miniature Bayesian network over B and E, this is exactly what you would have written down. Okay, so that's kind of cool. The general idea here is that marginalizing out a leaf node yields the Bayesian network without that node. Marginalization produces the Bayesian network where you've just erased the leaf node along with its incoming edges.
All right, so in other words, I've turned what would have been an algebraic operation into a graphical one. Generally those are good moves, because it's much easier to think graphically and make large moves than to go through tons of algebra. Yeah? That first equals sign, it seems like it's from the probability [inaudible]. Yeah, so the question is, what about this first equals? What I mean there is "by the laws of probability." So it's not technically a definition; it follows from the axioms of probability. Thanks. Okay. So notice that in this world, B and E are independent. This is one way you can see that when you define the joint distribution, in that joint distribution the two variables B and E are independent. One thing to note: remember last time we talked about marginalization in factor graphs. What would that look like here? What happens if you did marginalization in this factor graph? Yeah, you just remove A. But does this factor disappear? No, it doesn't, because factor graphs don't know anything about this factor other than that it returns non-negative numbers, so you would have to hold onto it. The moral of the story is that if you convert to factor graphs too early, you might lose out on opportunities to really simplify. Whereas if you look at the factor graph of this one, there is no p of a given b and e. Just to go back here: factor graphs would create a factor which is the summation over A of p of a given b and e, and call that a factor. But we know, because these are local conditional distributions, that it's just 1, so you can drop it. Okay, so that's the first property.
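This first property can also be checked numerically on the alarm network: summing the joint over the leaf A leaves exactly p(b) times p(e), the joint of the sub-network with A erased. This is a self-contained sketch; the value of Epsilon is arbitrary.

```python
eps = 0.05
p_b = {1: eps, 0: 1 - eps}
p_e = {1: eps, 0: 1 - eps}
# Perfect alarm: p(a | b, e) puts all its mass on a = (b or e).
p_a = {(b, e, a): 1.0 if a == (b or e) else 0.0
       for b in (0, 1) for e in (0, 1) for a in (0, 1)}

def P(b, e, a):  # joint = product of local conditional distributions
    return p_b[b] * p_e[e] * p_a[(b, e, a)]

# Marginalize out the leaf A: since sum_a p(a | b, e) = 1, the factor
# drops, and what remains is the mini-network p(b) * p(e).
for b in (0, 1):
    for e in (0, 1):
        marginal = sum(P(b, e, a) for a in (0, 1))
        assert abs(marginal - p_b[b] * p_e[e]) < 1e-12
print("sum_a P(b, e, a) = p(b) * p(e): B and E are independent")
```

Note that a generic factor-graph view of the same model could not drop the summed-out factor, because it does not know the factor sums to 1; that is exactly the opportunity the lecture says you lose by converting too early.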
Just to summarize: if you marginalize out leaf nodes, you get the Bayesian network obtained by just dropping them from the graph. The second property is consistency of local conditionals. As I alluded to before, if you have a probability of D given A and B, there are two versions of it that you might be thinking about. One is the local conditional distribution, which, again, you just define as such. And then there's the corresponding quantity that comes about from probabilistic inference: that quantity is derived by taking the definitions, forming the joint distribution, and then using the laws of probability to derive that particular conditional. And this property says: don't worry about it, the two are equal. So it means that you can intuitively have just one notion of probability in your head. But I want to make explicit that this doesn't necessarily come for free; you have to verify that it's true. I'm not going to go through the verification step; it's in the notes and the slides, but I'll just state it as such. Okay. So let's do another example, just to familiarize ourselves with Bayesian networks a little bit more. The question here is: suppose you wake up and you're coughing and you have itchy eyes, and you're wondering, do I have a cold or do I have allergies? Okay, so let's follow the four-step procedure to define this Bayesian network. Step 1: what are the variables here? There's coughing, let's denote that as H, and itchy eyes, I, and then cold, C, and allergies, A. So, four random variables. How should I connect these things up? Yeah, so H and I should be connected to C: if you have a cold, you probably have a cough, and you probably have itchy eyes. And here you tap into your medical knowledge. And what was that?
Yeah, so generally, I'm no doctor, but let's just assume for now that colds don't really cause the itchy eyes. It's probably not true, but let's just pretend it is, to make the network a little bit more interesting. Okay, so those are the edges, and now I have to specify local conditional distributions over all of these. What are the local conditional distributions? I have p of c and p of a, remember, one for every node, and then p of h given c, and over here p of i given c and a, right? The probability of a node given its parents. And then finally I have the joint distribution, which is the probability of C, A, H, I, and this is by definition just the product of everything. For this example, I'm not going to go through and define the actual tables, because that would take too much time, but I'll do it in this demo. So this is the Bayesian network that I just drew on the board, and this is its associated factor graph; remember, one factor per node. Yeah? The PowerPoint switches the allergies and cold. C, A... oh, yeah, you're right. I guess that makes sense. Which one makes sense? [inaudible] Yeah, okay, I got a little bit confused. So it should be like this, and then I have to adjust things: i given a, and h given c and a, okay? Just for the record, I'll make this h given c and a, and i given a, okay? That wasn't too bad. Thanks for catching that. Okay. So this is the factor graph, and let me show you this demo. You can click on this and you can see the Bayesian network and the factor graph. And to answer this question... what was the question? The question was, if you're coughing and have itchy eyes, do you have a cold or allergies? So I condition on cough equals 1 and itchy eyes equals 1, and I ask for the probability of a cold. Okay.
If you work it out, you see that the probability of a cold is 0.13. Why? Okay, I guess I didn't tell you the actual prior probabilities. Say the probability of a cold is 0.1 and the probability of allergies is 0.2. Then there's a kind of noisy-or, where if you have a cold or allergies you end up coughing, and if you have allergies you have itchy eyes with probability 0.9. What happened here is that when you condition on coughing and itchy eyes, there's this interesting explaining away happening: even though you didn't observe A, you observed evidence of A, and that's enough to lower the probability that you have a cold. So this example shows something a little more subtle: how information can propagate along a Bayesian network in ways that, if you tried to reason purely intuitively, you probably wouldn't be able to. Okay, let me summarize what we've done so far. We've introduced Bayesian networks, where random variables capture the state of the world, and edges between those variables represent dependencies. Based on those dependencies, we define local conditional distributions; multiply all the local conditional distributions together and you get a joint distribution. With that joint distribution, by the laws of probability, you can ask probabilistic inference queries, questions about the world given evidence. We saw that this captures interesting reasoning patterns such as explaining away. And finally, all of this can be brought under the umbrella of the factor graph interpretation, which we'll see is very useful for actually doing probabilistic inference in general, in a bit. Okay.
Any questions before I move on to the next section? Okay. So now I'm going to talk about probabilistic programs. This is going to be a bit of a whirlwind tour that will hopefully give you different perspectives and open your eyes to the possibilities of Bayesian networks. Let's look at the alarm network again. I can write it on the board as a product of all the local conditional probabilities, basically using math, or I can think about it as a probabilistic program. What I'll write down is a very simple program with three lines, one for every variable. The first line is B drawn from Bernoulli(epsilon); this notation just means B is set to a random value with distribution Bernoulli(epsilon). Same with the earthquake E. And finally I set A equals B or E. The idea is that a probabilistic program is simply a program with randomness in it that, when run, sets the random variables. I think this is a really useful way to think about Bayesian networks. To be very concrete, you can think of Bernoulli(epsilon) as a little Python function that returns true with probability epsilon: random() gives a number between 0 and 1, so random() < epsilon holds with probability epsilon. Any questions about what this is doing? Yeah? [student:] Why does the randomness help rather than [inaudible]? So the question is, why does randomness help? The reason is that I want this program to put a distribution over possible assignments. Every time I run the program, it produces a different assignment, and the distribution over those assignments is the distribution I'm defining. So this is an interesting philosophical point.
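To make the three-line alarm program concrete, here is a minimal Python sketch. The structure (B and E Bernoulli, A = B or E) is from the lecture; the function names are just for illustration.

```python
import random

def bernoulli(epsilon):
    # The Bernoulli(epsilon) primitive: random() is uniform on [0, 1),
    # so this returns True with probability epsilon.
    return random.random() < epsilon

def alarm_program(epsilon=0.05):
    # One run of the probabilistic program: sample B and E independently,
    # then set A = B or E deterministically.
    b = bernoulli(epsilon)  # burglary
    e = bernoulli(epsilon)  # earthquake
    a = b or e              # alarm
    return b, e, a
```

Running `alarm_program` many times produces samples whose empirical frequencies approximate the joint distribution the network defines.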
Normally you write programs with the intention of running them to do something useful. But here the program is just an artifact to define a distribution. Hopefully this becomes clearer as I go through more examples. Yeah? [student:] If you want to define some distribution, can you just hard-code the table instead of writing this program? So the question is, why not just define a table directly? The intention here again is not to run this program, because it's not an efficient way to do probabilistic inference; it's more of a metaphor, a tool, to help you get more intuition about probabilistic programs and Bayesian networks. Hopefully we can come back to this question after a few more examples. So here's a more interesting probabilistic program. Suppose you're doing object tracking, and you define a program that starts with X_0 = (0, 0), so the initial location is at the origin. Then for every time step (I'm writing the program in pseudocode here), with probability alpha I set X_i = X_{i-1} + (1, 0), so I'm going to the right, and with probability 1 minus alpha I'm going down. Now this program induces a particular Bayesian network structure where each X_i is connected only to X_{i-1}. What I'm trying to get you to think about is that there are multiple ways of thinking about the same object, and when you can internalize all of them, you get a deeper understanding of what you're dealing with. We have the probabilistic viewpoint.
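The object-tracking pseudocode just described can be sketched directly. The right/down step structure and the alpha parameter are from the lecture; taking "down" to mean +1 on the second coordinate is an assumption of this sketch.

```python
import random

def sample_trajectory(n=10, alpha=0.5):
    # Run the object-tracking program once: start at the origin and, at each
    # of n steps, go right with probability alpha, otherwise go down.
    x = [(0, 0)]
    for _ in range(n):
        px, py = x[-1]
        if random.random() < alpha:
            x.append((px + 1, py))  # right
        else:
            x.append((px, py + 1))  # down
    return x
```

Each call returns one assignment of X_0 through X_n; the randomness across calls is exactly the distribution the program defines.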
You can look at the tables, you have your equations, you have the graph, and now I'm giving you an additional tool: programs. So just for fun, you can actually run this program. Again, this is not what you would normally do, but I can run it anyway. Every time I hit Enter, it gives a different trajectory. This is a way to visualize the distribution over X_1 through X_n, however many red squares there are. If I change alpha, the distributions get skewed to one side or the other. So that's the distribution over assignments. Okay, so what does probabilistic inference look like in this setting? Remember what probabilistic inference is: I condition on some piece of evidence and ask for the distribution over some other set of variables. In this case I condition on the fact that I spotted the object at (8, 2) at time step 10, and I'm interested in where it could have been before that. So what I'll do is run the former program and only keep, and show, those trajectories where X_10 equals (8, 2). If I do that, so this is (8, 2), I see that the set of possible trajectories looks like this. This is the distribution over trajectories given X_10 = (8, 2). What I'm trying to get you to think about is, for a Bayesian network or probabilistic program: what is the distribution? You can visualize a distribution by looking at samples from it. That's another way to think about it. Because, think about it like this: suppose I tell you I have a distribution over images. How do you actually get a hold of that or understand it?
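The trajectory-filtering procedure just described (rerun the program, keep only runs with X_10 = (8, 2)) is rejection sampling. A minimal sketch, reusing the trajectory program from before:

```python
import random

def sample_trajectory(n=10, alpha=0.5):
    # Same object-tracking program as before: right w.p. alpha, else down.
    x = [(0, 0)]
    for _ in range(n):
        px, py = x[-1]
        step = (1, 0) if random.random() < alpha else (0, 1)
        x.append((px + step[0], py + step[1]))
    return x

def sample_given_evidence(target=(8, 2), n=10, alpha=0.5, max_tries=100000):
    # Rejection sampling: rerun the program and keep only trajectories
    # consistent with the evidence X_n == target. The first accepted run is
    # a draw from the conditional distribution given that evidence.
    for _ in range(max_tries):
        traj = sample_trajectory(n, alpha)
        if traj[-1] == target:
            return traj
    return None  # evidence too unlikely to hit within max_tries
```

This is not an efficient inference algorithm, but it makes the conditioning operation very tangible.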
Well, probably the easiest way is to draw samples from it and look at the types of images you get. Question? [student:] Is this a way to specify a joint distribution? So the question is, is this a way of specifying a joint distribution, by which I guess you mean probabilistic programming in general. Yes: every probabilistic program specifies a joint distribution over the random variables set in that program. And vice versa: if I have a Bayesian network, I can write down a probabilistic program. One thing that will hopefully become clear is that the reason to think in terms of programs is that you inherit all the nice properties of programs, like the ability to define functions, or even have recursion; you can do a lot of fancy stuff with programs that would otherwise be hard. Another way to think about Bayesian networks is that you're basically writing assembly code: for every variable you specify its value. But if you have a million variables, it's sometimes useful to be able to structure your code. We'll see that over the next few examples. Okay, so this is going to be a march through around seven or so examples, and I just want to give you a flavor of the types of probabilistic programs we're talking about. The first one is called a Markov model; and whenever I say probabilistic program, think Bayesian networks or generalizations of them. The Markov model has a lot of applications in modeling language or time series, and the program works as follows: for every position i from 1 through n, I generate a particular word X_i given the previous word. Okay.
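The Markov model just described, each word generated given only the previous word, is a two-line loop over a transition table. The words and probabilities below are invented purely for illustration.

```python
import random

# Toy bigram table p(x_i | x_{i-1}).
TRANSITIONS = {
    "<start>": [("the", 0.6), ("a", 0.4)],
    "the":     [("cat", 0.5), ("dog", 0.5)],
    "a":       [("cat", 0.3), ("dog", 0.7)],
    "cat":     [("sat", 1.0)],
    "dog":     [("ran", 1.0)],
    "sat":     [("<end>", 1.0)],
    "ran":     [("<end>", 1.0)],
}

def sample_sentence(n=4):
    # Generate X_1, ..., X_n, each conditioned only on its predecessor.
    prev, words = "<start>", []
    for _ in range(n):
        choices, probs = zip(*TRANSITIONS[prev])
        prev = random.choices(choices, weights=probs)[0]
        words.append(prev)
    return words
```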
So this happens to be the same type of program as the one for object tracking, with the same Bayesian network structure. Here's another one, the hidden Markov model, which was a very popular model used for all sorts of things, notably speech recognition before the rise of deep learning. The idea is that for every time step t = 1 to T, I generate an object location H_t given the previous H_{t-1}. So this part looks just like a Markov model. But the reason it's called a hidden Markov model is that I'm not actually going to observe H_t; instead I observe sensor readings E_t at each time step t, given the hidden location. So this is what a hidden Markov model looks like: a sequence of object locations, which I don't observe, and sensor readings, which I do observe, depending respectively on those object locations. As a convention, whenever I shade a variable, that means I observe it; if it's not shaded, I don't observe it. This program defines a joint distribution over all of these variables, and now you can do probabilistic inference. The most common query here is: given the sensor readings, where is the object? That's something we've already been exposed to through the lens of factor graphs, but this is again another way to think about it. So now, with this programming metaphor, you can actually do more complicated things in a very succinct way.
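The hidden Markov model's generative story can be sketched as a program too. The transition (stay or step +1 on a line) and the +/-1 sensor noise are stand-in distributions chosen for illustration, not the lecture's exact numbers.

```python
import random

def sample_hmm(T=5, alpha=0.8, noise=0.1):
    # HMM sketch: hidden locations H_1..H_T evolve as a Markov chain
    # (p(h_t | h_{t-1}): step +1 w.p. alpha, else stay), and each observed
    # sensor reading E_t is H_t corrupted by +/-1 with probability `noise`.
    h, e = [], []
    loc = 0
    for _ in range(T):
        loc += 1 if random.random() < alpha else 0                              # transition
        obs = loc + (random.choice([-1, 1]) if random.random() < noise else 0)  # emission
        h.append(loc)
        e.append(obs)
    return h, e
```

In practice you would observe only the `e` list and infer the hidden `h`, which is exactly the query described above.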
To describe multiple object tracking, you can think of there being two objects, A and B, and at each time step, for every object, I generate a location; so there are two independent Markov chains running. But at each time step I observe only one sensor reading, and that reading is some function of the actual locations of both objects at that time step. So hopefully you can see a bit of the advantage of thinking in terms of a program: I can write a very simple four-line program that precisely nails down what the actual model is. This factorial HMM, as it's called, is something you'll be exploring in the car assignment. Here's another example, usually used for classification, called naive Bayes; some of you might have heard of it. The program looks like this: you first generate a label Y. Suppose you generate "travel". Then, for every position in your document, you generate a word given that label; so if you generated "travel", you might generate words like "beach" and "Paris". That again specifies a distribution over all the variables. What are you typically interested in? For classification, you're given the words and you want to go back and figure out what the class is: given a text document, what is the label? Here's a fancier model of documents called latent Dirichlet allocation. Here, instead of generating a single topic, I generate a distribution over topics. This is getting a little meta, because this random variable is itself a distribution, but let's not worry too much about that.
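Backing up to the naive Bayes example for a moment: its generative story (label first, then words independently given the label) is also a short program. The labels, vocabulary, and probabilities below are invented for illustration.

```python
import random

LABEL_PRIOR = [("travel", 0.5), ("politics", 0.5)]
WORD_GIVEN_LABEL = {
    "travel":   [("beach", 0.4), ("Paris", 0.4), ("vote", 0.2)],
    "politics": [("beach", 0.1), ("Paris", 0.2), ("vote", 0.7)],
}

def sample_document(n_words=3):
    # Naive Bayes as a generative program: sample the label Y, then sample
    # each word independently from p(word | Y).
    labels, lp = zip(*LABEL_PRIOR)
    y = random.choices(labels, weights=lp)[0]
    vocab, wp = zip(*WORD_GIVEN_LABEL[y])
    words = [random.choices(vocab, weights=wp)[0] for _ in range(n_words)]
    return y, words
```

Classification then runs this story in reverse: given the words, infer the posterior over Y.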
So this is a distribution, and for every position I first generate a topic, like travel or Europe, and then for that topic I generate a word given the topic. This lets you model documents that talk about multiple things, for example traveling in Europe. This is also a very popular model: given a collection of documents, it tries to understand the latent structure inside them. Here's one that's a generalization of the medical diagnostics example on the board. In general, say you have a bunch of diseases. You generate the activity of each disease in a patient according to some prior distribution. Then, for every symptom you might observe, or any lab test, you have the probability of some outcome of that symptom given the diseases. Of course, the probabilistic inference question is: if the patient has particular symptoms, what diseases or problems does he or she have? Okay, I think this is the final example. Here is a social network analysis example. You have a set of people, and each person has a type, maybe politician or scientist. For every pair of people, they can either interact or not; they might be connected or not connected in a social network. In the end, you're given a social network of connectivity and you're asked: what types of people are there? Generally, you observe some graph and you want to understand what kind of features there are, a concrete way of summarizing the types of people. This is called a stochastic block model, and there are fancier models based on a similar idea.
So that was a very quick overview of different types of probabilistic programs, or Bayesian networks. The point is that many, many models in the literature, many generative models, can just be written down as a probabilistic program, or equivalently a Bayesian network. And all of them have the same basic structure: if you look carefully, there is some set of variables H which you don't observe, and that generates, or causes, a set of variables E which you do observe. So the mindset when you're designing Bayesian networks is that you're coming up with stories of how the data you observe was generated from the quantities of interest, the output. This is maybe counter-intuitive. For those of you used to thinking about ordinary classification, it's the opposite: there, you start with the input and think about what to do to the input to get to a point where you can classify it. Bayesian networks go the other way. You start with the output, the structure you're interested in, which is presumably a more platonic, cleaner idea, and then you describe how that clean structure gives rise to the messy data you observe. Question? [student:] Can you explain again why it's called the output? Right, so why is this called the output? I'm borrowing the input/output terminology from when we talked about classification, where you go from input to output. The input is what you're given, and the output is what you're producing. And in a Bayesian network, you first define the model going, in a sense, from output to input.
So it's the opposite of what you would normally do. Then there's a second stage, probabilistic inference, which reverses that: you go from the observations, which are the input, to the output. Okay? Any other questions about this? All right, so now let's talk about inference. This is also going to be the topic of the next lecture, but I'm going to start playing around with it a little. Remember what probabilistic inference is: we're given a Bayesian network, which defines some joint distribution. We're also given some setting of variables, the evidence; for example, I saw that the alarm went off. And I'm interested in a subset of the variables. What I'm trying to produce is the probability of some query variables conditioned on the evidence, and what this really means is that I want this for all values of the query variables. For example: given that I'm coughing and have itchy eyes, do I have a cold? That's a probabilistic inference query. Let's start with a simple example. Suppose I have this Markov model and I ask: what is the probability of X_3 given X_2 = 5? So I condition on X_2 = 5 and I'm interested in X_3. At this point you already have the tools to do this, and I'm going to show you how to just grind through the calculations, and then I'll show you an easier way. If you were just shown this right now, this is probably what you would do, which might be a little tedious: by the laws of probability, this conditional is equal to the joint over the marginal. That's just the definition of conditional probability. And notice that I'm only interested in distributions over X_3, so from that perspective the denominator is just a constant.
It doesn't depend on X_3. So what I'll write is "proportional to", which means the actual value is the thing on the right-hand side times some constant I don't care about. The reason I can do this is that I know the left-hand side is a distribution, so whatever I get on the right-hand side, if it sums to 6 or something, I just divide by 6 and get a distribution. The proportional-to sign will save you a lot of work, but you have to use it carefully, otherwise you can get wrong answers. Okay, so let's expand this. This is the marginal distribution of X_2 and X_3. I can write it in terms of the joint, where I sum over the variables I don't care about; again, laws of probability. Then, by the definition of the Bayesian network, the joint distribution is the product of local conditional distributions. I have a lowercase p now because these are the local distributions. Now some algebraic manipulation. Notice that this stuff doesn't depend on X_4, so I can push the summation over X_4 over here; and only the first two terms depend on X_1, so I can group them and apply the sum over X_1 there. Then look over here: what does this sum to? One. So I can drop it. And what about this, does it depend on X_3? Nope, so I can drop that too, and I get p of X_3 given X_2 = 5. This shouldn't surprise anyone: remember the slide about consistency of local conditional distributions, which said these two should be equal; this is just one way of verifying that for this example. So you can do this, and for this one it's actually not that bad.
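The chain of manipulations just described can be written out compactly, assuming the chain X_1 to X_2 to X_3 to X_4 and writing lowercase p for the local conditionals:

```latex
\begin{aligned}
\mathbb{P}(X_3 \mid X_2 = 5)
  &\propto \sum_{x_1}\sum_{x_4} p(x_1)\, p(5 \mid x_1)\, p(x_3 \mid 5)\, p(x_4 \mid x_3) \\
  &= \underbrace{\Bigl(\sum_{x_1} p(x_1)\, p(5 \mid x_1)\Bigr)}_{\text{constant in } x_3}
     \; p(x_3 \mid 5) \;
     \underbrace{\sum_{x_4} p(x_4 \mid x_3)}_{=\,1} \\
  &\propto p(x_3 \mid 5).
\end{aligned}
```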
That's especially true when you already know the answer. But I promise you there are situations where you definitely don't want to grind through all the math, because you can fill up ten pages of equations. So I'm going to show you a faster way. This is going to be a five-step procedure, but in many cases not all the steps are necessary. Okay, let me erase this. The key idea is to use the structure of the Bayesian network, and factor graphs, to simplify these operations. So let's start. We have X_1, X_2, X_3, X_4, just four variables. I'm conditioning on X_2, which takes on the value 5, and I'm interested in this query variable, X_3. The first thing I want to do is remove as many variables as I can, because that will simplify my life. So step 1: remove, or marginalize out, the non-ancestors of the query variables and the variables being conditioned on. Anything that's upstream I keep for now; anything that's downstream I can let go. So what can I remove here? X_4, right. I can graphically just remove X_4, and that corresponds, on the slide, to the fact that this sum over X_4 equals 1; but I've done it graphically, which hopefully is more intuitive. The second step is to convert to a factor graph, because step 1 has already exploited the properties of Bayesian networks.
After step 1, it's simpler to think in terms of a factor graph, where the factors are explicitly just arbitrary functions and you don't worry about which way the conditioning goes. It's really easy to get confused by Bayesian networks, wondering, what's conditioned on here, what's a marginal distribution; factor graphs, by removing the directionality and some of the semantics, make things a little easier. So I'll convert this into a factor graph. Let me draw it down here. Here's the factor graph, with the probability of X_1, the probability of X_2 given X_1, and so on. This might look like more work right now, because I'm making things explicit, but you can do a lot of this in your head once you get the hang of it. Remember, every variable is associated with a factor. Okay, so now step 3: condition on the evidence. I'm conditioning on X_2 = 5. Remember from last week's lecture what conditioning does: it removes the variable and changes the factors to be set to the value that the variable takes on. Yeah? [student:] Can [inaudible] X_4 in this? Sorry, yeah, we shouldn't have X_4, good point. [student:] Do we still have the factor on that other side? This factor should be there; this is the factor graph corresponding to that. Okay, so I'm conditioning on X_2, so I wipe X_2 from the face of the earth, and I change this factor to a partial evaluation where I plug in X_2 = 5; this factor becomes X_2 = 5 given X_1. So this connection is good. Now, step 4: I can marginalize out the disconnected components.
Remember, I care about X_3. This stuff is disconnected, so I don't care about it; I just cross it out. That operation corresponds to the fact that this thing over here can simply be dropped, because it's not related to X_3; it's just a constant. Finally, the fifth step is to actually do work. What does that mean? You might not be lucky enough to be left with just a single variable and one factor where that factor is simply the answer. In that case you actually have to compute: do the marginalization operations we saw last week. In this case we're fortunate that this factor actually represents a distribution over X_3, so it's just the answer to the problem. I'll go through other examples where it's not as obvious. So this is the general strategy outlined on the board. Once you get good at this, steps 1 and 4 should be very visual, because you can just see that everything downstream doesn't matter; and when you see the conditioning, you can automatically ignore things and jump directly to step 5. That's the idea; I'm just doing things more explicitly on the board so you can see where everything comes from. Okay, let me do another example: the alarm. Here I have this Bayesian network, and suppose I'm interested in the probability of B. This should be an easy one. Start with step 1, marginalize out the non-ancestors. Which are the non-ancestors of B? A and E, right. So I just remove them from the face of the earth, and I'm left with the single variable B.
And obviously it has the factor p of B, and then I'm done. Okay, so this next one is maybe a little more complicated: the probability of burglary given A = 1. Let's go through this example; I'll try to do it quickly. I have B, E, and A. Step 1: marginalize out non-ancestors. What am I interested in? The probability of B given A = 1, so I care about A and B. What are the non-ancestors of these variables? There are none: E is an ancestor of A, so I can't remove it. I can't do anything there, too bad. Step 2: convert to a factor graph. We've done this before: the probability of B, moralize the parents, so this is the probability of A given B and E, and this is the probability of E. Step 3: condition on the evidence. I condition on A = 1, so I remove A and change this factor to A = 1 given B and E. Step 4: marginalize out anything disconnected. Nothing is disconnected, so I can't do anything. And last, I have to do actual work. What does actual work mean here? I'm interested in the probability of B, so I need to marginalize out E, and now I have to do it the hard way, based on last lecture. What happens when I marginalize out E? I create a new factor. Let me replicate this down here so it doesn't get too confusing. I create a new factor, call it f of B; once I marginalize out E, the only other variable is B. And this is going to be the product of all the factors that touch the variable I'm marginalizing out.
The only difference between this and what we were doing last time is that before we had a max, because we were computing maximum weight assignments, and here I have a sum, because we're marginalizing probabilities. So this is a summation over E. And then the final query is just the product of those two things. I'm not going to have time to drill down into expanding these values, but if you plug the epsilons in, you'll find that the probability of B = 1 given A = 1 is 1 over (2 minus epsilon), which is about 0.51 for epsilon = 0.05. You can look at the slides to see how the calculation is done, but it's just algebra. There's another example which I'll defer to section. I think with all of this you just need to do some practice and get comfortable doing these operations. To summarize: Bayesian networks are a way of defining models that allow you to specify locally and optimize globally. Once you have a Bayesian network, you can do probabilistic inference, where you condition on evidence and query the variables of interest. Next time we'll focus on step 5, and hopefully not do things completely manually but more automatically. Okay, that's it.
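As a sanity check on the alarm-network answer above, P(B = 1 | A = 1) = 1/(2 - epsilon), here is a brute-force enumeration sketch; this is the "do actual work" step done exhaustively, which is fine for a network this tiny.

```python
from itertools import product

def p_burglary_given_alarm(epsilon):
    # Exact inference by enumerating all assignments of the alarm network:
    # B ~ Bernoulli(epsilon), E ~ Bernoulli(epsilon), A = B or E.
    def prior(v):
        return epsilon if v else 1.0 - epsilon
    num = den = 0.0
    for b, e in product([0, 1], repeat=2):
        a = int(b or e)
        w = prior(b) * prior(e)  # joint weight p(b) p(e); A is deterministic
        if a == 1:
            den += w             # accumulates P(A = 1)
            if b == 1:
                num += w         # accumulates P(B = 1, A = 1)
    return num / den
```

For epsilon = 0.05 this gives 1/(2 - 0.05), roughly 0.513, matching the value quoted in the lecture.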
Markov Decision Processes 1: Value Iteration (Stanford CS221: Artificial Intelligence, Principles and Techniques, Autumn 2019)
You could, you could bike and you can get a flat tire and you don't really know that right, you have to kind of take that into account. If you're driving, there could be traffic. Uh, if you are taking the Caltrain, there are all sorts of delays with the Caltrain, uh, and all sorts of other uncertainties that exist in the world and, and you need to think about those. So it's not just a pure search problem where you pick your route and then you just go with it, right, there are, there are things that can happen, uh, that can affect your decision. So, and that kind of takes us to Markov decision processes. We talked about search problems, where everything was deterministic, and now you're talking about this next class of state-based functions, which are Markov decision processes. And the idea of it is, you take actions but you might not actually end up where you expected to because there is this nature around you and there's this world around you that's going to be uncertain and do stuff that you didn't expect, okay. So, so, so far we've talked about search problems. The idea of it is you start with a state and then you take an action and you deterministically end up in a new state. If you remember the successor function, successor of S and A would always give us S prime, and we would deterministically end up in S prime. So if you have like that graph up there, if you start in S and you decide to take this action one, you're going to end up in A, like, there's no other option. But that's how you're gonna end up in it, okay. Uh, and the solution to these search problems are these paths. So we have the sequence of actions because I know if I, if I take action one, and action three, and action two, I know like what is the path that I'm going to end up at and that would be ideal, okay. So when we think about Markov decision processes, that is the setting where we have uncertainty in the world and we need to take that into account. 
So, so the idea of it is, you start in a state, you decide to take an action but then you can randomly end up in different states. You can randomly end up in S_1 prime or S_2 prime. And again, because there's just so many other things that are happening in the world and you need to, you need to worry about that randomness and make decisions based on that, okay. And, and this actually comes up pretty much like every run- every application. So, uh, this comes up in robotics. So for example, if you have a robot that wants to go and pick up an object, you decide on your strategy, everything is great, but like when it comes to actually moving the robot and getting the robot to do the task like the actuators can fail, or you might have all sorts of obstacles around you that you didn't think about. So there is uncertainty about the environment or uncertainty about your model like your actuators that, that you didn't necessarily think about and in reality, they are affecting your decisions and where you're ending up at. This comes up in other settings like resource allocation. So in resource allocation, maybe you're deciding what to produce, what is the product you would want to produce and, and that kind of depends on what is the customer demand and, and you might not have a good model of that and, and that's uncertain, right? It really depends on what, what products customers want and what they don't. And you might have a model but it's not gonna be like accurate and, and you need, you need to do resource allocation under those assumptions of uncertainty about the world. Um, similar thing is in agriculture. So for example, you want to decide, uh, what sort of, uh, what, what to plant but, but again, you might not be sure about the weather, if it's gonna rain or if the, if the, the crops are going to yield or not. 
So there's a lot of uncertainty in these decisions that we make and, and they make these problems to, to go beyond search problems and become problems where, where we have uncertainty and we need to make decisions under uncertainty. Okay? All right. So let's take another example. So this is a volcano crossing example. So, so we have an island and we're on one side of the island and what we wanna do, so we are in that black square over there. And what we wanna do is, you want to go from this black square to this side of the island and here we have the scenic view and that's gonna give us a lot of reward and happiness. So, so my goal is to go from one side of the island to the other side of the island. But the caveat here is that there's this volcano in the middle of the island that I need to actually pass, okay. So, and, and if I fall into the volcano, I'm going to get a minus 50 reward, more like minus infinity. But, but for this example like imagine you are getting a minus 50 reward if, if you fall into the volcano, okay. So. All right. So, so, if I have this link here in this side, so if my slip probability is 0 which is- I'm sure I'm not gonna fall into the volcano, should I cross the island? No or yes? Well, I should cross the island uh, because I'm not gonna fall, right, like I'm, I'm not gonna fall into that minus 50. Uh, slip probability is 0, I'll get to my 20 reward, everything will be great, okay. But the thing is like we've been talking about how the world is, is stochastic and slip probability is not gonna be 0. Maybe, maybe it's 10%. So if there's 10% chance of falling to, into the volcano, how many of you would, would still cross the island? Good number, yeah. So, um, the optimal solution is actually shown by these arrows here. And yes, the optimal solution is still to cross the island. 
Like your value here, we're going to talk about all these terms, but the value here is basically the value you're gonna get, uh, at the beginning like state which is the, kind of- we'll, we'll talk about it, it's the expected utility that you're gonna get. It's gonna go down because there is some probability that you're going to fall into a volcano, but still like the best thing to do is to cross the island. How about 20%? How many of you would do it with 20%? Some number of people, [LAUGHTER] it's less. Um, still turns out that the optimal strategy is to cross. 30% percent? One person. [LAUGHTER] So with 30%, that's actually the point that you kind of you'd rather not, not cross because there's this volcano and then with a large probability you could, you could fall into the volcano and the value is going to go down. Okay. So these are the types of problems we're gonna, we're gonna work with. Yes. The value like with respect to two because two is like what you can do with them. So two is like the value- the reward that you are going to get at, at that state, and then value you compute that you propagated back. We'll talk about that in details on, on how to compute the value, [NOISE] okay? [NOISE] All right. Okay. So that was just an example. So, so that was an example of a Markov Decision Process. What we wanna do in this lecture, is we are going to, like, again, model these, er, types of systems as Markov decision processes, then you are going to talk about inference type algorithms. So how do we do inference? How do we come up with this best strategy path? Um, and in the middle, I'm going to talk about policy evaluation, which is not an inference algorithm but it's kind of a step towards it. And it's basically this idea, if someone tells me this is a policy, can I evaluate how good it is? And then we'll talk about value iteration which tries to figure out what is the best policy that I can take, okay? So that's the plan for today. 
Then next lecture we're going to talk about reinforcement learning where we don't actually know what the reward is, and we don't know what the- where the transitions are. Uh, so, so that's kind of the learning part of- part of these, er, MDP lectures. So Rita is going to actually do the- do the lecture next, next- on, on Wednesday, right? Okay. So let's get into- let's get into Markov decision processes. So we have a bunch of examples throughout this lecture, so this is kind of another example. So all right so actually I do need volunteers for this. So in this example, uh, we have a bunch of rounds, and the idea is you can at any point in time, you can choose two actions. You can either stay or you can quit, okay? If you decide to quit, [NOISE] I'm going to give you $10, I'm, uh, actually I'm not going to give you $10, but imagine I'm gonna give you $10 [NOISE], and then we'll end the game, okay? And then if you decide to stay, then you're gonna get $4 and then I'll roll the dice. If I get one or two, we'll end the game [NOISE]. Otherwise, you're going to continue to the next round, and you can decide again, okay? So who wants to play with this? Okay. All right. Volunteer. Do you want to stay or quit? Quit. [LAUGHTER] [LAUGHTER] so that was easy. You got your $10. [LAUGHTER] Does anyone else want to play? Stay, stay again. Oh, you've got 8, $8. Sorry. [LAUGHTER]. The dice is still. Um, so you kind of get the idea here, right? So, so you have these actions and then with one of them, like if you decide to quit, you deterministically you will get your $10 and you're done. Uh, with the other one, it's, it's probabilistic and you kind of wanna see which one is better and what, what would be the best policy to take in this setting. So we'll come back to this question. We will formalize this, and, and we'll go over this. I have a question. Is like, I think I see a similar example. Is it better to always, like, just continue once and then quit? 
Like, isn't it better to switch or? So when, when not. Okay so, so then you need to actually compute what is the- Yeah. -expected utility, right? So- and that's what we wanna do, right? So, so [NOISE] you might say, "Oh, I wanna, I wanna stay and then I get my $4, and then I want to quit and then I get 14, and maybe that is the way to go. Um, that could be a strategy, but for doing that, right? Like we are going to actually talk about that. For doing that, we are going to define what would be the optimal policy. One other thing that, uh, for this particular problem, you're going to keep in mind is, I'll, I'll talk about it when, when I define a policy. But, but the policy the way we, we define it is it's a function of state. So if you decide to stay, that is your policy. If you decide to not stay, that is your policy. Like, you're not allowing switching right now. Like, as I talk about this later in the lecture. But, but I'll come back to this problem, okay? So if you- if you decide that your policy, the thing you want to do is to just stay. Uh, keep staying, this is the probability of, like, the total rewards that you are gonna get. So you're gonna get four with some probability. And then if you're lucky, you're gonna get 8. And then even if you're luckier, you're gonna get 12, and if you're luckier, you're gonna get 16. But, but the probabilities are going to come down pretty much like really quickly. So the thing we care about in this setting, is, is the expected utility, right? In expectation, like if I- if I- if I run this, and if I average all of these possible paths that I can do, what would be the value that I get? And for this particular problem, it turns out that in expectation if you decide to stay, you should get 12. So, so you got really unlucky that you got 8. But [LAUGHTER], but in general, in expectation, you should decide to stay, okay? 
And, and we actually want to spend a little bit of time in this lecture thinking about how we get that 12, and and how to go about computing this expected utility. And, and based on that, how to decide what policy to use, right? Okay. And then if you decide to, to quit, then, then expected utility there is kind of obvious, right? Because that, that, you're quitting and that's with probability of 1 you're getting $10, so you're just gonna get $10 and that is the expected utility of quitting. Yes. [inaudible]. [NOISE] Uh, [NOISE] so, so when you- when I say- when you roll a die, I said if you get one or two- You stay. You, you, you, stay, yeah. And then if you get the other, so the two-thirds of it, you continue. So, so it's a one-third, two-third comes from there, okay? All right. I'll, I'll come back [NOISE] to this example. This is actually the, the running example throughout this lecture [NOISE], okay? So [NOISE]. [inaudible] so how are you able to do this calculation? We're going to talk about that next. That is what the lecture is about. Okay. So let's, let's actually, uh- I do wanna finish it in an hour, that's why maybe I'm rushing things a little bit. But we are going to talk about this problem like throughout the class. So, so don't worry about it. If it's not clear at the end of it, we can clarify things, okay? All right. So I do want to formalize this problem. The way I want to [NOISE] formalize this problem is, er, using an MDP. So I wanna- I wanna formalize this as a ma- as a Markov decision process. Maybe I can [NOISE] just use this [NOISE]. So in Markov decision processes, similar to search problems, you're going to have states. So in this particular game, I'm going to have two states. I'm either in the game [NOISE] or I'm out of the game. So I'm in an end state where everything [NOISE] we ended you're out of the game, you're done, okay? So, so those are my states. Then, um, when I'm in these states, I'm in each of these states, I can take an action. 
And if I'm in the in state, I can take two actions, right? I can either decide to stay [NOISE], right? Or I can quit [NOISE], okay? And if I, if I decide to stay from the in state, that takes me to something that I'm [NOISE] going to call a chance node. So a chance node is a node that represents a state and action. So the blue things are my states, but I'm creating these chance nodes as a way of kind of going through this example, to, to see where things are going. So, so these blue states [NOISE] are going to be my states. I'm in S. These chance nodes are over state and action. So basically, this node tells me that I started [NOISE] with in, and I decided to stay, okay? And the chance node here basically tells me that I started with in, and I decided to quit [NOISE], okay? Yes. Why do we still call it a chance node even though it's deterministic? So I deterministically go through it, but then from the chance node, that's where I'm introducing the probabilities. So from the chance node I can probabilistically end up in these different states. In the case of quit, it's also deterministic. In the case of quit, in this case, it's deterministic, yeah. So in the case of quit, we say [NOISE] with probability 1 [NOISE], I'm going to end up in this end state. So I am going to draw that with the edge that comes from my chance node, and I'm gonna say, with probability 1 [NOISE], I'm going to get $10 [NOISE] and just be done, okay? But if you are in this state, this is actually the state where interesting things can happen: with probability two-thirds, I'm going to go back to [NOISE] in, and get $4, or with probability one-third, I'm going to end up in end, and I still get $4 [NOISE], okay? So, so that is my Markov decision process. So maybe we can keep track of a list of things we are defining in this lecture.
So we just defined states [NOISE], and then we said well, we're gonna have these chance nodes [NOISE] because from these chance nodes probabliistically, we're going to come out of them depending on what happens in nature, right? Like I end up- this is the decision I've made, now nature kind of decides which one you're going to end up at, and, and based on that we, we move forward, okay? All right. So, so more formally, we had a bunch of things when we define an MDP. Similar to search problems, we- like we, we now need to define the same set of things. So, so we have a set of states. In this case my states are in and end, okay? We have a start state. I'm starting with in. So that's my start state. I have actions as a function of states. So when I ask what are the actions of the state, my actions are going to be stay or quit. What are actions of end? I don't have anything, great, end state doesn't have any actions that come out of it. And then we have these transition probabilities. So transition probabilities more formally, take a state, an action, and, and a new state. So S, A, S prime, and tell me what is the transition probability of that, it's one-third in this case. And then I have a reward which tells me how much was that rewarding, that was $4, okay? So, so I'm defining- so when I'm defining my MDP, kind of the new things I'm defining is this transition probability, which tells me if you're in state S, and take action A, and you end up in S prime. What is the probability of that? I'm in in, I decide to stay, and then end up in end. What's the probability of that? That's one-third. Maybe I'm in in, I decide to quit, I end up in end. What's the probability of that? It's equal to 1, okay? And then over the same state action state primes, like next states we are going to end up at, we're going to define a reward [NOISE] which tells me how much money did I get? Or like how, how good was that. So it was $4 in this case. Or, or if I decide to quit, I got $10, okay? 
Um, and if you remember in the case of search problems, we're talking about cost. I'm just flipping the sign here, we wanted to minimize cost. Here we want to maximize the reward just a more optimistic view of the world I guess. Um, so, so that is what the rewards are going to be defined, okay? We also have this as end function, which again similar to search problems just checks if you're in an end state or not. And in addition to that, we have something that's called a discount factor. It's, it's this value Gamma [NOISE] which is between 0 and 1. And I'll talk [NOISE] about this later don't worry about [NOISE] it right now. But it's a thing to define for our search pro- er, for our MDPs, okay? All right. So how do I compare this with search? Again, these were the things that we had in a search problem. We had the successor function that would deterministically take me to S prime and we had this cost function that would tell me what was the cost of being in state S and taking action A. So, so the major things that are changed is that instead of the successor function, I have transition probabilities these T's, that, that basically tell me what's the probability of starting in S, taking action A, and ending up in S prime. And then the cost just became reward, okay? So, so those are kind of the major differences between search and MDP. Because things are- things are not deterministic here [NOISE], okay? All right, so, so that was the formalism. Now, now I can define any, any MDP model- any Markov Decision Process. And then one thing- just one thing to point out is this transition probability is this t, basically specifies the probability of ending up in state S prime if you take action A in state S. So, so these are probabilities, right? So, so for example again, like we have done this example but let's just do it on the slides again, if I'm in state in, I take action quit, I end up in end, what's the probability of that? 1. 
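The stay/quit game just formalized can be written down as a small MDP in code. This is a sketch in the style of the class's MDP interface, with succProbReward returning (newState, prob, reward) triples; the exact method names in the course code may differ slightly:

```python
class DiceGameMDP:
    """The stay/quit dice game: stay pays $4 and ends with probability 1/3;
    quit pays $10 and ends for sure."""

    def startState(self):
        return 'in'

    def actions(self, state):
        # Only the 'in' state has actions; 'end' is terminal.
        return ['stay', 'quit'] if state == 'in' else []

    def succProbReward(self, state, action):
        # List of (newState, T(s, a, s'), Reward(s, a, s')) triples.
        if state == 'in' and action == 'quit':
            return [('end', 1.0, 10)]
        if state == 'in' and action == 'stay':
            return [('in', 2.0 / 3.0, 4), ('end', 1.0 / 3.0, 4)]
        return []

    def isEnd(self, state):
        return state == 'end'

    def discount(self):
        return 1.0  # gamma; discussed later in the lecture

mdp = DiceGameMDP()
# The property from the lecture: for each (state, action), the transition
# probabilities over all successors s' sum to 1 and are non-negative.
for a in mdp.actions('in'):
    triples = mdp.succProbReward('in', a)
    assert abs(sum(p for _, p, _ in triples) - 1.0) < 1e-9
    assert all(p >= 0 for _, p, _ in triples)
```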
And then if I'm in state in, I take action stay, I end up in state in again, what's the probability of that? I end up in in again: two-thirds. And then if I'm in state in, I take action stay, I end up in end, what is the probability of that? One-third, okay? And then these are probabilities. So what that means is they need to kind of add up to 1, but one thing to notice is, well, just what is going to add up to 1? Like, all of the things in the column are not going to add up to 1. The thing that's going to add up to 1 is, if you consider all the possible different S primes that you're going to end up at, those probabilities are going to add up to 1. So, so if you look at this table again, if you look at being in state in and taking action stay, then the probabilities that we have for the different S primes are two-thirds and one-third, and those two are the things that are going to add up to 1. And in the first case, if you're in state in and you decide to quit, then whatever S primes you're gonna end up at, in this case it's just the end state, those probabilities are going to add up to 1. So, so more formally what that means is, if I'm summing over S primes, these new states that I'm going to end up at, the transition probabilities need to add up to 1. Okay, because they're basically probabilities that tell me what are the things that can happen if I take an action, okay? And then these transition probabilities are going to be non-negative because they are probabilities. So that's also another property, okay? All right. So, so that was one example. Let's actually formalize another problem. Let's actually try to code this up. So what was the search problem? This is the tram problem. So remember the tram problem. I have blocks 1 through n. What I wanna do is, I have two possible actions: I can either walk from state S to a state S plus 1.
Or I can take the magic tram that takes me from state S to state 2S. If I walk, that costs one minute, okay? Means reward of that is minus 1. If I, if I take the tram that costs two minutes, that means that the reward of that is minus 2, okay? And then the question was how- like how do we want to travel from, from 1 to n in the least amount of time? So, so nothing here is, is probabilistic yet, right? So I'm going to add an extra thing here which says the tram is going to fail with probability 0.5. So I'm going to decide maybe you take, take a tram at some point and that tram can, can fail with probability 0.5. If it fails, I end up in my state, like I don't go anywhere. And, and actually like in this case, you're assuming you're still losing two minutes. So if I decide to take a tram, I'm gonna lose two minutes, maybe you'll fail, maybe we will not, okay? All right. So let's try to formalize this. So we're gonna take our tram problem from two lectures ago. So this is from search one. We're gonna just copy that. So all right. So this was what we had from last time. You had this transportation problem and we had all of these algorithms to solve the search problem. You don't really need them because we have a new problem so let's just get rid of them. And now I just want to formalize an MDP. So, so it's a transportation MDP, okay? The initialization looks okay. Start state looks okay. I'm starting from 1, this end looks okay. So the thing I'm going to change is the- first off I need to add this actions function. Okay? So what would actions do? It's going to return a list of actions that are our potential actions in a given state. So I just copy pasted stuff from down there to just edit. So it's going to return a list of valid actions. Okay? So what are the valid actions I can take? I can either walk or I can tram. So I'm going to remove all these extra things that I had from before and just keep it to be I'm either walking or I'm taking the tram, okay? 
As long as it's a valid state. So, so that looks right for actions. The other thing we had was a successor and cost function. So, so now we want to just change that and return these transition probabilities and end reward. So, so it's basically the successor probabilities and reward. Okay? So I'm putting those two together, similar to before we had successor and cost. Now I'm returning probabilities and reward. Okay? So what this function is going to return is it's going to return this new status S prime, I'm going to end up at and the probability value for that and reward of that. Okay? So, so given that I'm starting in state S and I'm taking action A, then what are the potential S primes that I can end up at and what are the probabilities of that? Then what, what is T of SAS prime and what is the reward of that? What is the reward of SAS prime? I want to have a function that just returns these so I can call it later. Okay? All right. So I need to basically check like for, for each one of these actions, I can for, for action walk. What happens for action walk? What's the new state I'm going to end up at? Well, I'm going to end up at S plus 1. It's a deterministic action. So I'm going to end up there with probability 1 and what's the reward of that? Minus 1 because it's one minute cost, so it's minus 1 reward. Then for action tram, we kind of do the same thing but we have two options here. I can- I can end up in 2S. Tram doesn't fail, I end up in 2S. The probability 0.5 that cause- that reward of that is minus 2 or the other option is I'm going to end up in state S because I didn't go anywhere because we had probability of 0.5, the tram did fail. And that, that- the reward of that is minus 2. And that's pretty much it. That, that is my, my MDP. So I can just define this for a city with let's say 10 blocks. Oh, and we need to have the discount factor but we'll talk about that later. Let's say it's just 1 for now, okay? 
And they'll use right- I'm writing these other states function for later but, okay. Does that look right? We just formalized this MDP. So let's check if it does the right thing. So maybe we want to know what are the actions from state three? What are the actions from state three? Oh, we need to remove this utility function from before because we don't have it in the folder. So remove that. What, what are the actions from state three? I have 10 blocks. If I'm in state three, I can either walk or tram. Either one of them is fine, right? So, so that did the right thing. Maybe we want to just check if this successor probability and the reward function does the right thing. So maybe, maybe we can try that out for state three and walk. So, so for state three and action walk, then what do we get? Well we end up in four and that is, that is with probability 1 with the reward of minus 1. Okay? Let's try it out for tram. Again, remember tram can fail, so I'm gonna get two things here. So these are the things I'm going to get for tram, I'm going to either end up in six with probability 0.5 with the reward of minus 2 or I will not go anywhere. I'm still at three with probability 0.5 and that is with a reward of minus 2. Okay? All right. So that was just the tram problem and we formalized it as an MDP. Again, the reason it's an MDP is, is that the tram can fail with probability 0.5. So we added that in, then we defined our transition function and our problem- and our reward function. Okay? All right, everyone happy with how we are defining MDPs? Yeah? Okay. Pretty similar to search problems except for now we have these probabilities, okay? All right. So, so now I have defined an MDP, that's great. The next question that in general we would like to answer is to give a solution, right? So there's a question here. So what is the Markov part of an MDP? 
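The tram walkthrough above can be sketched as follows. This is a reconstruction, not a verbatim copy of the in-class file, but it reproduces the checks run in lecture:

```python
class TransportationMDP:
    """Tram problem as an MDP: walk goes from s to s+1 with reward -1;
    tram tries to go from s to 2s with reward -2, but fails with
    probability 0.5 and leaves you where you were (still costing -2)."""

    def __init__(self, N):
        self.N = N  # number of blocks

    def startState(self):
        return 1

    def isEnd(self, state):
        return state == self.N

    def actions(self, state):
        # Valid actions: only those that keep us within the N blocks.
        results = []
        if state + 1 <= self.N:
            results.append('walk')
        if 2 * state <= self.N:
            results.append('tram')
        return results

    def succProbReward(self, state, action):
        # List of (newState, prob, reward) triples.
        result = []
        if action == 'walk':
            result.append((state + 1, 1.0, -1))   # deterministic
        elif action == 'tram':
            result.append((2 * state, 0.5, -2))   # tram works
            result.append((state, 0.5, -2))       # tram fails, stay put
        return result

    def discount(self):
        return 1.0

mdp = TransportationMDP(N=10)
# In-class checks: actions(3) gives ['walk', 'tram'];
# succProbReward(3, 'walk') gives [(4, 1.0, -1)];
# succProbReward(3, 'tram') gives [(6, 0.5, -2), (3, 0.5, -2)].
```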
So the Markov part means that you just depe- so, so when you just depend on the state and this current state, like the way we define our state remember, our state is sufficient for us to make optimal decisions for the future. So the Markov part means that you're Markovian, it only depends on the current state and actions to end up in the probabilistically end up in the next, next state. So yeah. So the interesting question we would like to do is well, we want to find a solution, right? I want to figure out what is the optimal path to actually solve this problem. And again if you remember search problems, the solution to search problems was just a sequence of actions, said that's all I had, like a sequence of actions, a path that was a solution. And the reason that was a good solution was like everything was deterministic, so I could just give you the path and then that was what you would follow. But in the case of MDPs, the way we are defining a solution is by using this notion of a policy. So a policy- let me actually write that here. So we have defined an MDP but now I want to say well, what is a solution of an MDP? A solution of an Markov decision pro- process is a policy pi of S. So and this policy basically goes from states, so it takes any state and it tells me what is the- what is the potential action that I would get for that state. Okay? So, so if a policy is a function, it's a mapping from each state S in the set of all possible states, to, to an action and the set of all possible actions. Okay? So in the case of the volcano crossing, like I can have something like this. I can be in state 1, 1 and then a policy of that state could be going south, okay? Or I can be in state 2, 1 and a policy for that state is east. If, if this was a search problem, I would just give a path. I would just say go south and then to- go east and go north, right? So, so that would be my solution. 
But- but again, like if I decide that well the policy at 1, 1 is to go south, there is no reason for you to end up at south, right? Because this thing, this thing is probabilistic. So, so the best thing I can do is for every state just tell you what is the best thing you can do for that particular state and, and that's why we are defining a policy as opposed to ge- giving like a full path, okay? All right, so policy is the thing you're looking for. And ideally, I would like to find the best policy that would just give me the right solution. But in order to get there, I want to spend a little bit of time talking about how good a policy would be. So and that's kind of this idea of evaluating a policy. So in this middle section, I don't want to try to find a policy, I, I just assume you give me a policy and I can evaluate it and tell you how good that is. So, so that's the plan for the middle section, okay? All right. Everyone happy with- so, so far all I've done is I've defined an MDP, which is very similar to a search problem, it's just probabilistic. Okay? So so how would we evaluate a policy? Okay? So if you give me a policy which basically tells me at every state S, take some action, then that policy is going to generate a random path, right? I can get multiple random paths because nature behaves differently and the world is uncertain. So I might get a bunch of random paths and then those are all random variables, uh, random paths, sorry. And, and, and then for each one of those random paths, I can, I can define a utility. So, so what is the utility? Utility is just going to be the sum of rewards that I'm going to get over that path. I'm calling it as, as the discounted sum of the rewards. Remember that discount, we'll talk about that but, but you can- you can discount the future. But, but for now just assume it's just a sum of the rewards on that path, okay? So a util- the utility that we are going to get is also going to be a random variable, right? 
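A policy is literally just a mapping from states to actions; a minimal sketch using the volcano-grid states mentioned above (these particular arrows are illustrative, not a computed optimal policy):

```python
# pi(s): for every state, the action to take there. For a grid world,
# states can be (row, col) pairs and actions compass directions.
policy = {
    (1, 1): 'south',
    (2, 1): 'east',
}

def pi(state):
    """Look up the action the policy prescribes in this state."""
    return policy[state]
```

The point is that the solution names an action for every state, not a path: even if pi((1, 1)) is 'south', slipping may land you somewhere else, and the policy still tells you what to do from there.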
Because if if you think about a policy, a policy is going to generate a bunch of random paths and and utility is just going to be the sum of rewards [NOISE] of each one of those. So it's a random variable. So, so if you remember this example, right? So I can, I can basically have a path that tells me start in in, and then stay and then that ends. Right? So so this is one random path, and for this particular random path, well, what is the utility I'm gonna get? I'm just gonna get $4. That's one possible thing that can happen. If my, if my, um, policy is to let's say stay, like there is no reason for for the game to end right here. Right? Like I can have a lot of different types of random path. I can have a situation where I'm staying three times and then after that ending the game and utility of that is 12. We can have this situation where we have stay, stay, and end. That's the situation it's all, like you had, you had an utility of eight and so on. So, so you're getting all these utilities for all these random paths. So, so these utilities are also going to be just random variables. Okay? So I can't really play around with the utility. That's not telling me anything. Although it's telling me something but it's a random variable. I can't optimize that. So instead we need to define something that you can actually play around with it and, and that is this idea of a value which is just an expected utility. So, so the value of a policy, is the expected utility of that policy. And then that's not a random variable anymore, that's actually like a number and I can I can compute that number. I can compute that number for every state and and then just play around with value. Okay, next question? What is the value of the policy, does, is that policy needs defined for all possible states or a particular state? For all possible. 
So the question is, when you say value of a policy, is the policy basically telling me the strategy for all possible states? Well, you're defining the policy as a function of state, and the value is the same kind of thing, a function of state. I might ask, what is the value of being in "in"? The value of being in "in" and following the policy stay is the expected utility of following stay from that particular state, which is that value of 12 there. I could ask the same for any other state: I can be in any other state and ask, what's the value of that? And when we do value iteration, you actually need to compute this value for all states to have an idea of how to get from one state to another, but [OVERLAPPING]. [inaudible] will be in state "in" and the policy, given your state "in", is taking the action stay. Yes. Okay. Yeah. And that is what 12 is. And we've seen empirically that it's 12, but we haven't shown how to get 12 yet. Okay? All right. So let me write these in my list of things. We talked about the policy. What else did we talk about? We talked about utility. So what is utility? Utility, we said, is a sum of rewards. If I get reward 1, then reward 2, and so on, it's a discounted sum: I'm going to use this gamma, which is that discount I'll talk about in a little bit, so the utility is reward 1, plus gamma times reward 2, plus gamma squared times reward 3, and so on. So you give me a random path and I just sum up the rewards along it. If gamma is 1, I'm literally summing up the rewards; if gamma is not 1, I'm looking at this discounted sum. Okay, so that is utility, and value is just the expected utility, okay?
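Since the lecture's own examples are coded in Python, here is a minimal sketch of that discounted sum for one concrete path (the `utility` helper and the list-of-rewards representation are my own, not from the lecture code):

```python
def utility(rewards, gamma=1.0):
    """Discounted sum of rewards along one (random) path:
    r1 + gamma*r2 + gamma^2*r3 + ..."""
    return sum(gamma ** t * r for t, r in enumerate(rewards))

# The dice-game paths from the example, with gamma = 1:
print(utility([4]))        # stay once, then the game ends -> 4.0
print(utility([4, 4, 4]))  # stay three times, then end    -> 12.0
print(utility([4, 4]))     # stay, stay, end               -> 8.0
```

With a discount below 1 the same three-step path is worth less, e.g. `utility([4, 4, 4], gamma=0.5)` gives 4 + 2 + 1 = 7.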
So you give me a bunch of random paths, I compute their utilities, I sum them up and average them, and that gives me the value. Yes? If the discount factor is 1, would that be bounded? That's a very good question, and we'll get back to it. In general, if the graph is acyclic it's fine, but if you have a cyclic graph you want your gamma to be less than 1. We'll talk about that when we get to the convergence of these algorithms. All right, how am I doing on time? Okay. So let's go to this particular volcano crossing example. In this case I can run this game, and every time I run it I'm going to get a different utility, because I end up on some random path; some of them end in the volcano, and that's pretty bad, right? So I get different utilities [LAUGHTER], but the value, which is the expected utility, isn't really changing. It's just around 3.7, which is the average of these utilities. So I can keep running this and getting different utilities, but the value is one number I can talk about, and that's the value of this particular state: it tells me the best policy I can take and the best amount of utility I can get, in expectation, from that state. Okay? All right, so we've been talking about this utility; I've actually written it on the board already. Utility is going to be a discounted sum of rewards, and we've been talking about this discount factor. The idea of the discount factor is that I might care about the future differently from how much I care about now. For example, if you give me $4 today and $4 tomorrow, and that $4 tomorrow has the same value to me as the $4 today, then that's the same idea as having a discount, a gamma, of 1.
So you're saving for the future; the value of things in the future is the same. If you give me $4 now or $4 ten years from now, it's still $4 to me, and I can just add things up. But it could also be the case that you're in a particular MDP where you don't care about the future as much. Maybe you give me $4 ten years from now and I don't place any value on that. If that's the case, and you just want to live in the moment and don't care about values you're going to get in the future, that's the other extreme, where this discount gamma is equal to 0. In that situation, $4 in the future has no value to me; it's just a 0. I only care about living in the moment, the amount I'm getting right now. And in reality you're somewhere in between, right? We're not living purely in the moment, and we're also not treating everything in the future exactly the same as now. A balanced setting has some discount factor that's not 0 and not 1; it discounts values in the future, because the future maybe doesn't have the same value as now, but we still value things in the future, so $4 later is still worth something. That's where we pick a gamma between 0 and 1. It's a design choice: depending on what problem you're in, you might want to choose a different gamma. Question, yeah. Is discounting utility an assessment of risk, or is there a different way we can assess how much risk you want to take? You could think of it that way, but it's not really an assessment of risk. It depends on the problem, right?
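The two extremes, and the in-between case, can be seen numerically on the same path of three $4 rewards (a small illustration of my own, not from the lecture):

```python
# One path of three $4 rewards, valued under different discount factors.
path = [4, 4, 4]
for gamma in (1.0, 0.5, 0.0):
    u = sum(gamma ** t * r for t, r in enumerate(path))
    print(f"gamma={gamma}: utility={u}")
# gamma = 1 counts every future $4 fully (12.0), gamma = 0 keeps only the
# immediate reward (4.0), and gamma = 0.5 lands in between (7.0).
```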
It depends on whether, in a particular problem, you want to get values in the future, or have some sort of long-term goal you care about. If you're solving a game versus, say, a robot manipulation problem, you might use a very different discount factor. For a lot of the examples we use in this class, we just choose a gamma that's close to 1; for a lot of the problems we end up dealing with, gamma is something like 0.9. That's the usual choice for typical problems. You might have a very different problem where we don't care about the future, and then we just drop it. Yes? [inaudible] Is gamma a hyperparameter that needs to be tuned, and is gamma 0 the same as a greedy algorithm? Okay, so that's a good question. Is gamma a hyperparameter you need to tune? I would say gamma is a design choice. It's not a hyperparameter in the sense that if I pick the right gamma it will do the right thing; you want to pick a gamma that works well with your problem statement. And gamma of 0 is kind of greedy: you're picking the best thing right now and never caring about the future. Question right there. Does gamma violate the Markov property, because it's a kind of memory of what you saved? It doesn't violate the Markov property. It's just a discount on the reward; it's not about how this state affects the next state. It affects how much reward you're going to get, how much you value reward in the future. It's still a Markov decision process. [inaudible] and make your possible actions [inaudible]?
What you're getting, yes, it's affecting the reward, but it's Markov because if I'm in state s and I take action a, where I end up, s prime, doesn't depend on gamma. Okay. All right. So in this section we've been talking about this idea of someone coming in and giving me a policy. The policy is pi, and what I want to do is figure out the value of that policy, where, again, value is just the expected utility. So V pi of s is the expected utility received by following this policy pi from state s. Okay? I'm not doing anything fancy; I'm not even trying to figure out what pi is. All I want to do is evaluate: if you tell me this is pi, how good is it? What's its value? That's what a value function is. So the value of a policy is V pi of s: the expected utility of starting in some state. Let me put this here and then I'm going to move these up. So V pi is the expected utility of me starting in some state s, and state s has value V pi of s. If someone tells me I'm following policy pi, then I already know that from state s, the action I'm going to take is pi of s. That's very clear: I take pi of s, and if I take pi of s, I end up in some chance node. That chance node is a state-action node: it's s together with the action, which I've decided is pi of s. And I'll define this new function, the Q function, Q pi of s, a, which is just the expected utility from the chance node. So we've talked about values as expected utilities from actual states; I'm going to talk about Q values as expected utilities from chance nodes, after you've committed to taking action a, while following policy pi from then on. What is the expected utility from that point on, okay?
Well, what is the expected utility from this point on? We're in a chance node, so many things can happen, because nature is going to play and roll its die, and anything can happen. With transition probability T of s, a, s prime, I'm going to end up in a new state, which I'll call s prime, and the value of that state, again the expected utility from that state, is V pi of s prime. Okay. All right. So what are these actually equal to? I've defined the value as an expected utility, and the Q value as the expected utility from a chance node; what are they actually equal to? I'm going to write a recurrence that we're going to use for the rest of the class, so pay attention for five seconds. There's a question there. I understand semantically how V pi and Q pi are different; in actual numbers, as expected values, how are they different? So both of them are expected utilities. One is just a function of the state; for the other one you've committed to one action. And the reason I'm defining both of them is that writing my recurrence becomes a little easier, because I have these state-action nodes and I can talk about how I branch from them, okay? All right. So I'm going to write a recurrence. It's not hard, but it's the basis of the next several lectures, so pay attention. V pi of s, what is that equal to? Well, it's equal to 0 if I'm in an end state: if IsEnd of s is true, then there's no expected utility, so it's 0. That's the easy case. Otherwise, I take the action pi of s, as the policy told me to, so the value is just equal to Q, right? In this case, if someone gives me policy pi, V pi of s is just equal to Q pi of s, pi of s. These two are just equal to each other.
So the next question one might ask is, actually let me write this a little closer so I have some space. Yeah, so this is equal to Q pi of s, a, where a is pi of s. So what is that equal to? What is Q pi of s, a equal to? If I'm right here, there are a bunch of different things that can happen, and I can end up in these different s primes. If I'm looking for the expected utility, then I'm looking at the probability of ending up in each state times the utility from that state, summed over the states I might land in. So Q pi of s, a is the sum, over all possible s primes I can end up at, of the transition probability T of s, a, s prime, times the immediate reward R of s, a, s prime, plus the value from there. But I care about the discounted value, so I add gamma times V pi of s prime, because that's this next state. Does everyone see this? Okay. This is the recurrence we use in policy evaluation. Again, remember someone came and gave me policy pi, so I'm writing this pi here; I just want to know how good policy pi is, and I can do that by computing V pi. What is V pi equal to? Someone told me you're following policy pi, so it's equal to Q pi. What is Q pi equal to? It's the expectation over all the places I can end up: the sum over s primes of the transition probability of ending up in s prime, times the total reward you're getting, which is the immediate reward plus the discounted value of the future, okay. Yes? Is that Q value following policy pi starting from s prime? Yes, yeah, starting from s prime. All right. So far so good.
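The two lines of that recurrence can be written directly as code. This is a sketch of my own: the `succ(s, a)` interface yielding `(next_state, probability, reward)` triples is an assumption modeled loosely on the tram code shown later, not the lecture's actual implementation:

```python
# Q_pi(s, a): expected utility from the chance node (s, a),
#   sum_{s'} T(s,a,s') * [ R(s,a,s') + gamma * V_pi(s') ]
def Q_pi(V, s, a, succ, gamma=1.0):
    return sum(prob * (reward + gamma * V[s_next])
               for s_next, prob, reward in succ(s, a))

# V_pi(s): 0 at an end state, otherwise Q_pi(s, pi(s)).
def V_pi(V, s, pi, succ, is_end, gamma=1.0):
    return 0.0 if is_end(s) else Q_pi(V, s, pi[s], succ, gamma)

# Dice game: from "in", staying pays $4 and ends with probability 1/3.
succ = lambda s, a: [("in", 2/3, 4), ("end", 1/3, 4)]
V = {"in": 12.0, "end": 0.0}  # the fixed point the lecture derives
print(V_pi(V, "in", {"in": "stay"}, succ, lambda s: s == "end"))  # ~12
```

Plugging the fixed-point values back in reproduces 12, which is exactly what makes the recurrence consistent.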
So that is how I can evaluate this policy. I have these two recurrences, and I can just substitute one into the other; maybe I'll use a different color up here. I'm just replacing this term right here; I don't know if it's worth writing it out. Imagine we're not in an end state. If you're not in an end state, then V pi of s, what is that equal to? It's the sum over s primes of the transition probability T of s, a, s prime, times the immediate reward I'm going to get, plus gamma times V pi of s prime. Okay. So this is the recurrence I have; I literally just combined the two and wrote it in green, for the case where you're not in an end state. I have V pi here, and I have V pi on this side too, and that's nice. I can compute V pi, maybe iteratively, or maybe with a closed-form solution for some problems, but that's basically what I'm going to do: I have V pi as a function that depends on V pi of s prime, and I can solve for this V pi. It lets me evaluate policy pi. I haven't figured out a new policy; all I've done is evaluate the value of pi, okay. All right. Okay, so let's go back to this example. Say someone comes in and tells me the policy to follow is to stay. So my policy is to stay, and I want to evaluate that; I want to do policy evaluation. When you do policy evaluation, you have to compute that V pi for all states. Let's start with V pi of end: that's 0, because V pi at an end state is just 0. Now I want to know V pi of "in" under the policy stay. What is that equal to? It's just Q pi of "in" and stay, right?
So I'm going to expand that: it's one third, times the immediate reward, which is 4, plus the value of the next state I end up in, which is end in this case, plus two thirds, times the immediate reward, which is $4, plus the value of the state I end up in, which is "in" again. So that's just the sum we have there, right? V pi of end is 0, so let me put that 0 there. I only have one interesting state here, so I just have this one equation involving V pi of "in". Having one equation, I can find the closed-form solution for V pi of "in": I move things around a little bit, and I find that V pi of "in" is equal to 12. So that's how you get the 12 I've been talking about. You just found out that if the policy you follow is stay, then the value of that policy from state "in" is 12. Do you always choose the same action? So, yeah, you're always choosing to stay. The policy is a function of state, and I only have this one interesting state here, which is "in". So when I define my policy, I choose one action for that state: my policy says, in "in" you either stay or you quit. Okay. All right. You can do the same thing using an iterative algorithm too. In the previous example it was simple and I just solved for the closed-form solution, but in reality you might have many states and it might be a little more complicated. So we can use an iterative algorithm that finds these V pis. The way we do that is: we start with the values of all states equal to zero. This zero I've put here is the first iteration, so I'm going to count my iterations here.
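The algebra behind that 12 is a one-liner worth checking numerically (plain Python, nothing from the lecture's code assumed): V = (1/3)(4 + 0) + (2/3)(4 + V) rearranges to (1 - 2/3)V = 4, so V = 12.

```python
# V_pi(in) = (1/3)*(4 + V_pi(end)) + (2/3)*(4 + V_pi(in)),  V_pi(end) = 0.
# Collect the V terms on one side: (1 - 2/3) * V = 4, so V = 12.
V = 4 / (1 - 2/3)
print(V)  # ~12
# Plugging V back into the recurrence reproduces V, so it is the fixed point:
assert abs((1/3) * (4 + 0) + (2/3) * (4 + V) - V) < 1e-9
```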
So I initialize the values of all states to zero. Then I iterate for some number of steps, however many I'd like. In each step, for every state, and again, remember the value needs to be computed for every state, I update my value using the same equation I have on the board, where the right-hand side uses the values from the previous iteration. So this is just an iterative algorithm that computes new values from previous values. I start with everything at zero, keep updating the values of all states, and keep going, okay? So basically it's that equation, but think of it as an iterative update: every round you just update your values. Pictorially, imagine you have five states here, and you initialize all of them to 0. In the first round you compute some values and update them, then you keep running, and eventually you can see that the last two columns are close to each other and you've converged to the true values. So again, someone comes and gives you the policy, you start with all values equal to 0, and you just keep updating based on your previous values. Okay. So how long should we run this? Well, we have a heuristic to figure that out. One thing you can do is keep track of the difference between your values at the previous iteration and at this iteration. If the difference is below some threshold, you can call it done and say you've found the right values.
In this case we're looking at the difference between the value at iteration t and the value at iteration t minus 1, and taking the max of that over all possible states, because I want the values to be close for all states. Okay. Yes? [inaudible] So I'll talk about convergence when we talk about the discount factor and acyclicity. How long you should run this is also a difficult problem, and it depends on the properties of your MDP. If you have an ergodic MDP, this should work; but in general it's a hard question to answer for arbitrary Markov decision processes. Okay. Another thing to notice here is that I'm not storing the whole table. The only thing I'm storing is the last two columns: V pi at iteration t and V pi at iteration t minus 1. Those are the only things I need, because I only need my previous values to compute my new values and to check whether I've converged. In terms of complexity, this takes order T times S times S prime. Why is that? Because I'm iterating over T time steps, iterating over all my states, and summing over all s primes. And one thing to notice is that it doesn't depend on the number of actions. The reason is that you've given me the policy; you're telling me to follow this policy, so I don't need to worry about how many actions there are. Okay. All right. Here is the same example we've seen: at iteration t equal to 1, "in" gets 4 and end gets 0, and at iteration 2 "in" gets a slightly better value.
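The loop just described, with the max-difference stopping rule and only two value columns kept in memory, can be sketched like this on the dice game; the dict-based MDP representation and function names are my own, not the lecture's:

```python
def policy_evaluation(states, is_end, succ, pi, gamma=1.0, eps=1e-10, max_iters=1000):
    """Iterative policy evaluation: repeatedly apply
    V(s) <- sum_{s'} T(s,pi(s),s') * [R(s,pi(s),s') + gamma * V(s')],
    keeping only the current and previous value columns."""
    V = {s: 0.0 for s in states}
    for _ in range(max_iters):
        newV = {}
        for s in states:
            if is_end(s):
                newV[s] = 0.0
            else:
                newV[s] = sum(p * (r + gamma * V[sp])
                              for sp, p, r in succ(s, pi(s)))
        # stop when max_s |V_t(s) - V_{t-1}(s)| drops below the threshold
        if max(abs(newV[s] - V[s]) for s in states) < eps:
            return newV
        V = newV
    return V

# Dice game under the policy "always stay":
succ = lambda s, a: [("in", 2/3, 4), ("end", 1/3, 4)]
V = policy_evaluation(["in", "end"], lambda s: s == "end", succ, lambda s: "stay")
print(V["in"])  # converges to ~12, matching the closed-form answer
```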
And finally, at iteration, say, 100, we get the value 12. Remember, for this particular example we were able to find the closed-form solution for V of the policy stay from state "in", but you can also run the iterative algorithm and get the same value of 12. Okay. Yes? Is the number of actions just the size of S prime? No, which s primes you might end up in depends on your probabilities. In the worst case you can go from every state to every state, so think of the size of S prime as just the size of the state set. Okay. All right. Summary so far, where are we? We've talked about MDPs: these are graphs with states, chance nodes, transition probabilities, and rewards. We've talked about a policy as the solution to an MDP, a function that takes a state and gives us an action. We've talked about the value of a policy, which is its expected utility: a policy generates random utilities over random paths, and the value is just an expectation over those random variables. And so far we've talked about policy evaluation, an iterative algorithm to compute the value of each state: if you give me some policy, how good is it, what's the value I'm going to get at every state. Okay. All right. So far, all of that has assumed you give me the policy. Now I want to spend a little time figuring out how to find that policy. Question: is it possible to have variable actions in a problem, so that the value of the policies changes when we learn new actions?
So for example here, we only have stay or quit. Uh-huh. If you have a different problem where you can learn another action, stay, quit, or something else, is that going to change the value of the policies, because then we have a new action and need to update our policies? So in this case, so far I'm assuming the set of actions is fixed. I'm not adding new actions; the way we defined search problems, and the way we're defining MDPs, is that the set of states is fixed, the actions are fixed, I have stay and quit and those are the only actions I can take, the reward is fixed, and the transition probabilities are fixed. Under that scenario, what is the best policy I can take? And the best policy is drawn from that already-defined set of actions. Okay. Next lecture we'll talk about unknown settings, when the transition probabilities or reward functions are not known, and how we go about learning them; that will be the reinforcement learning lecture, so I might address some of those questions then. Okay. All right, so let's talk about value iteration. That was policy evaluation; now what I'd like to do is get the maximum expected utility and find the policy that achieves it, okay? To do that I'm going to define this thing called an optimal value. Instead of the value of a particular policy, I want V opt of s, which is the maximum value attained by any policy. You might have a bunch of different policies; I just want the one that maximizes the value. That is V opt of s. Okay. So let me go back to this example: in parallel to the policy evaluation example, I want to do value iteration. Okay.
So I start from state s again, and state s has V opt of s. That is what I'd like to find, where before I had V pi of s. If I'm looking for V opt of s, then multiple actions can come out of here and I don't know which one to take; but if I take any particular one, say this guy, it takes me to a chance node s, a. And then I'm looking for Q opt of s, a. From here, it's actually pretty similar to what we had right here: I'm in a chance node, anything can happen, nature plays, and with transition probability T of s, a, s prime I end up in some new state s prime, and I care about V opt of that s prime. Okay. So if I'm looking for the optimal policy, which comes from this optimal value, then I need to find V opt, and that depends on which action I take here. But say I take one of these: I end up in a chance node with Q opt of s, a, and from that point on, with whatever probabilities, I can end up in some s prime. Okay. So I want to write the recurrence for this, similar to the recurrence we wrote here; it's going to be very similar. I'll start with Q because that's easier. So what is Q opt of s, a equal to? It looks very similar to the previous case. What was Q pi? Q pi was the sum of transition probabilities times rewards, right? So what is Q opt? [inaudible] Yeah, it's basically the same equation, except I replace V pi with V opt. From the chance node I can end up anywhere based on the transition probabilities, so I sum over all the possible s primes I can end up at, I get the immediate reward R of s, a, s prime, and I discount the future, where the value of the future is V opt of s prime. Okay. So far so good; that's Q opt. How about V opt? What is that equal to?
Well, it's going to be 0 if you're in an end state, similar to before: if IsEnd of s is true, then it is 0. Otherwise, I have a bunch of options here: I can take any of these actions and get any of these Q opts. So which one should I pick? The one that maximizes, right? I should pick the action, from the set of actions of that state, that maximizes Q opt. So the only thing that has changed here is that before, someone told me what the policy was and I just took the Q of that; here I pick the maximum Q value, and that actually tells me which action to pick. So what is the optimal policy? What should it be? I'm going to call it pi opt of s. What is that equal to? It's got to be the thing that maximizes V, which is the thing that maximizes this Q, because that gives me the action. So it's the argmax over actions a in Actions(s) of Q opt of s, a. Okay? All right, so this was policy evaluation: someone gave me the policy, and with that policy I computed V and Q, wrote this recurrence, and had an iterative algorithm. This is called value iteration; this is to find a policy. How do I do that? Well, I have the optimal value I can get, which is the maximum over all possible actions of the Q values, and the Q values are similar to before. So I have this recurrence, and the optimal policy is just an argmax of Q. Yeah? It looks like there can be two argmaxes, right? Sorry? What? Like two a's? Oh, yes, you could get two a's. So the question is, what if two a's give me the same thing? I can return either of them; it depends on your implementation of max. How am I doing on time?
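Those three definitions map onto three small functions. This is a sketch of my own; `succ(s, a)` yielding `(next_state, probability, reward)` triples and `actions(s)` listing the legal actions are assumed interfaces, not the lecture's code:

```python
def Q_opt(V, s, a, succ, gamma=1.0):
    # Q_opt(s,a) = sum_{s'} T(s,a,s') * [R(s,a,s') + gamma * V_opt(s')]
    return sum(p * (r + gamma * V[sp]) for sp, p, r in succ(s, a))

def V_opt(V, s, actions, succ, is_end, gamma=1.0):
    # V_opt(s) = 0 at an end state, otherwise max_a Q_opt(s, a)
    if is_end(s):
        return 0.0
    return max(Q_opt(V, s, a, succ, gamma) for a in actions(s))

def pi_opt(V, s, actions, succ, gamma=1.0):
    # pi_opt(s) = argmax_a Q_opt(s, a); ties broken by max's implementation
    return max(actions(s), key=lambda a: Q_opt(V, s, a, succ, gamma))

# Dice game: stay pays $4 (game ends w.p. 1/3), quit pays $10 and ends.
succ = lambda s, a: ([("in", 2/3, 4), ("end", 1/3, 4)] if a == "stay"
                     else [("end", 1.0, 10)])
V = {"in": 12.0, "end": 0.0}
print(pi_opt(V, "in", lambda s: ["stay", "quit"], succ))  # stay (12 beats 10)
```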
We are five minutes over, if you want. [LAUGHTER] The good news is the slides show the same things I have on the board. Q opt is the sum we've talked about; for V opt I just add the max on top of Q opt, same story, okay? And if I want the policy, I take the argmax of Q opt and that gives me the policy. Right. I can again use an iterative algorithm that does the same thing; it's actually quite similar to the iterative algorithm for policy evaluation. I start by setting everything to 0, iterate some number of times, go over all possible states, and update my value using this new recurrence that has a max, okay? So very similar to before, I just do this update. One difference is the time complexity: it's order T times S times A times S prime, because now I have the max over all possible actions, so I'm actually iterating over all actions, whereas in policy evaluation I didn't have the factor of A, because someone gave me the policy and I didn't need to worry about it. All right. So let's look at coding this up real quick. Okay, so we have this MDP problem. We defined it; it was the tram problem, it was probabilistic, everything about it was great. Now I want an algorithms and inference section where I code up value iteration and call it on this MDP to get the optimal policy. Okay, I'll call value iteration later. All right. So we initialize; I might skip things to make this faster. We initialize all the values to 0: I defined a states function, so for each of those states the value starts at 0. Then we iterate for some number of steps, and what we want to do is compute the new values given the old values.
So it's an iterative algorithm: we have old values, and we update the new values based on them. What should the new value be? We iterate over our states. If you're in an end state, the value is 0, right? If you're not in an end state, you apply that recurrence there. So the new value of a state is the max over actions of the Q values: newV is the max of Q of state and action. Okay. So now I need to define Q. What does Q do? Q of state and action is that sum over s primes, so it returns a sum over s primes. I defined this successor-probability-and-reward function that gives me the new state, the probability, and the reward, so I iterate over that and call it up here. Given a state and action, I get the new state, probability, and reward, and I sum the transition probability times the immediate reward, plus my discount times V of the new state, using the old values. So that is my Q, that is my V, and that's pretty much it. We just need to check for convergence. To check for convergence, we do the same thing as before: we check whether V and newV are close enough to each other to call it done. I'll skip these parts; you basically check whether V minus newV is within some threshold for all states, and if it is, then V is equal to newV. Then we need to read off the policy, and the policy is just the argmax of Q. So the policy is None if we're in an end state, and otherwise it's the argmax of the Q values; I'm pretty much just writing argmax here, returning the action that maximizes Q. And then we spent a bunch of time getting the printing working, so let me actually get... yeah, okay. All right, actually right here.
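Putting those pieces together, here is a compact sketch of the value-iteration loop just described, run on the dice game rather than the tram problem (the function and interface names are my own; the lecture's actual code is online):

```python
def value_iteration(states, actions, succ, is_end, gamma=1.0, eps=1e-10, max_iters=1000):
    V = {s: 0.0 for s in states}  # initialize every value to 0

    def Q(s, a):  # Q(s, a) computed against the current value column V
        return sum(p * (r + gamma * V[sp]) for sp, p, r in succ(s, a))

    for _ in range(max_iters):
        newV = {s: 0.0 if is_end(s) else max(Q(s, a) for a in actions(s))
                for s in states}
        done = max(abs(newV[s] - V[s]) for s in states) < eps
        V = newV
        if done:
            break
    # read off the policy as the argmax of Q (None at end states)
    pi = {s: None if is_end(s) else max(actions(s), key=lambda a: Q(s, a))
          for s in states}
    return V, pi

succ = lambda s, a: ([("in", 2/3, 4), ("end", 1/3, 4)] if a == "stay"
                     else [("end", 1.0, 10)])
V, pi = value_iteration(["in", "end"], lambda s: ["stay", "quit"],
                        succ, lambda s: s == "end")
print(V["in"], pi["in"])  # converges to ~12 with policy "stay"
```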
So I'm running this function. I'm- I'm writing out- actually these are a little shifted [LAUGHTER]- grid states, values, and then Pi, which is the policy, okay? So it starts off walk, walk, walk. Remember this is the case where we have 50% probability of the tram failing, and with 50% probability of the tram failing, these are the values we are gonna get. And the policy is still to walk until state five and then take the tram from, from state five. Okay, just kind of interesting because the policy of the search problem was the same thing too. Okay, so the thing we can do is, we can actually, let me move this a little bit forward. We can actually define this fail probability which becomes just a variable. So you can play around with this. If you pick different fail probabilities you're gonna get different policies. So for example if you pick a fail probability that is large then probably like that policy is going to be just, just walk and never take the tram because the tram is failing all the time. But if you- if you decide to take fail probability is close to 0, then- then this is your optimal policy which is close to the search problem. It's basically the solution to a search problem. So play around with this, the code is online. This was just value iteration- value iteration, um, on this tram problem. Okay. So I'm gonna skip this one too. All right, so yeah. And- and this is also showing like how over multiple iterations you can kind of get to the- get to the optimal- optimal value and optimal policy using value iteration. So in one iteration it hasn't seen it yet. So it thinks that the optimal value is 1.85; it hasn't updated the values. And so with like, I don't know, three iterations, it gets better but it hasn't still updated. It still thinks it can't get to the other side. And remember this is with slip probability of 10%. But if I get to like, I think, 10 iterations, then it eventually learns the best policy is to get to 20 and the value is 13.68.
And if you go even like higher iterations after that point it's just fine-tuning. So the values are around 13 still. So you can play around with the volcano problem. Okay. So when does this converge? So if the discount factor is less than 1 or your MDP graph is acyclic then this is going to converge. So if the MDP graph is acyclic that's kind of obvious; you are just doing dynamic programming over the whole graph. So- so that's going to- that's going to converge. If you have cycles, you- you want your- your discount to be less than 1. Because if you have cycles and your discount is let's say 1 and let's say you are getting 0 rewards, then your values are never going to change; you're never going to move from your state. You're always going to be stuck in your state. And if you have non-zero rewards you're going to get this unbounded reward and you keep going because you have cycles, and it's just going to end up becoming numerically difficult. So just a good rule of thumb is pick a Gamma that's less than one. Then you kind of get this convergence property. Okay, all right, so summary so far is we have MDPs. Now we've talked about finding policies rather than paths; policy evaluation is just a way of computing like how good a policy is. And the reason I talked about policy evaluation is there's this other algorithm called policy iteration which uses policy evaluation, and we didn't discuss that in the class. But it's kind of like, not equivalent, but you could use it in a similar manner as value iteration. It has its pros and cons. So policy evaluation is used in those settings. Do not leave please. We have more stuff to cover. [LAUGHTER] And then we have value iteration, uh, which, uh, computes the optimal value, which is the maximum expected utility, okay? And next time, we're going to talk about reinforcement learning, and that's going to be awesome. So let's talk about unknown rewards. All right.
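The convergence point about cycles can be illustrated numerically. This toy snippet is not from the lecture's code: it is a hypothetical one-state MDP with a self-loop paying reward 1 per step, so the Bellman update is V ← 1 + γV. With γ < 1 the value converges to 1/(1 − γ); with γ = 1 it grows without bound, exactly the numerical difficulty described above.

```python
# One state with a self-loop that pays reward 1 per step.
def iterate(gamma, numIters):
    V = 0.0
    for _ in range(numIters):
        V = 1.0 + gamma * V   # Bellman update on the single cyclic state
    return V

print(iterate(0.9, 1000))   # converges to ~10.0 = 1 / (1 - 0.9)
print(iterate(1.0, 1000))   # 1000.0: grows without bound as iterations increase
```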
So that was MDPs [LAUGHTER] doing inference and, and kind of defining them. I'm going back to the last lecture just to kind of talk about some of the stuff that we didn't cover last time, okay? All right. So if you remember last time, we were talking about search problems. So big switch now. Search problems, where we don't have probabilities, and we talked about A-star as a way of just making things faster, and we talked about this idea of relaxations which was, uh, a way of finding good heuristics. So A-star had this heuristic. Heuristic was an estimate of future costs. We wanted to figure out how to find these heuristics, like, how do you go about finding these heuristics? And one idea was just to relax everything, that allows you to come up with an easier search problem or just an easier problem, and that helps you to find what the heuristic is, okay? So, um, [NOISE] so we talked about this idea of removing constraints, and when you remove constraints, then you can end up in nice situations. Like in some settings, you have a closed-form solution. In some other settings, you have just an easier search problem, and you can solve that, and in some other settings, you have like independent sub-problems. So when you remove constraints then, then you have this easier problem. You can solve that easier problem, and that gives you a heuristic. You're not done yet, right? You're- you have a heuristic. You take that heuristic, and then change your costs, and then just run uniform cost search on your original problem. So, so solving an easier problem is like you're not done when you have solved the easier problem. It just helps you to find a thing that helps for- with the original problem, so it's kind of like a multi-step thing. So examples of that is, if you have walls, remove all the walls, you have an easier problem. 
If you solve that easier problem, that gives you a heuristic, and in this case, like when you knock down these walls, that easier problem has a closed-form solution. You don't need to do anything fancy. You don't need to do uniform cost search. Any of that. You just compute the Manhattan distance and, and then that gives you the heuristic. With that heuristic, you go and solve the original problem. That was one example. Another example is, when you remove constraints, you have an easier search problem. So you don't have closed-form solutions, but you have an easier search problem. So you might have a really difficult search problem with a bunch of constraints that are hard to handle. Remove the constraints. So when you remove the constraints, you have a relaxed problem, which is just the original problem without the constraints. That's a search problem. You can solve that search problem using uniform cost search or dynamic programming and, and solving that allows you to find the heuristic. Again, you're not done yet, right? You take the heuristic, and then you go to the original problem, change the costs, and run your uniform cost search there. And just one quick kind of example here was, uh, when you're computing these relaxed problems, the thing you want to find is the future costs of this, this relaxed problem, and, and to do that, you have this easier search problem. You still need to run uniform cost search or dynamic programming. In this case, if you decide to run uniform cost search, remember, uniform cost search computes past costs. In this case, I really wanna compute future costs. So you need to do a bunch of engineering to get that working. In this particular case, the relaxed problem, you need to reverse it. Because when you reverse it, past costs of the reversed relaxed problem become future costs of the relaxed problem, if that makes sense. So, so the way I'm reversing this is I'm basically saying start state is n.
End state is 1, and my walk action takes me to s minus 1, instead of s plus 1, and my tram action takes me to s over 2 instead of s times 2, and the whole reason I'm doing that is- is that the past cost of this new problem is the future cost of the non-reversed version. Okay. Because I, I need to use uniform cost search here, okay? So I run my uniform cost search, that gives me a heuristic, and that heuristic gives me this future cost of the relaxed problem, and everything will be great. Another example is, I can have independent subproblems giving my heuristic. So in this case, like we have these tiles, they technically cannot overlap. Instead, what we are allowing is, you're allowing them to overlap. So if we allow them to overlap, I have eight independent subproblems that I can solve. These subproblems give me heuristics, and I can just go with them, okay? So, so these were just a bunch of examples, and kind of the key idea was reducing edge costs: when we're coming up with these relaxed problems, we're reducing edge costs from infinity to some finite cost. Okay. So before, I couldn't cross a wall; the cost of that was infinity. But if I get rid of the wall, that makes it a finite cost. So this type of method, um, this is a general framework. So the point I wanna make is, generally, you can talk about the relaxation of a search problem. So if you have a search problem P, a relaxation of that search problem, I'm going to call that Prel, is going to be a problem where the cost of the relaxed problem for any state and action is less than or equal to the cost of that state and action in the original problem. I'll take questions afterwards. All right. So, uh, so that is a relaxed problem, okay? So the cool thing about that is, if you're given a relaxed problem, then you can pick your heuristic to be the future cost of the relaxed problem, and that is called the relaxed heuristic, okay? So, so this is kind of a recipe. A general framework.
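The reversal trick above can be sketched in a few lines. This is my sketch, not the course code: it runs uniform cost search from N on the reversed relaxed tram problem (walk takes s to s − 1 with cost 1; tram takes s to s/2 with cost 2, which I assume applies only when s is even), so the past costs it computes are the future costs of the forward relaxed problem, usable directly as a heuristic.

```python
import heapq

# Uniform cost search on the REVERSED relaxed tram problem: start at N,
# explore toward 1. The first time a state is popped, its past cost is
# optimal; that past cost equals the future cost from that state to N
# in the original (non-reversed) relaxed problem.
def futureCosts(N):
    pastCost = {}
    frontier = [(0, N)]           # (cost so far, state)
    while frontier:
        cost, s = heapq.heappop(frontier)
        if s in pastCost:
            continue              # already popped with a cheaper cost
        pastCost[s] = cost
        if s - 1 >= 1:
            heapq.heappush(frontier, (cost + 1, s - 1))   # reversed walk
        if s % 2 == 0:
            heapq.heappush(frontier, (cost + 2, s // 2))  # reversed tram
    return pastCost

h = futureCosts(10)   # h[s] is a heuristic: future cost from s to 10
```

For example, h[5] comes out to 2 (one tram ride from 5 to 10), while h[1] is 6 (walk, tram, walk, tram in the forward direction).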
Like, if someone asks you find a good heuristic, find a relaxed problem, future cost of the relaxed problem is a heuristic. And the cool thing about that is it turns out that, that, that future cost of the relaxed problem, which you are deciding to be a heuristic, is also consistent because we talked about all these consistency properties, and how you want to find the heuristic to be consistent for the solution to be correct, and how in the world am I gonna find a consistent heuristic? Well, here is one. Here is one way of finding consistent heuristics. Pick your problem, make it relaxed. Making it relaxed means that pick your cost that's less- pick, pick your relaxed problem where the cost is less than the cost of the original problem, and then future cost of that relaxed problem is just going to be a heuristic, and, and it's going to be consistent. So proof of that is two lines, so I'm going to skip that. And, and the cool thing about this like, like note about this is, there is a trade-off here. There is a trade-off between efficiency and tightness. So, sure, like making things relaxed and removing constraints. It's kinda fun, right? We have this easier problem, and you just solved it, and everything is great about it. But it's not like, like there is kind of a trade-off between how tight you want your heuristic to be. Like, you shouldn't remove too many constraints, because if you remove too many constraints, then your heuristic is not a good estimate of future costs. Remember, your heuristic is supposed to be an estimate of future costs. So, so if it is not a good estimate of future costs and it's not tight, then it's not that great. So, so there is a balance between how much you are removing your cons- your constraints and, and how that makes finding the heuristic easier, versus the fact that you want your heuristics to be tight and be close to your future costs, so, so don't remove everything. Leave some constraints [LAUGHTER] and then solve it. 
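The max-of-heuristics idea mentioned next is a one-liner. The two heuristics below are made-up examples for the tram problem with N = 10, not from the lecture: h0 is the trivial zero heuristic, and h1 charges 1 at any non-end state, which is a valid lower bound because every action costs at least 1.

```python
# Pointwise max of two heuristics; if both are consistent, so is the max,
# and the max is at least as tight as either one.
def maxHeuristic(h1, h2):
    return lambda state: max(h1(state), h2(state))

h0 = lambda s: 0                     # trivial heuristic, always consistent
h1 = lambda s: 0 if s == 10 else 1   # every action costs >= 1, so a lower bound

h = maxHeuristic(h0, h1)
```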
Um, and you can also do things, like if you have two heuristics that are both consistent, you can take the max of them, and if you take the max, it's, it's a little bit more restrictive. Maybe, maybe that is closer to your future costs, and then you can actually show the max of the two is also consistent, okay? Uh, so we talked about, uh, like relaxations and A-star. One other quick thing I want to mention, because that wasn't very clear last time, is structured perceptron. We talked about that a little bit too, and we talked about convergence of that. So quick things on that. Structured perceptron actually converges. There was this question that, uh, if we have a path, that is let's say walk, tram, and, and we end up recovering another path, that is tram, walk, is that bad, is that good? Well, turns out that the cost of both of these paths is the same thing. So if I end up getting this path, well that's perfectly fine too. Right? Like that, that is also with the same optimal weight. In the example that we have shown, in the tram example, I don't think we are able to get a path that looked like this because of the nature of the example. So, so in general, things to remember from structured perceptron is, it does converge. It does converge in a way that it can recover the true Ys, but it doesn't necessarily get the exact Ws, as we saw last time, right? Like, you might get two and four, you might get four and eight; like, as long as you have the same relationships, that, that is enough, but, but you are going to be able to get the actual Ys, and it does converge. So with that, um, project conversation is going to be next time. Do take a look at, do take a look at the website. So all the information on the project is on the website. So if you have started thinking about it, look at the project page, and that has something for you.
Search 1: Dynamic Programming, Uniform Cost Search (Stanford CS221: Artificial Intelligence: Principles and Techniques, Autumn 2019)

Hi everyone, I'm Dorsa, uh, and this week I'll be teaching the state-based models, and the plan is for the next couple of weeks for me to, to teach the state-based models: MDPs, uh, and games, and then after that Percy will come back and talk about the later, some of the later topics. So a few announcements. Uh, so homework 3 is out. So just make sure to look at that. And then the grades for homework 1 will be coming out soon. So just yeah, be aware of that. All right. So, so let's talk about state-based models, let's talk about search. So just to start, I was thinking maybe we can start with this question. Uh, if you can, let me reset this. So basically, okay, let me tell you what the question is and then think about it, and then after that I will get this working. So, so the question is you have a farmer and the farmer has a cabbage, a goat, and a wolf, and it's on one side of the river. Everything is on one side of the river. So you have this river. We have a farmer. We have the farmer with a cabbage, with a goat, and with a wolf, okay. And the farmer wants to go to the other side of the river and take everything with, with, with himself, um, and- but the thing is the farmer has a boat, and that boat can only fit two things. So the farmer can be in it with, with one of these other things, okay? So the question is how many crossings does the farmer need to take everything to the other side of the river? And there are a bunch of constraints, the constraint is if you leave the cabbage and goat together the goat is going to eat the cabbage. So you can't really do that. If you leave the wolf with the goat, the wolf is going to eat the goat, you can't really do that. How many crossings should you take to take everything to the other side? Think about it, talk to your neighbors, I'll get this working.
Everyone clear on the question? Okay. So the link doesn't work because, uh, we can't connect to Internet, but all right so. Okay. So how many people think it is four? Four crossings. Five, five crossings. Six, six. Some people think six. Seven? More people. No solution? No solution. Okay. So the point is actually not like what the answer is, we'll come back to this question and try to solve it, but I think the important points to, to think about right now is how you went about solving it. So, so what were you thinking and what was the process that you were thinking when you were trying to solve, solve this problem. And that is kind of the commonality that search problems have and, and we want to think about those types of problems where it's, it's more challenging to answer these types of questions and let's say reflex based type of questions. So, so that's kind of just a motivating example that we'll come back later. And here's an XKCD on this. So basically one potential solution is the farmer takes the goat, goes to the other side, comes back, takes the cabbage, goes to the other side and just leaves the wolf because why would he need a wolf, why would a farmer need a wolf. So [LAUGHTER] if you answered four, you probably were thinking about this. [LAUGHTER] And I guess it has like an interesting point in it because sometimes maybe you should change the problem. Your model is completely wrong. Maybe, maybe sometimes you should rethink and go back to your model and try to fix that. But anyways. So we'll come back to this question. So all right. So this was our guideline for the class, and, and we have already talked about the reflex-based model. So we have talked about machine learning and how that can get applied, and now we want to start talking about state-based models. This week, we're going to talk about search problems, next week, MDPs, and then the week after we're going to talk about games. 
If you remember the kind of the guideline that, that we had for the class was, uh, we were thinking about these three different paradigms of, of modeling, all right, we talked about this already. So modeling, inference, and learning. So for, for reflex-based models we talked about this already, right? So what would the model be, well, it can be a linear predictor or it can be a neural network. So, so that was a model. And then we talked about inference but in the case of reflex-based models it was really simple, it was just function evaluation. You had, you had your neural network and you would just go about evaluating it and that was inference. And we also spent some time talking about learning. So how would we use like let's say gradient descent to try to fit the parameters of the model, okay. So similar thing with search-based models. You want to talk about these three different paradigms that we have in the class, and, and the plan is to talk about models and inference today and then on Wednesday we'll talk about learning. We kind of have the same sort of format next week too. So we're going to start talking about modeling and inference on Mondays, Wednesdays are going to be about learning. So, so just to give you an idea of what the plan is. All right. So, so what are search problems? Let's start with a few motivating examples. So, so one potential example one can think of is, is route finding. So you might have a map and you want to go from point A to point B on the map, and you have an objective. So you want to maybe find the shortest path or the fastest path or most scenic path. That is your objective and the things you can do is you can take a bunch of actions. So you can do things like go straight, turn left, turn right, and then the answer for the search problem is going to be a sequence of actions. 
If, if you want to go from A to B with the shortest path, the answer that one would give is maybe turn right first and then turn left and then right again or any, any of these sequences. Okay so, so this is just a canonical example of what a search problem is. There are a few other examples. So for example you can think of robot, robot motion planning. So if you have a robot that wants to go from point A to point B, then it might want to have different objectives for doing that. So again the question might be what is the fastest way of doing it or what is the most energy efficient way of getting the robot to do that or, or what is the safest way of doing it. Like another question that we are interested in is what is the most expressive or, or legible way of robot doing it so, so people can understand what the robot really wants. So you might have again various types of objectives you can formalize that, and then the actions that, that you can take in the case of the robot motion planning is the robot is going to have different joints, and each one of the joints can translate and can rotate. So translation and rotation are the type of actions that you can take. So, so in this case I have a robot with seven, seven joints and then I need to tell what each one of those joints should do in terms of translation and rotation. That's your robot? This is my robot, yes. [LAUGHTER] It's a fetch robot. [LAUGHTER] All right. So, so let's look at another example. So games is, is a fun example. So you might, uh, think about something like Rubik's cube or, or this 15-puzzle, and again what do you wanna do as a search problem? Well, you wanna, you wanna end up in configuration that's desirable, right? So you wanna end up in a configuration where, where you have this type of ah, configuration on Rubik's cube or, or the 15 puzzle. So that, that is the goal, that's the objective. And then the action is you can move pieces around here. 
So, so the sequence of actions might be how you're moving these pieces around to get that particular configuration of the 15 puzzle, okay. So again another example of what a search problem is. Um, machine translation is, is an interesting one if it's not necessarily the most natural thing you might think about when you think about search problems, but what it is actually you can think about it as a search problem again. So imagine you have a phrase in a different language and you want to translate it to English. So what is the objective here? Well you can think of the objective as going to fluent English and preserving meaning. So, so that is the objective that one would have in machine translation. Um, and, and then the type of actions that you're taking is you're appending words. So you start with the and then you're appending blue to it and you're appending house to it. So, so as you're appending the- these different, different words, those are the actions that you're taking. So, so in some sense you can have any complex sequential task and, and the sequence of actions that you would get to get to your objective is there's going to be the answer for, for your search problem and you can pose it as a search problem, okay? All right. So, so what is different between let's say reflex-based models and, and search problems? So, so if you remember, reflex-based models the idea was you'd have an input x and then we wanted to find this f for example a classifier that, that would output something like, like this y which is labeled, it's a plus 1 or minus 1. So, so the common thing in, in these reflex-based models was we were outputting this, this one label, this one in this case action being minus 1 or plus 1. Again in search problems, the idea is I'm given an input, I'm given a state, and then given that I have that state, what I wanna output is a sequence of actions. 
So I do want to think about what happens if I take this action like how is that going to affect the future of my actions. Okay. So, so the key idea in search problems is you need to consider future consequences of, of the actions you take at the current state. Yes. Is this like not equivalent to like just outputting one thing and then like rerunning the function, on like the updated state? So if you rerun it. So, so the question is, yeah, is it not the same as like I'm rerunning it, I output a thing and then I rerun it again. And you could do that, but that ends up being a little bit of a- that would be some- similar to a greedy algorithm where like let's say I want to get to the door and I want to find, find the fastest way and right now if I just look at like my current state maybe I think the fastest way of getting there is going this way. But if I actually think about a horizon and I think about how this action is going to affect my future I might come up with a different sequence of actions. Okay? All right. Okay. So and, and you've already seen this paradigm so let's start talking about modeling and inference during this class. So this is the, the plan for today. So we're going to talk about three different algorithms for, for doing inference for search problems. So, so we're going to talk about tree search which is the most naive thing one could do to solve some of these search problems, but that's the simplest thing we can start with. And then after that you want to look at improvements of that doing dynamic programming or, or uniform cost search. So, um, the difference between search-based problem and reflex-based problem, the very fact that in a reflex-based problem, the output that you gave does not influence a string, and it doesn't search? Yeah. Tha- that's true. Yeah so, so the output that you get in search problem it is an action that actually influences your future. Yeah, that's a good way of actually thinking about it. Yes. All right. 
So, so let's talk about tree search. So let's go back to our favorite example. Um, okay so we have the farmer, cabbage, goat, and wolf. So let's think about all possible actions that one can take, when we have this farmer, cabbage, goat, and wolf. Okay. So, so a bunch of things we can do is a farmer can go to the other side of the river with the boat alone. So, uh, this triangle here just means like going to the other side of the, uh, the river. The farmer can take the cabbage. So C is for cabbage G is for, ah, goat, W is for wolf. So another possible action is the farmer takes a cabbage or the farmer takes the goat or the farmer takes a wolf and goes to the other side of the river. We also have a bunch of other actions. The farmer can come back. The farmer can come back with the cabbage, come back with the goat, come back with the wolf. So I'm basically numering- enumerating all possible actions that, that one could ever do. And sure none of- like not- some of these might not be possible in particular states but I'm just creating this library of actions things that are possible. Okay. So then when we think about the, ah, this as a search problem, we could create a search tree. Which, which basically starts from an initial state of where things are and then we can kind of think about where we could go from that initial state. So the search tree is more of, ah, what if- what if tree which, which allows you to think about what are the possible options that, that you can take. So, um, conceptually what- what it looks like is you're starting with your initial state, where everything is on one side of the river. So those two lines are the riv- the river the blue lines. Um, and you can take a bunch of actions, right like one possible action is you can take the cabbage and go to the other side of the river and you end up in that state. And that state is not a good state. I am making that red. Well, why is that. Because the wolf is going to eat the goat. 
That's not that great. Okay. Um, and, and every action, every crossing let's say ma- let's say every crossing takes cost of one. So that one that you see on the edge is the cost of that action. Okay. So that didn't really work that well. What else can I do? Well, I can, I can do another action. I can, I can- from the initial state, I can take the goat and go to the other side of the river, that ends up in this configuration. From there the farmer could come back, take the cabbage, go to the other side, end up in this configuration, the farmer can come back. That's again, not a great state because cabbage and goat are left on the other side of the river, goat is going to eat the cabbage. That's not great. What else can I do? Well, the farmer can come back with the goat. And then once the farmer comes back with the goat, the farmer leaves the goat, takes the wolf, goes to the other side, comes back gets the goat again. And then boom, you're done. Okay. So- so how many steps does this take? Well, one, two, three, four, five, six, and seven. So- so the ones who answer seven that was the right answer. Um, and that is kind of the idea of getting to this end state. Yes. So to be specifically, ah, not include the option that the going back to the previous state even though that's a valid next step just because we know that there's something- So you could have this giant tree where you go to different states but we can actually have like a counter that tells you if I have visited that state and if you have visited that state maybe you don't want to go there again because, because you have already explored all the possible actions from there. You're not done with this tree though, right? Like I've, I've found that this good state here, but maybe there's a better way of, like getting there. I don't know yet. I haven't explored everything. So, so what I can do is, I can actually explore all these other things that, that one could do. And I'm not gonna go over them. 
But there is another solution, and turns out that other solution also takes seven steps. So it's not necessarily a better solution, but, but you've got it for all of that because there could be another solution later on that. That is, uh, better than the seven steps. Okay. All right. Yes. Are these slides up? They are, they should be. Okay. Slides are up. Okay. Um, all right. So, so this is how the search tree looks like. Yeah. I'm just asking [inaudible] Oh, that's a very good point. Thank you for- [LAUGHTER] thank you, so for SCPD students I'll try to repeat the questions. I always forget this. Um, I'll try to repeat the question. The question was, ah, was the slides, uh, the slides aren't up, they're up, they should be up. So okay. All right. So, uh, going back to our search problem. Ah, so we can try to formalize this search problem. So, so let's actually think about it more formally. So what are the things that we need to keep track of. So, so we have a start state. So let's defined a start to be the start state. In addition to that we can, we can define this function called actions which returns all possible actions from states. So actions as a function of state. If I'm in a state, that basically tells me what are the actions I can take from there. I can, I can define this cost function. So this cost function, takes a state and action and tells me what is the cost of that and in this example, the cost of crossing the river was just one but you can imagine having different costs values. Ah, we can have a successor function that basically takes a state and action and, and tells us where we end up at. So if I'm in state S and I take action A where would I end up at? And that's the successor function. And then we're going to define an IsEnd function, which basically checks if you're in an end state where we don't have any other possible actions that you can take. Yes. So these are the [inaudible] I got a call? 
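The formalization just laid out (start state, actions, costs, successors, end test) can be written as an abstract interface, with the naive tree search from the river-crossing discussion as a recursive backtracking routine over it. Method names follow the lecture's formalization, but the code itself is a sketch, not the official course implementation; `TinyProblem` is a hypothetical toy instance for illustration.

```python
# Abstract search problem interface mirroring the formalization above.
class SearchProblem:
    def startState(self): raise NotImplementedError
    def actions(self, state): raise NotImplementedError       # actions available from state
    def cost(self, state, action): raise NotImplementedError  # cost of taking action in state
    def succ(self, state, action): raise NotImplementedError  # state reached by that action
    def isEnd(self, state): raise NotImplementedError

# Naive tree search: recursively try every action sequence, keep the cheapest.
# (Assumes the search tree is finite; with cycles, as in the river-crossing
# puzzle, you would also need to track visited states.)
def backtrackingSearch(problem):
    best = {'cost': float('inf'), 'history': None}
    def recurse(state, history, totalCost):
        if problem.isEnd(state):
            if totalCost < best['cost']:
                best['cost'], best['history'] = totalCost, list(history)
            return
        for action in problem.actions(state):
            recurse(problem.succ(state, action),
                    history + [action],
                    totalCost + problem.cost(state, action))
    recurse(problem.startState(), [], 0)
    return best['cost'], best['history']

# Hypothetical toy instance: states increase, so the tree is finite.
class TinyProblem(SearchProblem):
    def startState(self): return 1
    def actions(self, state): return ['walk', 'tram']
    def cost(self, state, action): return 1 if action == 'walk' else 2
    def succ(self, state, action): return state + 1 if action == 'walk' else 2 * state
    def isEnd(self, state): return state >= 4

cost, history = backtrackingSearch(TinyProblem())
```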
You can, you can think of it as, yeah, as a finite state machine type of, type of, uh, way of looking at it. Yeah. So like we- we use a similar type of formalism, uh, for MDPs and games too. So it's a good idea to get like all these formalisms right. But start state, transitions, costs. Those sorts of things. Okay. Yes. What's the [inaudible] like [inaudible]. Ah, say it again so. A cost [inaudible] like. Cost? Position and action, and action already concerns the state. So then- so- so the action, okay, so action depends on state. So you start from the start state where you haven't taken any actions, right, and then from that start state then you can think about all possible, like right up there. So you're under that start state, and there you can think about all possible actions you can take, and then those actions depend on the current state but they don't depend on the future state, right. So based on like the current state, everything is on one side of the river. I can think about all possible actions I can take and where I know- where I end up at. And then, after that, like the next action depends on that. Yeah, that's it. So it's a sequential thing. Okay. Yes. You have all the information on the actions and the cost that you could do beforehand, how is this conceptually different than like a min cost flow convex optimization? You can think of it. Okay. So- so how- how is it different from a kind of convex optimization type of role? So- so we have- we have an objective here and then you can think of what that objective is and based on what that objective is, we can have different methods for solving it, right? So- so you can basically formulate this as an optimization problem, where you look for the solution to a search problem as an optimization problem too; that's a perfect way of doing it. And, and we're going to talk about various types of methods for- for solving this problem today. Okay. All right. So- so let's look at another example.
So, um, this is, um, transportation problem. Now I'll just move this. So, um, okay. So basically, what we wanna do is we have street blocks from 1 through N. So 1, 2, 3, 4, so on. So these are street blocks and N is here. And what we wanna do is we basically want to travel from, from 1 to, to some N number. And we have two possible actions. So at any state, let's say I'm in state S. At any state, I can either walk, and if I walk I end up in S plus 1. So if I'm in 3, I'm going to end up in 4. And walking takes one minute. Or I can take this magic tram. And this magic tram, takes any state S to 2 times S. So if I'm in 3, then I am going to end up in 6 by taking the magic tram. And the magic tram always takes two minutes, doesn't matter from where to where. So, so if I'm in 2, I will end up in 4, if I'm in 5 I can end up in 10 by taking the tram. Okay. So, so I have two possible actions in any of these states. And what I want to do is, I want to go from 1 to N and then I want to basically do that in the shortest, uh, time possible. Okay. So with the- with the least amount of costs. That's the problem, makes sense? Okay. All right. So, so this is kind of like, what the search problem is. So what we wanna do is first off, you want to just formalize it. Uh, and I'm gonna do that here. I'm not gonna do live solutions because I'm not Percy, and I did that once and it was a disaster. So [LAUGHTER] we are going to, uh, yeah I taped these in 2018. Uh, but, uh, basically, we're going to go over it together. So, so let's just do that. Um, so we're going to define the search problem, this tram problem. So we're gonna define a class for transportation problems. So we're going to separate our search problems from our algorithms because remember modeling is separate from inference. So let's just have a constructor for this transportation problem. It takes N, because we have N blocks. Okay. So N is the number of blocks. Okay. All right. 
So, so then you have- we still have a start state. We're starting from 1 so block 1. And then we need to define IsEnd state. So IsEnd state basically checks if you've reached N or not. Because, because we have to get to the Nth block. Okay. All right. So what else do we need? So we have a successor function. We also have a cost function. I'm gonna put both of them together, because, because that is just easier. So the successor and cost function, I'm saying let's just give it state S. And then given a state it's going to return this triple of action, new state, cost. So I give it a state, let's say initial state, and then it just returns all possible actions, with the new states I can end up at and how much does that cost. Okay? So what are my options? Well, if I'm in state S, I can walk to S plus 1, and that costs 1. If I'm in state S, I can take the tram, I can end up in 2S, and that costs 2. Okay. So that's how I'm creating my triples. And, and I need to check that I don't pass the Nth block. Remember, like we have N blocks so we don't want to pass the Nth block. Okay. So, so that's just to make sure that we don't pass it. So we are still below the Nth block. And, and this is what my successor and cost function will return, the triples. Okay. So let's just return that. Okay. So that is my transportation problem. Let's make sure it does the thing the way we want it. So let's say we have 10 blocks, and now I wanna print my transportation- my successor and cost function. Let's say I'm returning successor and cost for 3. What should I get? So from 3, I can have two actions, right. I can either walk or I can take the tram. If I walk, uh, it costs 1. If I take the tram, it costs 2. I'll end up in 4 or 6. Let's just try, I don't know, 9. If I'm in state 9, I can only do one thing, I can walk, right? Because remember, the, the block is- number of blocks is 10 and I can't go beyond that. So- all right. Um, okay. So that was, um, [NOISE] yeah, let's go back here.
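A rough Python sketch of what this class looks like, reconstructed from the description above (the method names `startState`, `isEnd`, and `succAndCost` follow the lecture's wording; the exact lecture code may differ in details):

```python
class TransportationProblem:
    def __init__(self, N):
        self.N = N  # number of street blocks

    def startState(self):
        return 1  # we always start at block 1

    def isEnd(self, state):
        return state == self.N  # done once we've reached block N

    def succAndCost(self, state):
        # Return (action, newState, cost) triples, making sure
        # we never pass the Nth block.
        triples = []
        if state + 1 <= self.N:
            triples.append(('walk', state + 1, 1))  # walking costs 1 minute
        if 2 * state <= self.N:
            triples.append(('tram', 2 * state, 2))  # the tram costs 2 minutes
        return triples

problem = TransportationProblem(N=10)
print(problem.succAndCost(3))  # [('walk', 4, 1), ('tram', 6, 2)]
print(problem.succAndCost(9))  # [('walk', 10, 1)]: the tram would overshoot
```

Note that modeling stops here: nothing in this class says how to search; the algorithms below consume it through `startState`, `isEnd`, and `succAndCost` only.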
So that was just defining, uh, the search problem, [NOISE] okay? And, and I haven't told you guys like how to solve it, right? This is- we are just doing the modeling right now. So we just modeled this problem. We just coded it up. Modeling it means, what is this- what are, what are the actions, what is a successor function, what is a cost function, defining an is end function, saying what, what the initial state is, okay? So, so now I think we are ready to think about the algorithms in terms of, like, going and solving these types of search problems, okay? So the simplest algorithm we want to talk about is, is backtracking search. So the idea of backtracking search is- maybe I can draw a tree here, is you're starting from an initial state and then you have a bunch of possible actions. And then you end up in some state and you have a bunch of other possible actions. [NOISE] Let's say you have two actions possible. And this can become- [NOISE] this exponentially blows up so I'm going to stop soon. [LAUGHTER] All right. So, so we create this tree and this tree has some branching factor. That's the number of actions you have at, at every, at every state. And then it also has some depth. [NOISE] So that is how many levels you go down. [NOISE] So let me just define that with D, okay? And now there are solutions down in these nodes, right? So, so we wanna figure out what those solutions are. And backtracking search just does the simplest thing possible. What it does is, it starts from this initial state and it's going to go all the way down here. And if it doesn't find a solution, it's gonna go back here and then try again and try again. And it's gonna go over all of the tree because there might be a better solution down here too. So it needs to actually go over all of the tree, okay? So I'm gonna have a table of algorithms because we're gonna talk about a few of them here.
Algorithms, [NOISE] what sort of costs they allow, in terms of time, how bad they are, in terms of space, how bad they are. So if you've taken an algorithms course, like, some of these are probably familiar. So, er, all right. So we talked about backtracking search, [NOISE] backtracking search. That is basically this algorithm that goes through pretty much everything, and it allows any type of cost. So I can have [NOISE] any cost, right? I can have pretty much any cost I want on these edges because I'm going over all of the tree. It doesn't matter what these costs are, okay? So, um, how-, how bad is this in terms of, in terms of time? So in terms of time, I'm going over the full tree. By going over the full tree, then, then this, this is going to have this exponential blowup where I'm looking at order of b to the d, where b is, again, my branching factor and d is the depth of the tree, okay? So in terms of time, this is not a good algorithm. Like, in terms of time, I have to go over everything in the tree. And that's the size of my tree, okay? And in terms of space, in terms of space, what I mean is, I need to figure out what was, what was the sequence of actions I needed to take to get to some solution. So let's say that my solution is down here. If my solution is down here, then for me, in or- like, I need to store a bunch of things to know how I got here, and the things I need to store are the parents of this node, and that is depth D. So in terms of space, this algorithm takes order of D, okay? Because, because that is, like, the things that I need to store in my memory to be able to recover, like, the solution when I get there. Yes. [NOISE]. Question. Because we need to look at everything, shouldn't this space be big O here, b to the d as well? Because until you get to that, you need, you need to have the space to have everything, right? You would think that, but [NOISE] no.
So actually, we'll talk about breadth-first search later, which does require you have a larger space. So, so the reason you can forget it is the only history that I need to keep track of is this particular branch, right? I don't need to figure out, like, I don't need to keep track of, like, actually the history of all these other nodes. I can, I can throw it- [NOISE] those out. But for something else like breadth-first search where we'll talk about in a few slides, you actually need to keep track of, like, the history of everything else. So, so let me get back to that in a few slides. But for this one, basically the idea is, um, yeah, like, I wanna know how I got there. To, to know how I got there, I just need to know the parents. Yes. [inaudible] like the minimum cost to reach a point or is it to find whether, like, you can or cannot reach a certain point in your search. So it depends on what your objective is. Like, it really depends on what the search problem is asking. So, so in the case of that farmer-goat example, uh, the search problem is asking, you wanna move everything to the other side of the river. So you have that criteria. And you wanna find the minimum cost one, so you also have that other cri- criteria. So it really depends on what the search problem is asking. And some of these nodes might be solutions. Some of them might not be solutions. So, so it really depends, okay? All right. So, so let's just look at these on the slide. So the memory is order of D. It's actually small. It's nice. In terms of time, this is not a great algorithm, right? Because even if your branching factor is 2, if the depth of the tree is 50, then this is gonna blow up, like, immediately. So a lot of these tree search algorithms that we're gonna talk about, like, they have the same problem. So, so they pretty much have the same time complexity. We're going to just look at very minimal improvements of them. 
And then after that, we'll talk about, uh, dynamic programming and uniform cost search, which are polynomial algorithms that are much better than these, okay? All right. So let's actually- let's go back to the tram example and let's try to write up what backtracking search does. So- all right. So we defined our model. Our model is the search problem, this particular transportation search problem. It could be anything else. Um, and now we're going to kind of have this main section wi- where we're going to put in, like, our algorithms in it. And we're gonna write them as general as possible so, so we can apply them to other types of search problems, okay? So let's define backtracking search. It takes a search problem. It can take the transportation problem, okay? All right. So- and then we're going to- basically in backtracking search, what we're doing is we're recursing on every state given that you have a history of, of getting there and the total cost that it took us to, to get there, okay? So, so at the state, having gotten some history and some accumulated costs so far, we are going to basically recurse on that state and look at the children of that state, okay? So, so we're going to explore the rest of the subtree from, from that particular state, okay? All right. So how do we do that? [NOISE] Well, we gotta make sure that we're not in an end state. Or if you're in an end state, like, we can actually update the best solution so far, okay? So let's put that for to do. So, so, so the bunch of things that we need to do. We need to figure out if you're in an end state. If we are, well, we got to, we gotta update our best solution. If you're not in an end-state, then we're going to recurse on children, okay? All right. So we can do that later. And then in general, this recurse function is, is going to, uh, we're going to call it on on the, on the start state. So let's actually do that too. 
So, so what backtracking search does is it calls this recurse function on the initial state that we have with history of none, right? Like, we don't have any history yet, and, and cost is 0 so far because we haven't really gone anywhere. So, so we start with a start state. We call recurse on it, okay? [NOISE] And how do we recurse on children? Well, we have defined this, this successor and cost function. So by calling that successor and cost function on state, then we can get action, new state, and cost. So, so we get this triple of action, new state, and cost, okay? And then we can basically recurse on the new state. Um, I'm not putting the histories right now in this code. So, so we need to keep track of the history too, but, but let's just not worry about the history. Oh, I guess I'm putting it in this one. [LAUGHTER]. In the later ones I will not put them. But, but basically the history is keeping track of, like, how you got there. And to- total cost is going to be [NOISE] what, what you've got so far plus the cost of this, this new state, action pair, okay? Okay. So we need to keep track of the best solution so far. So I'm just going to define a dictionary here just to make sure that we keep track of it and for Python scoping reasons. Okay. And then the place we're going to update our best solution so far is that to-do that is left, right? So, so if you're in an end state, then we can actually update the best solution so far, okay? And what do we want in our best solution? Well, we wanna know what the cost is. So, so we can start with cost of infinity. And anything below infinity is better. [NOISE] And then we're going to start with a history of empty, but we're going to fill up that history too, okay? So that's the initialization of best solution so far. Then, we're going to update that, right? If you're in an end-state, if the total cost that we have right now is smaller than the best solution so far, then we're going to update that best solution.
And, and you're going to update its history with whatever its history is, okay? All right. And, and that's it, that's backtracking search, okay? So let's just make sure it does the thing. So maybe- so to do that, [NOISE] we are going to- actually, no, we gotta return the best solution so far. Mm-hmm. All right. So now we have defined a transportation problem. Now, what I want to do is, I want to call backtracking search on the transportation problem, okay? So that all sounds good. I need to write a print function also to- to be able to print things. So I'm gonna just write a generic print function that we can call on any of these types of problems. So let's- let's define a print solution function that just like, prints things the way we want them. So we get the solution, and we're gonna just unpack that cost and history and just print the cost and history nicely. Okay. All right. So I can- I can use this print solution for pretty much all the other algorithms we'll talk about today too. Okay. And we're gonna talk about how we get there- to the history. So now I have my print function, I have my backtracking search algorithm, I've defined my transportation problem. I can just call it on this transportation problem with 10 blocks. So as you guys can see here, so the total cost is 6. So what this means is for going from city 1 to city, city 10, then this is the best solution. I- I gotta walk, walk, walk, walk, and then after that ta- take the tram. Because like I end up in 5, and then after that it's actually worth taking the tram and paying that cost of 2. Um, let's try it out for 20. What do you think is the answer for 20? So [LAUGHTER] similar to before, walk, walk, walk until we get to 5, then we take the tram, then we take the tram again. The cost is 8. And then if, if it is 100, it's a little bit more interesting if you have 100.
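Pulled together, the backtracking search just walked through might look like this sketch (reconstructed from the lecture's description, not its exact code; the `TransportationProblem` class is restated compactly so the block runs on its own):

```python
def backtrackingSearch(problem):
    # Best solution kept in a dict (for Python scoping reasons, as in lecture).
    best = {'cost': float('inf'), 'history': None}

    def recurse(state, history, totalCost):
        if problem.isEnd(state):
            # Reached an end state: update the best solution so far.
            if totalCost < best['cost']:
                best['cost'] = totalCost
                best['history'] = history
            return
        # Otherwise, recurse on all children of this state.
        for action, newState, cost in problem.succAndCost(state):
            recurse(newState, history + [(action, newState)], totalCost + cost)

    recurse(problem.startState(), history=[], totalCost=0)
    return best['cost'], best['history']

class TransportationProblem:  # restated so this block runs standalone
    def __init__(self, N): self.N = N
    def startState(self): return 1
    def isEnd(self, state): return state == self.N
    def succAndCost(self, state):
        triples = []
        if state + 1 <= self.N: triples.append(('walk', state + 1, 1))
        if 2 * state <= self.N: triples.append(('tram', 2 * state, 2))
        return triples

cost, history = backtrackingSearch(TransportationProblem(10))
print(cost, history)  # cost 6: walk to block 5, then take the tram
```

Note how the algorithm only talks to the problem through `isEnd`, `succAndCost`, and `startState`: modeling stays separate from inference, so the same function works on any search problem with that interface.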
So you are walking and then you're taking the tram and you get to 24 and you walk- you walk that one step to get to 25, which is the good state because then you can just multiply that by 2. So you walk for that one step and take the tram again, okay. So what if I want to try out a much larger number of blocks? So is this gonna work? No, because, because remember, that time was order of b to the d. That wasn't that great. So let's try that. Well, we got a maximum recursion depth error, we can fix that. So [LAUGHTER] let's try fixing that. [LAUGHTER] So you can, you can set your recursion limit to be whatever. So you can try that. Is this gonna work? [LAUGHTER] Now, it's just gonna take a long time, right. So, so it's not going to give you an answer [LAUGHTER] And it's gonna just take a long time. So all right. [LAUGHTER] Actually, how do I view? Okay. Let's go back here. All right. So that was backtracking search, right? So all it was doing was just going over all of this tree and it was taking exponential time as you saw and we just tried it out on that transportation problem that we defined. So we just defined a search problem, we used this really simple search algorithm to find solutions for that, and- and then that's what we have so far. So, so now what we want to do is, we want to- we want to come up with a few better improvements of this backtracking search. Again, don't get your hopes up, it's not that big of an improvement. But, but we can do some- something better. So, so the first improvement you want to make is by using this algorithm called depth-first search, as some of you might have heard of it. DFS or depth-first search, okay? So the restriction that DFS puts in is, is that your cost has to be 0. So your cost has to be, let me leave that. Um, let me actually draw a line between them. So you don't get. Okay, so, so we are talking about DFS now, and the restriction is the cost has to be 0.
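For reference, the recursion-limit tweak tried in the demo is just this (the exact limit value here is arbitrary; note it only removes the `RecursionError`, it does nothing about the exponential running time):

```python
import sys

# Raise Python's recursion limit (the default is usually 1000) so deep
# backtracking recursion doesn't raise RecursionError. This does nothing
# about the O(b^D) running time, so very large N is still infeasible.
sys.setrecursionlimit(100000)
```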
So, so what DFS does, is it basically does exactly the same thing as backtracking search, but once it finds a solution down here then it is done. It basically doesn't like explore the rest of the tree. And the reason it can do that is the cost of all these edges is 0. So if the cost of all these edges are 0, then if I find a solution I found a solution. I don't need to like find this better solution. Because, because that, that is good enough like anything that I find also has a cost of 0, so I might as well just return the solution. Like, an example of that is if you have Rubik- Rubik's cube uh, like if you find a solution then you have found a solution, right? There are a million different ways of like getting to a solution, but like you just want one. And then if you find one, then you're happy, you're done. Okay. So as you can see, this is a very, very slight improvement to backtracking search. Um, what happens is in terms of, in terms of space it's still the same thing. So it's order of D. So in terms of space nothing has changed. It's pretty good, it's order of D. In terms of time, in practice it is better, right? Because in practice if I find a solution, I can just be done, don't worry about the rest of the tree. But, but in, in general, if you want to talk about it in theory then the worst case scenario is just trying out all of the trees, so you write it as worst case scenario, it's order of b to the d. So, so nothing has really changed in terms of- in terms of exponential blow up. Yes. I've been thinking of how you draw that tree, it seems that you imply that the sub problems do not overlap, right? Because you're kind of [inaudible] but in fact the sub-problem could overlap. So you- somebody with a training problem, you can get to the same place through different history but the rest is the same. Yeah, so you can- so, so the question is yeah, do sub-problems overlap here or they don't. 
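Since DFS here is just backtracking search that returns the first solution it finds (valid when every edge cost is 0), a sketch only changes the recursion to stop early. The zero-cost tram variant below is hypothetical, made up just to exercise it:

```python
def depthFirstSearch(problem):
    # Assumes all edge costs are 0, so the first solution found is as good
    # as any other: return it and skip the rest of the tree.
    def recurse(state, history):
        if problem.isEnd(state):
            return history  # found a solution: we're done
        for action, newState, cost in problem.succAndCost(state):
            result = recurse(newState, history + [(action, newState)])
            if result is not None:
                return result  # propagate the first solution found upward
        return None  # no solution anywhere in this subtree

    return recurse(problem.startState(), [])

class ZeroCostTram:  # hypothetical zero-cost variant of the tram problem
    def __init__(self, N): self.N = N
    def startState(self): return 1
    def isEnd(self, state): return state == self.N
    def succAndCost(self, state):
        moves = [('walk', state + 1), ('tram', 2 * state)]
        return [(a, s, 0) for a, s in moves if s <= self.N]

print(depthFirstSearch(ZeroCostTram(10)))  # first solution found, not shortest
```

With walk tried before tram, the first solution it hits is the all-walk path, which is fine here: every solution costs 0.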
So you could actually have it in a setting where sub-problems do overlap, but you could actually add this, this extra like constraint that says if I visited the state, then don't add it to the tree. So, so you have that option or you have the option of like going down to tree with some, like particular depths and not trying out everything. In the setting that we have here, yeah, like we're basically trying out all possible. Like, I'm talking about the most uh, like, general form where you're going over all the states and all possible actions that could come out of it, okay? All right. So that was DFS. Okay. So the idea of DFS again as you're doing backtracking search and then you're just stopping when you find a solution because- because cost is 0, okay? So in terms of s- space order of D, in terms of time, it's still order of b to the d, okay? All right. So that was DFS. We have another algorithm called breadth-first search BFS. And this is useful when cost is some constant but it doesn't need to be 0, it's just some, some, some positive constant. So what that means is all these edges have the same cost and that cost is just C. So I have the same cost pretty much everywhere, okay? So the idea of breadth-first search, is we can- we can go layer by layer. Like, like we're not going to try out the depth. Instead what we can do is, we can go layer by layer, try out this layer and see if we find a solution here. Remember the tree doesn't need to go all the way down here. The tree could end here or like at any of these and any of these nodes. Like, like I can have like a tree that looks maybe like this. I have a solution here. Like this tree doesn't need to be like this nicely formed. Like I can have a tree that looks like this, okay? So if I have a tree that looks like this, with breadth-first search, I'm gonna try out this layer. See if this guy is a solution. If it's not, I'm gonna try this guy, see if this is the solution. 
If not I'm gonna try here, here, and then when I find a solution when I get here, I'm done, right? Because like if I find a solution here, I know it took 2C to get here. Like two of these C values. And if there is any other solution anywhere else in this sub-tree or in this sub-tree, those solutions are going to be worse than this. Because they are gonna just like take like, they- they're going to have a higher cost, okay? So because the cost is constant throughout. Okay. So then it's, it's useful if your solutions are somewhere like high up in this tree and then you can find it. So in terms of time, I get some improvements here because I can call this depth, this shorter depth the small d. I'm gonna call this shorter depth small d. And in terms of time, it's still exponential but it's order of b to the small d. And this is actually a huge improvement, because if you think about it, the tree becomes exponentially larger. So these like lower levels have a lot of things that you need to, you need to explore. If we have like branching factor of 10, the next layer has 100 things in it, right? So- so going down these layers is actually pretty bad. So, so the fact that with bre- breadth-first search I can improve the timing and, and limit it to a particular depth, that's pretty good. Still exponential, but pretty good. Yes. [inaudible] negative cost at that point, you can also assume this is the best solution. Yeah, you can assume that this is the best solution. Yeah, exactly. So you are assuming that there are no negative costs. So at this point, I know this is the best solution, I'm done. Like I call it, and, and I don't like explore anything else. The problem with breadth-first search is um, there's a question there, sorry. Are you also assuming all the costs are the same? Yeah, we're assuming all the costs are the same.
Because maybe you like all the costs are 1, if- if I don't assume that, if all of these costs are 100 and then like there might be like some, some other like um. [inaudible]. Yeah, you need to explore the rest if they're not the same basically. That's what I mean. All right. So, so the the problem with BFS is, in terms of memory we are losing. In terms of memory, you need to actually keep track of the history of all these other, like all the nodes that you have explored so far. So uh, in terms of memory, this is going to be order of b to the d, kind of similar to the time. And, and the reason is, I have explored this guy. And then after exploring this guy, I need to still have like a history of where it's going to go, because next time around when I try out this layer, I need to know everything about this parent. And I,- like when I- when I explore here and this is not a solution, I need to store everything about this, because maybe I don't find a solution in this, in this level and I need to come down. And when I come down, I need to know everything about these nodes. So I need to actually store pretty much like everything about the tree until I find my solution. And then that's where you lose like in breadth-first search. In terms of space, it's not going to be that great. So in terms of space, it's now order of b to the d. It's a lot worse than what we've had. In terms of time, it is, it is better. It's still exponential, but it is better, okay? All right. Okay, so now um, let's talk about one more algorithm and then afterward we, we jump to dynamic programming. There is a question back there. One thing though, the small d can be the same as the big D, right? It can. Yeah. So, it is exponential. I agree. Small d can be the same as big D. 
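A breadth-first sketch of the layer-by-layer idea, using a queue (assuming, as above, every action has the same constant cost, so the shallowest solution is optimal; the unit-cost tram variant is made up for illustration):

```python
from collections import deque

def breadthFirstSearch(problem):
    # Explore the tree layer by layer; with constant edge costs, the first
    # (shallowest) solution popped off the queue is optimal.
    queue = deque([(problem.startState(), [])])
    while queue:
        state, history = queue.popleft()
        if problem.isEnd(state):
            return history
        for action, newState, cost in problem.succAndCost(state):
            queue.append((newState, history + [(action, newState)]))
    return None  # no solution anywhere in the tree

class UnitCostTram:  # hypothetical variant where every action costs the same
    def __init__(self, N): self.N = N
    def startState(self): return 1
    def isEnd(self, state): return state == self.N
    def succAndCost(self, state):
        moves = [('walk', state + 1), ('tram', 2 * state)]
        return [(a, s, 1) for a, s in moves if s <= self.N]

print(breadthFirstSearch(UnitCostTram(10)))  # a fewest-actions path to block 10
```

The memory cost discussed above shows up directly here: the queue holds entire layers of the tree, which is where the order b to the d space comes from.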
But in practice, if small d is not the same as big D, we are- we are winning a lot because, because, yeah, these lower layers are so bad that, that people actually like to call it- call the fact that we, we are order of b to the small d rather than big D. Yes? Is there a reason why b to the d would be the worst case scenario for the time for DFS? Uh, so DFS needs to go all the way down to these lower, lower levels. But BFS can stop at every level because it's doing level by level. That can be the worst case scenario [inaudible]. Yeah. So the reason is- yeah, so like you were saying, okay, so in DFS we were also saving some time, right? Like why aren't we calling that out? And then the reason is with DFS you still need to get to these like lower layers, and that is the, like, that is the place that you're losing on time. So, so the fact that you're still, like, losing on time and surely you haven't explored these other ones, but you have already got to these lower trees, like, so far, um, that's pretty bad. So, so that is why we are calling it order of b to the d in a worst case. Okay. All right. So this, this last algorithm I wanna talk about is, is an idea that tries- it's a cool idea. It actually tries to combine the benefits of BFS and DFS. And, and this is called, uh, DFS Iterative Deepening. So what this algorithm does is it basically goes level by level, same as BFS, because then that way i- if you find a solution, you're done, everything is great, right? Uh, but what, what it does is for every level, it runs a full DFS. And, and it feels- it's like it's gonna take a long time. But, but it's actually good because, again, if you find your solution, like, early on, it doesn't matter that you have run like a million DFSs so far. So, um, so it's kinda like an analogy of it is, is imagine that you have a dog, and that dog is DFS, and it's on a leash, and you have like a short leash.
And when it is on that leash, it's going to do a DFS and try out and search all the space, and it doesn't find anything. So it comes back, and then you're going to extend the leash a little bit, and it's gonna do everything, and, like, search everything, and do a DFS. Comes back, doesn't find anything, you extend the leash again. So, so that's the idea. Like extending the leash is this idea of extending your, your levels, okay? So, uh, so how does, how does DFS iterative deepening do? Yes? Um, if what we're looking for is at the bottom of the tree, is it even worse [inaudible]. Uh, say that again, say that. So if, if what we're looking for is at the bottom of the tree, is that gonna be worse than- Yes, exactly. Yes, that's, that's okay. That's a good point. So the point is, uh, the, the point that, um, I mentioned is, if your solution is, like, here, you are screwed. It's worse than BFS or DFS, right? You're doing all these DFSs through like a bigger, like, higher-level BFS and you're- and, and it's, it's a terrible situation. But again, in practice, like, we are hoping the solutions are not gonna end up like down this tree. But yeah, if the solutions are down the tree, then you're not, like, winning anything by, by using DFS iterative deepening. What exactly, like what problems do you think DFS iterative deepening would be, like, useful for? In general, if you- okay. So the question is, yeah, so what problems do we think DFS iterative deepening is useful? Uh, in general, if like, there are problems that I think BFS is going to be useful, usually, DFS iterative deepening is useful. The reason I would think that is, like, there is some structure about the problem that I would think I would find my solution earlier. So if I, if I have some reasons or some, some reasons about the problem, about the structure of the problem, and I think solutions are low depth, I should use some of these algorithms. And in DFS with iterative deepening in terms of space, it helps too, so might as well use that. All right.
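The leash idea can be sketched as a depth-limited DFS wrapped in a loop over increasing depths (a reconstruction, assuming constant edge costs as with BFS; `UnitCostTram` is a made-up constant-cost variant of the tram problem):

```python
def iterativeDeepening(problem, maxDepth=100):
    # Depth-limited DFS: the "dog on a leash" of the given length.
    def limitedDFS(state, history, leash):
        if problem.isEnd(state):
            return history
        if leash == 0:
            return None  # leash exhausted: back up
        for action, newState, cost in problem.succAndCost(state):
            result = limitedDFS(newState, history + [(action, newState)], leash - 1)
            if result is not None:
                return result
        return None

    # Extend the leash one level at a time, rerunning a full DFS each time.
    for depth in range(1, maxDepth + 1):
        result = limitedDFS(problem.startState(), [], depth)
        if result is not None:
            return result
    return None

class UnitCostTram:  # hypothetical constant-cost tram variant
    def __init__(self, N): self.N = N
    def startState(self): return 1
    def isEnd(self, state): return state == self.N
    def succAndCost(self, state):
        moves = [('walk', state + 1), ('tram', 2 * state)]
        return [(a, s, 1) for a, s in moves if s <= self.N]

print(iterativeDeepening(UnitCostTram(10)))  # found once the leash reaches 4
```

Because each pass is a plain DFS, the memory in use at any moment is just the current branch, which is why the space stays order of the small d even though the outer loop looks BFS-like.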
So, so in terms of space, it's going to be order of small d. So in terms of space order of small d. And then in terms of time, you'd get the same benefits of, uh, it gets the same benefits of, uh, BFS. So, so that's, that's nice. And then again, like, because it has this BFS outer loop, it has the same sort of constraint on the cost. There's gotta be a, uh, constant constraint on that cost, right? So that is our table. And again, in looking at this table in terms of time, you're just not doing well, right? Like you have these exponential time algorithms here. And, um, we cou- could avoid the exponential space with using something like DFS iterative deepening. But still, this time thing is- it's just not that great, okay? And what we wanna do now is we wanna talk about search algorithms that bring down this exponential time to polynomial time somehow. And then there is no magic, we'll talk about how. [LAUGHTER] And dynamic programming is, is the first algorithm, okay? Yes? Why does iterative deepening give us b to the d time and order of d space? Uh, yeah. So it- so, so the way iterative deepening works is, it sets the lev- or say level is one. So if level is one, I'm gonna do a full DFS, okay? Because I'm doing a full DFS in terms of space, uh, I- it's the same as DFS in terms of space. I just- it's just the same as the length where we find a solution. Let's say the length where I find the solution is small d. So now, I say level is two, my new level is two, I'm gonna do a full DFS, okay? [NOISE] So when I do a full DFS, then in terms of space, I need to- I need to just remember my parents, so that's why it's order of d in terms of space. And in terms of time, it's, it's order of b to the d because if I find my solution here, I'm done, I don't need to, like, explore anything else. And, and that is exponential but exponential in, in this smaller depth as opposed to the longer depth similar to, similar to BFS. Yes? I'm sorry.
I still don't understand why, let's say, like, the small d is the same as the big D, right? And- That's a- okay. So that's a very good question. So you- I think I know it. So you're asking small d, if small d was the same as big D. If I had my solutions down here, why am I, like, differentiating here between a small d and big D, right? Is that what you're asking or am I- I'm just gonna ask if it's, like, the depth is quite large, like, small d is large, and why isn't it also a function of d? As in why wouldn't it be, like, d times b to the d? Um, oh, I see what you're saying. So, so you're saying, okay, like, when I'm doing, when I'm performing DFS iterative deepening, then I'm doing DF- DFSs. So sure, it's order of b to the d for each of them, but then I'm doing d of them. And if d is really large, I should put that here. Sure, I, I do agree that is the right time. But again, I'm- like, in, in, in the, in the case of this exponential, this is so bad that, that we are just dropping that, like, we don't even worry about that, the extra d that comes in. But it is true, you need to have that extra d, like, in, in general if you want to talk about it. Kind of wanna move on to dynamic programming, but last question there. Uh, following up on that, presumably though you're saving the work that you've done during the prior iterations, so you're not really computing anything larger than O of b to the capital D, correct? Yeah, that's right. The worst-case scenario is O of b to the capital D. All right. So let's move to dynamic programming. Okay. So, so what does dynamic programming do? So maybe I can- I'll, I'll still use this because I might need to use this thing later. Okay. So I'm gonna erase my parameters up on here. Okay. So the idea of dynamic programming, we have already seen this in the first lecture, is I have a state s, and I wanna end up in some end state. But to do that, I can take an action that takes me to S-prime, right?
I can, I can end up in s-prime with cost of s and a. I can take an action that ends up in s-prime. And then from there, I can do a bunch of things. I don't know what. But I'll end up in some end state, okay? And what I'm interested in actually computing, for this state s, is: what is the future cost of s, okay? And this part of it is the future cost of s-prime, and I don't know what it is, but I can just leave it as FutureCost of s-prime. So if I wanna find what FutureCost of s is, maybe I should shift this a little bit to the right. I'm gonna write Cost of s, a for this edge. I'm gonna erase this. What I'm interested in finding is FutureCost of my state s. So what is that equal to? Well, that's going to be equal to this Cost of s, a, right? Like, at state s, I'm going to take action a. So it's going to be Cost of s, a plus FutureCost of s-prime. Again, I don't know what that is, but that's future Dorsa's problem. So this is FutureCost of s-prime. And then you might ask, well, what is a? Where does a come from? How do I know what a is? I don't know. I'm gonna pick the a that minimizes this sum. I'm gonna put the min around it. Okay? So FutureCost of s is just going to be equal to the minimum of Cost of s, a plus FutureCost of s-prime over all possible actions. And it's going to be 0 if you are in an end state, if IsEnd of s is true. Okay? So if I already know I'm in an end state, then there is no future cost. That's going to be equal to 0. Otherwise, future cost is just going to be the cost of going from s to the next state, and then the future cost computed from there. Okay? So that is just how one would go about formalizing this as a dynamic programming problem, okay? And then how do I find what s-prime is? Well, I wrote this successor and cost function [NOISE] in my code. Remember, we know how to find the successor given that we are in state s and we are taking action a.
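Written out, the recurrence on the board is:

```latex
\mathrm{FutureCost}(s) =
\begin{cases}
0 & \text{if } \mathrm{IsEnd}(s), \\[4pt]
\displaystyle\min_{a \in \mathrm{Actions}(s)} \bigl[\, \mathrm{Cost}(s, a) + \mathrm{FutureCost}(\mathrm{Succ}(s, a)) \,\bigr] & \text{otherwise.}
\end{cases}
```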
So S prime is just calling that successor function over s and a. All right. So let's go back to some route-finding example. So, so this is a slightly different route-finding example. So let's say that we want to find the minimum cost path going from city 1 to some city n, where, moving forward, we can always just move forward, and it costs c_ij to go from city i to city j. Okay? So this is my new search problem. Okay? So, so this is kind of how the tree would look. So if I wanna draw the search tree for this, I can start from city one, I can end up in city two or three or four. Then if I'm in city two, I can end up in three or four. If I'm in three, I can end up in four; like, this is how it would look. Ah, I can have a much larger version of it. If I'm talking about going to city seven, then I have this type of tree. And by just looking at this tree, you see all these sub-trees just being repeated throughout. If you just look at five, like, future cost of five, it's gonna be the same thing, right? It's just gonna be the same thing throughout. And if I use something like tree search that we have talked about, then I have to go and explore this whole tree, and that's gonna be really time-consuming. So, so the key insight here is future cost, this value of future cost, only depends on the state. Okay? So it only depends on where I am right now. And because of that, maybe I can just store that the first time that I compute future cost of five, and then, in the future, I just look that up and I don't recompute future cost of five. Okay? So, so the observation here is: future cost only depends on the current city. So my state in this case is the current city, and that state is enough for me to compute future cost. Okay? All right.
So, so if you, if you think about what we have talked about so far, like we have thought about like these these search problems where the state we think of it as the past sequence of actions and the history of actions you have taken and all that. But right now for this problem, like state is just current city and that's enough. Okay? So and and because of that, you are getting all these exponential savings in time and space because again, I can compute future cost of five there and collapse that whole tree into this graph and just go about solving my search problem on this graph as opposed to that that whole tree. Right. So, so that's that's where you get the savings from, from dynamic programming. Um, and I just wanna emphasize that again of, let me actually do this. So, so the key idea here is, like I was saying there is no magic happening here. The key idea here is is how to figure out what your state is. It's actually important to think about what your state is. In this case we are, we're assuming a state is summary of all parts, all past actions that we've taken sufficient for us to choose the optimal future. Okay? So, so that's like a mouthful but ah, basically what that means is, the only reason dynamic programming works. And for this particular example we just saw, is the state the way we define it is enough for us to plan for the future. Like I might have a different problem where the state. Like I define a state in a way that it's not enough for me to do a plan for future. But if I wanna use dynamic programming, then I gotta be smart about choosing my state because, because that is the thing that, that decides for the future. So, so for example for this problem, like I might visit city one, then three, then four, and then six, and for solving this particular search problem, I just need to know that I'm in city six. That is enough. Okay? 
But like maybe I have some other problem that requires knowing one, three, four, and six, and because of that maybe I need to know the full history. Okay? So this is where the saving comes from: figuring out what the state is and defining that. Right? All right. So we will come back to this notion of state again and think about the state a little bit more carefully. But before that, maybe we can just implement dynamic programming real quick. All right. So let's go back to our tram problem. I'm back to the tram problem, and let's implement dynamic programming. Okay. So how do we do this? We're basically just writing that math over there into code. That's all you're doing. So we're going to define this future cost. If you're in an end state, we're going to return 0. If you're not in an end state, we're just going to add up cost plus future cost of s-prime. How do we get s-prime? Well, we're gonna call this successor and cost function. So we can get the action, the new state, and the cost. And then you're gonna take the minimum of them over all possible actions. So the minimum of cost plus future cost of the new state. That is literally what we have on the board. Okay? All right. And we're returning the result. So that is future cost. Then what does dynamic programming do there? It should return the future cost of the initial state, right? The start state. And you would return the history if you want. In this case, I'm not returning [LAUGHTER] the history. Okay. So how do I get savings? Well, I gotta put in a cache, right? That's the only way I'm gonna get savings. So, um, that is where I put the cache. And if the state is already in the cache, I'll just use my cache. Otherwise I don't. Any question there? [inaudible]. What's that? Are we getting future costs? How are we getting? Uh, say that again. Sorry, I didn't hear.
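A minimal sketch of the memoized future-cost routine just described, on a tram-style problem. The walk/tram actions (walk: s to s+1, tram: s to 2s) and their costs of 1 and 2 are assumptions for illustration, not necessarily the lecture's exact numbers:

```python
# Sketch: memoized FutureCost on a tram-style problem.

class TransportationProblem:
    def __init__(self, n):
        self.n = n  # target position

    def start_state(self):
        return 1

    def is_end(self, state):
        return state == self.n

    def succ_and_cost(self, state):
        # Returns (action, newState, cost) triples.
        triples = []
        if state + 1 <= self.n:
            triples.append(('walk', state + 1, 1))
        if 2 * state <= self.n:
            triples.append(('tram', 2 * state, 2))
        return triples

def dynamic_programming(problem):
    cache = {}  # state -> future cost; the memoization gives the savings

    def future_cost(state):
        if problem.is_end(state):
            return 0  # no future cost at an end state
        if state in cache:
            return cache[state]
        # min over actions of cost(s, a) + futureCost(succ(s, a));
        # the state only increases here, so the recursion is acyclic
        result = min(cost + future_cost(new_state)
                     for action, new_state, cost in problem.succ_and_cost(state))
        cache[state] = result
        return result

    return future_cost(problem.start_state())
```

For instance, `dynamic_programming(TransportationProblem(10))` returns 6, e.g. via walk, tram, walk, tram: 1, 2, 4, 5, 10.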
So future cost takes some states, but what actually- is there like- uh, do we actually have, like, a function in the menu to calculate future costs or is that like [inaudible]. So future cost is going to be, uh- yeah, so, so we have this function, right? Future cost over state. But you're going to call future cost- so, so, so future cost over state is going to be equal to cost of state and actions, in this function I'm saying all possible actions, try that out, plus future costs of S prime. And S prime comes from the successor and, and, and cost function, uh, successor and cost function. All right. So- and then, yeah- and so, so we do the caching, the proper caching type of way of doing this too. And now we have dynamic programming. So we can basically call this over, uh, our tram problem. So I'm gonna, I'm gonna move forward. Okay. So let's do print solution, dynamic programming over our problem. Uh, you can, again, play around with this. The only way I'm checking this is if it gives me the same solution as backtracking search because I knew how that works, right? So let's just call it on ten. And, yeah, it gave me the same, the same answer. So I can play around with this, okay? All right. So, uh-huh, let's go back. Okay. So one assumption that we have here, to just point out, is we are assuming that this graph is going to be acyclic. So, so that's, that's an assumption that we need to make when we are solving this dynamic programming problem. And, and the reason is, [NOISE] well, we need to compute this future cost, right? For me to compute future costs of S, the S, S prime, I need to, like, have thought about- sorry. For me to compute future costs of S, I need to have thought about future costs of S prime. So there is, kind of, this natural ordering that exists between my state. So if I think about an example where there are cycles, then, then I don't have that ordering, right? If I want to compute, let's say, I want to go from A to D here, and on B, C. 
So if I want to compute future cost of B, I don't really know if I should have computed future cost of A before, or C before, or what order I should have gone in to compute future cost of B. So you actually need to have some way of ordering your states in order to compute these future costs and apply dynamic programming. So that's why we can't really have cycles when we think about this algorithm. But we are going to talk about, uh, uniform cost search, which actually allows us to have cycles, in a few slides. Yes. So what is the run time of the dynamic programming? So the run time of this is actually polynomial in the number of states. So order of n. O of n? Yeah, O of n, where n is the number of states. Yeah. Okay. All right. So let's talk about the idea of states a little bit more, because I think this is actually interesting. All right. So let's just reiterate. What is a state? A state is a summary of all past actions sufficient to choose future actions optimally, okay? So, is everyone happy with what a state is? So now, what we want to do is figure out how we should define our state space. Because, again, this is an important problem, right? Like, how we're defining the state space is the thing that gets the dynamic programming working. So we've got to think about how to do that. So let's go back to this example, and let's just change it a little bit. So this is the same example: I'm going from city one to city n, I can only move forward, and it costs c_ij to go from any city i to city j, and I'm going to add a constraint. And the constraint is, I can't visit three odd cities in a row, okay? So what that means is, um, [NOISE] maybe I'm in city one. And then I went to city three. And then after that, can I go to city seven? No, based on this constraint that I've added, I, like, can't do that, right?
So I want to define a state space that allows me to keep track of these things, so I can solve this new search problem with this new constraint. So, so how should I, how should I do that? [NOISE] So in, in the previous problem, when we didn't have the constraint, our state was just a current city. Like previously, we just cared about the current city. And the reason we cared about the current city is like, is like we are solving the search problem, like, we end up in a city. We need to know how I'm going- where I should go from three. So I should, I should have my current city in general, right? So, so for the previous problem without the constraint, current city was enough. But, but now current city is not enough, right? I actually need to know, like, something about my past, okay? Yes. [inaudible] have a count of how many that's odd states. Yeah. That's actually a very good point. [NOISE] Yeah. And so, so one suggestion is, have a count of how many odd states. Not only maybe, like- and the- maybe the first thing that would come to our mind is something simpler. So maybe we say, well, the state is- maybe I'll write previous city just to be similar to the slide. The state- like, when we say, well, the state is previous city and current city. Okay? So this is one possible option for, for my state, right? Because, because if I have this, if I have this guy as my state, and then that is enough, right? Like if I- my current city is three, I know my previous city was one. I know I shouldn't go to seven, like that's enough for me to make, like, future decisions, okay? But there is a problem with this. Well, what is the problem? So I have n cities, right? So, so current city can take n possible action and n possible states, previous city can also take n possible options, has n possible options. So if I think about the size of my state space, it is n squared. If I decide to choose the state, okay? If I, if I decide to choose the state, I'm going to have n squared states. 
And remember, we are doing this dynamic programming thing, like, we need to actually, like, write down, like, all the- like, how to get from all those states. That's gonna be big. But there is an improvement to this. And that's an improvement that you suggested, which is, I don't actually need to have this whole giant previous city which has n options. I can just have a counter to just know whether the previous city was odd or not. Like, that's enough, right? Like if I- I don't care if it was one or three or whatever. Like, I just care to know if previous city was odd or not. So, so another option for- I'll write it here. Another option for my state is to know if previous was odd or not, okay? And then I need to know my current city again, right? Current city we need that because, like, we need to know how to get from there. And then this brings down my state space, like, how does it bring down my state space? Because, well, what's the size of my state space? This guy can take n possible, uh, states. If my previous city was odd, that's two, right? Like, so I just brought down my state space from something that was n squared to 2n, and, and that's a good improvement. So in general, when you're picking these state spaces, you should pick the minimal, like, sufficient thing for you to make decisions. So it's got to be a summary of all the previous actions and previous things that you need to make future decisions, but pick the minimum one because you're storing these things, and it, it actually matters to pick the smallest one. So, so here is an example of, like, exactly that. So, so my state is now this tuple of whether the previous city was odd or not, and my current city. So if I start at city 1, well, like, I don't have a previous city, and I'm at city one, I could go to city three, and I end up in odd and three. 
I could try to go to city seven, well, that's not possible because now I have listed three states, and, and I end up here, and there are, like, the rest of the tree, you can have any other examples. Yeah. [inaudible]. So, so the way I'm counting this is, how my- so, so my state is a tuple of two things, right? If the previous city is odd or even, I have two options here. It's either odd or even, that's two. And then my current city. And I have n possible options for my current city. It could be city one, city two, city three, so that's n. So I have n options here. I have two options here. That's why I'm saying my whole state space is two times n, okay? All right. Okay. So let's try out this example. Let's not put it in. Uh, just talk to your neighbors about this, and then maybe, if you have ideas just let me know in a minute. So- okay. So what is the difference here? So we're traveling from city one to city n, and then the constraint is changed. Now, we want to visit at least three odd cities. So that's what we wanna do. And then the question is, what is the minimal state? Talk to your neighbors. [NOISE] All right. Any ideas? Any ideas? [BACKGROUND] What is a possible state? Like it- don't worry about the minimal even, like for now. Like what do I need to keep track of? Number of odd cities. Number of, number of odd cities? Yeah. Okay. So- and is that it? Do I need to just know the number of odds cities? Um, or number of odd is about your, uh, [OVERLAPPING] So number- so, so what I meant is I also need to have current city, right? So, okay. So one possible option for this new example, I'm gonna write that here, is I want to visit at least three odd cities, I also need my- to know my current city, for any of these types- like, not any of these types of problems, for these particular problems that I've defined here, I need to know where I am. So I need to know what my current city is. So- so that is, like, that is given what I need to have that, okay? 
So I want to see at least three odd cities. So one possible option is to just have a counter and keep counting number of odd cities, okay? So this could be one potential state, okay? Yes? Do the cities have to be different or it could be one, three, one? So, um, okay, so the question is do the cities need to be different? The way we are defining the problem is we are moving forward. If I'm in one, like, I can just just move forward. I can't like stay at one or I can't, like, go back. So- so we're always moving forward. But when we talk about the- the state space, we are talking about the more general, like, setting. Like, some- some of that 2N might not even be possible, but- but that's the way we are counting, okay? All right. So- so this is one option, but I can actually do better than this. Yes? [inaudible] you need at least three odd cities, and then you need at least two odd cities, then you need at least one odd city and then you're- And then you're done. Right. So- so a suggestion there is we can- we can have, like, you can- you can start, like, saying you need at least three odd cities, then you need at least two odd cities, then you need at least one- one odd city and then you're done. And one way of formalizing that, that's exactly right, right? I only care if I have four odd cities now, or five odd cities, like, as long as I have like above three, that's- that's good enough, right? One odd city, two odd city, three odd city, above that is just three plus, like- like that's enough for me, okay? So if I have this, then the state space here is going to be N options here, and number of odd cities, it's around N over 2, so it's going to be N squared over 2. 
But if I use this new suggestion, where I don't keep track of four, five, six, seven, I just keep track of one, two, and three-plus, then my state space ends up becoming 3 times N, and I can formally write that as: S is equal to the minimum of the number of odd cities and three, and then the current city, you need the current city. And with this state space, the size is equal to 3N, okay? So I just, again, brought down N squared to N, and that's a nice improvement. Yes? Do you not also need an option for zero odd cities specific to [inaudible] Zero. We're starting from city one, so we're already counting that in, but yeah, like, if you have zero odd cities, that is a good point too. All right. So I've gotta move. Okay, so, um, that was that. This is how it looks. Like, you can think of your state space like this, again as a tuple of whether I visited one, two, or three-plus odd cities, and then the cities. I have another example here, you can think about this later and, yeah, work it out at home. But, uh, basically the question is, again, you're going from city one to N, and you want to visit more odd cities than even cities. What would be the minimal state space? But we can talk about it offline. So the summary so far is that a state is going to be a summary of past actions sufficient to choose future actions optimally. And then dynamic programming, it's not doing any magic, right? It's using this notion of state to bring down this exponential time algorithm to a polynomial time algorithm, with the trick of using memoization, and with the trick of choosing the right state, okay? And we have talked about dynamic programming and how it only works for acyclic graphs. And now, we want to spend a little bit of time talking about uniform cost search, uh, and how that can help with the cycles. So if you guys have seen Dijkstra's algorithm, this is very similar to Dijkstra's, like, yeah. So it's basically Dijkstra's. But- all right.
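The capped-counter state space just summarized can be written down as a successor function, a sketch under the assumption that cities are numbered 1 to n and we only move forward:

```python
# Sketch of the capped-counter state for "visit at least three odd
# cities": a state is (min(numOddVisited, 3), currentCity), so there
# are about 3n states instead of ~n^2/2 with an uncapped counter.

def successors(state, n):
    """Yield the states reachable by moving forward one city."""
    num_odd, city = state
    for next_city in range(city + 1, n + 1):
        new_odd = min(num_odd + (next_city % 2), 3)  # cap the counter at 3
        yield (new_odd, next_city)

def is_end(state, n):
    num_odd, city = state
    return city == n and num_odd >= 3  # done only with 3+ odd cities seen

# Starting at city 1 (itself odd), the start state is (1, 1).
```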
So let's- let's actually talk about this. So- so the observation here is that when we- when we think about the cost of getting from start state to some s prime, well, that is going to be equal to cost of going from s to s prime and then some past cost of s, okay. And then when dynamic programming, let's make sure that we have this ordering and these things are computed in order, so we're not worried about, like, visiting the state, like, multiple times. But- but in- in uniform cost search, we might visit a state multiple times, and if you have cycles, we don't know what order to go. But the order we can go is we can actually compute a past cost- a suggested past cost, and- and basically, go over the states based on increasing past cost, okay? So, um, let me actually- yeah, so- so uniform cost search, what it does is it enumerates states in an order of increasing past cost. So- and- and in this case, we need to actually make an assumption here, we need to assume that the- the cost is going to be non-negative. So- so I'm making this assumption for uniform cost search. So here is an example of uniform cost search running- oh, we don't have internet, I just- yeah, there is a video of uniform cost search running in action. If I have time, I'll connect to internet and get it working. But- so- so let's talk about the high level idea of uniform cost search. So in uniform cost search, we have three sets that we need to keep track of. One is explored set, which is the states that we have found the optimal path. These are the states that we are sure, like, how to get to, we have computed the best path possible to get there, we are, like, done with them, okay? Then we have another set called a frontier, where this frontier are the states that we have seen, we have computed like a cost of getting there, like we know, somehow, how to get there and what would be the cost, but we're just not sure about it, like, like, we're not sure if that was the best way of getting there, okay? 
So- so the frontier, you can think of it as a known unknown. I know they exist, but, like, I actually, I'm not sure what's the optimal way of getting there. And then finally, we have this unexplored part of states. And these unexplored part of states, I haven't even seen them yet, I- I don't even know how to get there, and you can think of it as more of an unknown unknown. So- so that's, like, how you would think about these three. So let's actually work out an example for uniform cost search. I'm actually going to do this one. So- so I'm just gonna show how uniform cost search runs on this example. So I said we are going to keep track of three sets: unexplored, frontier, and then explored. Explored. Okay? All right. So everything ends up in unexplored at the beginning, A, B, C, and D. And what I wanna do is I wanna go from A to D, that- that's what I wanna do, okay? So I wanna find the minimum path cost- path- minimum cost path to get from A to D, given that I have this graph, okay? So what I'm gonna do is I'm gonna take my initial state, that's A. I am going to put A on my frontier, and it costs zero to get to A because I'm just starting at A, okay? So that's on my frontier, then in the next step, what I'm gonna do is I'm going to pop off the thing with the lowest cost from my frontier. There's one thing on my frontier, I'm just gonna pop off that one thing off my frontier, I'm gonna put that to explored, the cost of getting to A is 0. And then, what I'm going to do is after popping it off from my frontier is, I'm gonna see how I can get from A to any other state. So from A, I can get to B, that's one option, and with the cost of 1. So from A, I can go to B with a cost of 1. Where else can I go? I can go to C with a cost of 100. Okay? So what I just did is I moved B from unexplored to frontier, and then I- I know how I- to get there from A, and I moved C to the frontier, and I know how to get from there. Okay? 
So now it's the next round, I'm looking at my frontier, A is not on my frontier anymore, it's in explored. And I'm going to pop off the thing with the best cost off my frontier. Well, what is that? That's B. So I'm going to move B to my explored. The way- the best way to get to B, I already know that, right? That's from A to B. Everything is good. Okay? So now that I've popped off B from my frontier, I'm gonna look at B and see what states I can get to from B. From B, I can go to A, but A is already in explored, like, I already know the best way to get to A, so- so there is no reason to do that. From B, I can get to C, and if I want to get to C, then I can actually get to C with the cost of 1 plus whatever cost of B is already, 1. So what I'm gonna do is I'm going to erase this, because there is a better way of getting there, and that's from B, okay? And then, from B, I can get to D. So I'm gonna move D from unexplored to frontier. I can get to it from B. And then, how do I get to it from B? There's a cost of 101, right? Because 100 plus cost of getting to that, okay? All right. So I'm- I'm done exploring everything I can do from B. Going back to my frontier again. So these two are not on my frontier. I just have C and D on my frontier. I'm gonna pop off the thing with the best cost, that is C. I'm gonna move that to explored with a cost of two, and the way to- the best way to get that is from B, okay? So we're done with C. And then, we're gonna see where we can go from C. From C, I can go to A. Well, that's done, that's already on the explored- in- in the explored set, I'm not gonna touch that. Similar thing with B, already in the explored, don't need to worry about that. From C, I can get to D, right? And if I want to get to D from C, well, what would be the cost of that? It would be 2 plus 1. So I can update this and have 3. And I can update the way to get to D from here. And then, we're done, we go to frontier. 
The only thing that's left on the frontier is D. I'm going to just pop that off, and then I'm going to add that to explored. And that is 3. And that's what I have in my explored. So the way to get from A to D is by taking this route, and it costs 3. So A, B, C, and D. Okay? Is that clear? All right. Okay. So there are two slides left and they're probably gonna kick us out soon, so I'll do this next time. So yeah, of the two slides left, one is going to just go over the pseudo-code. So take a look at that, the code is online. And there's a small theorem that says this is actually doing the right thing. I'll talk about that next time. |
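The search just traced by hand can be sketched in code (a sketch, essentially Dijkstra's algorithm; the edge costs reproduce the board example, and the reverse-edge costs are assumed symmetric):

```python
import heapq

# Sketch of uniform cost search, following the
# explored / frontier / unexplored picture from the lecture.

def uniform_cost_search(start, is_end, succ_and_cost):
    frontier = [(0, start)]  # priority queue keyed by past cost
    explored = {}            # state -> optimal past cost (settled states)
    while frontier:
        past_cost, state = heapq.heappop(frontier)
        if state in explored:
            continue  # already settled with a smaller past cost
        explored[state] = past_cost
        if is_end(state):
            return past_cost
        for new_state, edge_cost in succ_and_cost(state):
            if new_state not in explored:
                heapq.heappush(frontier, (past_cost + edge_cost, new_state))
    return None  # no path: the rest of the states stayed unexplored

# The board example (reverse-edge costs assumed symmetric).
graph = {'A': [('B', 1), ('C', 100)],
         'B': [('A', 1), ('C', 1), ('D', 100)],
         'C': [('A', 100), ('B', 1), ('D', 1)],
         'D': []}
```

Calling `uniform_cost_search('A', lambda s: s == 'D', graph.get)` returns 3, matching the A, B, C, D trace above.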
Stanford_CS221_Artificial_Intelligence_Principles_and_Techniques_Autumn_2019 | Search_2_A_Stanford_CS221_Artificial_Intelligence_Autumn_2019.txt |

Okay. So, Hi, everyone. So, uh, our plan for today is to continue talking about search. So, so that's, uh, what we're going to start doing, finish off some of the stuff we started talking about last time, and then after that, uh, switch to some of the more interesting topics like learning. So a few announcements. Um, so the solutions to the old exams are online now. So if you guys wanna start studying for the exam, you can do that. So, so start looking at some of those problems, I think, that would be useful. Um, actually, let me start with the Search 2 lecture because I think that might be, like, that has a, a review of some of the topics we've talked about. So it might be easier to do that. Also, I'm not connected to the network, so we're not gonna do the questions, uh, or show the videos because I have, I have a hard time connecting to the network in this room. Okay. All right. So, so let's start- continue talking about search. Uh, so if you guys remember, uh, we had this, this city block problem. So let's go back to that problem and let's just try to do a review of some of the, some of the search, search algorithms we talked about last time. So, uh, so suppose you want to travel from City 1 to City n only going forward, and then from City n you wanna go back to City 1 going only backwards, okay? So, so you- so the problem statement is kind of like this. You're starting in City 1, you're going- you're going forward and you're getting to some City n. So maybe we're doing that on this. And then after that, you wanna go backwards and get to, get to City 1 again. So you go into some of these cities, okay? So, so that's the goal, and then the cost of going from any city i to city j is equal to cij, okay? So, so that's it.
So, the question is: which one of the following algorithms could you use to solve this problem? And it could be multiple of them. So we have depth-first search, breadth-first search, dynamic programming, and uniform cost search. And these were the algorithms we talked about last time. So, uh, maybe just talk to your neighbors for a minute and then we can do votes on each one of these. Yes, question? Just needed to ask [inaudible] The [OVERLAPPING]? Okay. Let me check that again. Thank you. [BACKGROUND] All right, so let's maybe start talking about this. So how about depth-first search? Like, how many people think we can use depth-first search? How many people think we can't use depth-first search? There's a very good split. [LAUGHTER] So, some of the people think we can't use depth-first search; what are some reasons? Maybe just call it out. For depth-first search, the assumption was that the cost is zero. Yes, that's right. Yeah, so here we are basically going from City 1 to City n. Each one of these edges has a cost of cij. I'm just saying cij is greater than or equal to 0. That's the only thing I'm saying about cij. But if you remember depth-first search, you really wanted the cost to just be equal to 0, because, if you remember that whole tree, the whole point of depth-first search was I could just stop whenever I could find a solution. And we were assuming that the costs of all the edges are just equal to zero. So we can't really use depth-first search here, because our cost is not 0. So, now that you know that reasoning, how about breadth-first search? Can we use breadth-first search? Yes? [inaudible] moving from City 1 to City n, and then a new problem from City n? So that's a good point. So what you're suggesting is, can we think about the problem as going from City 1 to City n?
And then after that, like introduce like a whole new problem that continues that and starts from City n and goes to City 1. Let me get back to that point like in a second, because like you could potentially think about that -actually like that might be an interesting way of thinking about it. But, but irrespective of that I can't use depth first-search. So I'm -so far I'm just talking about depth first-search. Irrespective of how I'm looking at the problem, the costs are gonna be uh, non-zero. So because the costs are going to be non-zero, I can't use depth-first search. So, so let's talk about that first. So how about breadth first-search? Can I use breadth-first search? [inaudible] That's exactly right. So we cannot use breadth-first search here because for breadth first-search. If you remember, you really wanted all the costs to be the same. They didn't need to be 0, but they needed to be the same thing because then you could just go over the levels. And here I'm not- like I'm not saying I'm not putting any restrictions on cij being the same thing. Okay? So now let's talk about dynamic programming. How about dynamic programming? Can we use dynamic programming? All right, so that looks right, right you like we could use dynamic programming here. Everything looks okay, cij's are positive, looks fine. Um, how about, um, actually one question? So, so don't they have cycles here? We kind of, briefly talked about this already. So, don't I have like this cycle here? Uh, we can think about possibly going from one to n and then n to one. Yes, so this is a suggestion that, that we have already like heard twice. So we could actually use dynamic programming here even if it kinda looks like we have a cycle and the reasons we can kinda use this trick were we can basically draw this out again. And for going forward basically go all the way here, and then after that we're going backwards, kind of include the directionality too. 
So all I'm doing is extending the state space to be not just the city, but the city plus the direction we're going. If I'm in city 4 here, it's city 4 going forward. And if at some point in the future I'm in city 4 again, it's city 4 going backwards. So I keep track of both the city and the direction, and when I do that, I'm breaking the cycle: there are no cycles anymore, and I can actually use dynamic programming. Does that make sense? And then uniform cost search. That also sounds good, right? You could actually use uniform cost search; it doesn't matter whether you have cycles or not, and we have non-negative costs. So we could use uniform cost search. Okay? All right, so this was just a quick review of some of the things we talked about last time. Another thing we talked about last time was this notion of state. We started talking about tree search algorithms, and at some point we switched to dynamic programming and uniform cost search, where we don't need this exponential blow-up. The reason behind that was memoization, and in addition to that, this notion of state. So what is a state? A state is a summary of all past actions sufficient to choose future actions optimally. So we need to be really careful about choosing our state. In this previous question, we looked at past actions: if you look at all the cities you go over, you can be in city 1, then 3, then 4, 5, 6, and city 3 again. So in terms of state, the thing you want to keep track of is what city you are in; but in addition, you want the direction, because you need to know where you are and whether you're on your way back. Okay?
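The direction trick above can be sketched in a few lines of Python. This is my own illustrative version, not the course code: I assume a simple chain of cities where every forward edge costs 1 and every backward edge costs 2, just to show that the extended state (city, direction) makes the recursion acyclic, so plain memoized dynamic programming works.

```python
from functools import lru_cache

n = 4
# Illustrative edge costs (the lecture allows arbitrary c_ij >= 0):
cost = {(i, i + 1): 1 for i in range(1, n)}        # forward edges i -> i+1
cost.update({(i + 1, i): 2 for i in range(1, n)})  # backward edges i+1 -> i

@lru_cache(maxsize=None)
def future_cost(city, direction):
    # State is (city, direction): 'fwd' while heading to city n,
    # 'bwd' once heading back. This state space has no cycles.
    if direction == 'bwd' and city == 1:
        return 0                                   # back at city 1: done
    if direction == 'fwd':
        if city == n:                              # reached city n: turn around
            return future_cost(n, 'bwd')
        return cost[(city, city + 1)] + future_cost(city + 1, 'fwd')
    return cost[(city, city - 1)] + future_cost(city - 1, 'bwd')

print(future_cost(1, 'fwd'))  # cost of going 1 -> n, then n -> 1
```

With these assumed costs the round trip is 3 forward edges at cost 1 plus 3 backward edges at cost 2, total 9.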
We did a couple of examples around that, trying to figure out the specific notion of state for various problems. All right. So we started last time talking about search problems and formalizing them. If you remember our paradigm of modeling, inference, and learning, we started modeling search problems using this formalism where we defined a starting state, s_start. Then we talked about Actions(s), a function over states which returns all possible actions. Then we talked about the cost function: Cost(s, a) takes a state and an action and tells us the cost of that edge. Then the successor function Succ(s, a) takes a state and an action and tells us where we end up. And we had this IsEnd(s) function that just checks whether you're in an end state or not. So these were all the things we needed to define a search problem, and we tried that on a couple of examples, the city example and so on. Okay? And after talking about these different ways of thinking about search problems, we started talking about various types of inference algorithms. We talked about tree search: depth-first search, breadth-first search, depth-first search with iterative deepening, and backtracking search. And then we talked about graph search algorithms: uniform cost search and dynamic programming. Last time we did an example of uniform cost search, but we didn't get to prove its correctness. So I want to switch to some of last time's slides to go over this quick theorem, and after that switch back to this lecture. Okay. So, uniform cost search. If you remember what we were doing in uniform cost search, we had three different sets.
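As a rough sketch of this formalism, here is one way the interface could look in Python, instantiated for the walk/tram transportation problem from last time. The class and method names here are my own choices, not necessarily the course's exact code, and I fold Actions, Cost, and Succ into a single succ_and_cost method returning (action, new state, cost) triples, which is how the lecture's code uses them.

```python
class TransportationProblem:
    """Walk from block s to s+1 (cost 1) or tram from s to 2s (cost 2)."""

    def __init__(self, N):
        self.N = N  # number of blocks; goal is to reach block N

    def start_state(self):        # s_start
        return 1

    def is_end(self, state):      # IsEnd(s)
        return state == self.N

    def succ_and_cost(self, state):
        # Actions(s), Succ(s, a), and Cost(s, a) bundled together:
        # returns a list of (action, new_state, cost) triples.
        results = []
        if state + 1 <= self.N:
            results.append(('walk', state + 1, 1))
        if 2 * state <= self.N:
            results.append(('tram', 2 * state, 2))
        return results

print(TransportationProblem(10).succ_and_cost(3))
```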
We had an explored set, which was the set of states we have visited, where we are sure how to get to them: we know the optimal path and everything about them. We had this frontier set, which was the set of states we have reached, but where we're not sure whether the cost we have is the best cost; there might be a better way of getting to them that we don't know yet. And then we had the unexplored set of states, which are the states we haven't seen yet. So we did this example where we started with all the states in the unexplored set, then moved them into the frontier, and from the frontier moved them to the explored set. This was the example we did on the board. Okay? And we realized that even if we have cycles, we can actually run this algorithm, and we ended up finding the best path being A to B to C to D, which costs 3. So let's actually implement uniform cost search; I think we didn't do this last time. Going back to our code: we have already written dynamic programming and backtracking search, so now we can try to implement uniform cost search. For that, we need this priority queue data structure. This is in a util file; I'm just showing you what functions it has: an update function and a removeMin function. It's just the data structure I'm going to use for my frontier, because I'm popping things off my frontier. All right. So let's go back to uniform cost search. We're going to define this frontier, where we are adding states from the unexplored set to the frontier. Okay?
And it's going to be a priority queue; we have that data structure because we've just imported util. We're going to add the start state with a cost of 0 to the frontier; that's the first thing we do. Then, while the frontier is not empty, we remove the minimum past-cost element from the frontier: pop off the best thing that exists there and move it to the explored set. When I pop something off the frontier, I get the past cost and the state. If that state is an end state, then we just return the past cost with the history; I'm not including the history for now, I'm just returning the cost. After popping this state off the frontier, the next thing we were doing was adding its children. The way we do that is with the succAndCost function we defined last time: we iterate over the (action, new state, cost) triples from succAndCost, and update our frontier by adding these new states to it. The cost we add is cost plus past cost, if that is better; that's what the frontier's update function does. And that's pretty much it: that is uniform cost search. You add stuff to the frontier, you pop stuff off the frontier, and that way you move things from the unexplored set to the explored set. So let's just try that out. Looks like it's doing the right thing: it got the same value as dynamic programming, so it seems to work. [NOISE] This code is also online, if you want to take a look at it later. And here's also the pseudocode of uniform cost search. Okay?
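The loop just described can be sketched as a self-contained Python function. Two caveats: the course's util.PriorityQueue supports an in-place update, whereas the standard-library heapq does not, so this sketch uses the common "lazy deletion" trick of pushing duplicates and skipping stale entries; and the A-B-C-D graph costs below are my reconstruction of the board example, chosen so the best path A to B to C to D costs 3 as in the lecture.

```python
import heapq

def uniform_cost_search(start, is_end, succ_and_cost):
    frontier = [(0, start)]   # heap of (past cost, state); start has cost 0
    explored = set()
    while frontier:
        past_cost, state = heapq.heappop(frontier)  # min past-cost element
        if state in explored:
            continue          # stale duplicate: a cheaper copy was already popped
        explored.add(state)   # move from frontier to explored
        if is_end(state):
            return past_cost  # first end state popped has the optimal cost
        for action, new_state, cost in succ_and_cost(state):
            if new_state not in explored:
                heapq.heappush(frontier, (past_cost + cost, new_state))
    return None               # no path to an end state

# Assumed reconstruction of the board example:
graph = {'A': [('toB', 'B', 1), ('toC', 'C', 3)],
         'B': [('toC', 'C', 1)],
         'C': [('toD', 'D', 1)],
         'D': []}
print(uniform_cost_search('A', lambda s: s == 'D', lambda s: graph[s]))
```

Note the graph has two routes to C (directly for 3, or via B for 2); the stale (3, 'C') heap entry is skipped when popped, which is exactly the role the update function plays in the lecture's version.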
Okay. Is there a question right there? What's the runtime of uniform cost search? [inaudible] That's a good question. The runtime of uniform cost search is order of n log n, where the log n comes from the bookkeeping of the priority queue, and you're going over all the edges. So if you think of n here as the edges, in the worst case, with a fully connected graph, it's technically n squared log n. But in practice we have [inaudible] graphs, so people usually just say n log n, where n is the number of states you have explored. And it's actually not all of the states; it's only the states you have explored. Okay? Dynamic programming is order of n, so technically dynamic programming is slightly better, but it really depends. Yeah, you go first and then I'll get back to you. Is the only difference between this and Dijkstra's that you just don't have all [inaudible] beginning? The question is: what's the difference between this and Dijkstra's algorithm? They're very similar. The only difference is that this is solving a search problem, so you're not exploring all the states: when you get to the solution, you just return it. In Dijkstra's, you're basically exploring all of the states in your graph. What's your question? [inaudible] All right, sounds good. Okay. So I just want to quickly talk about this correctness theorem. For uniform cost search we actually have a correctness theorem, which basically says uniform cost search does the right thing.
What this theorem says is: if you have a state s that you are popping off the frontier and moving to the explored set, then its priority, which equals PastCost(s), is actually the minimum cost of getting to s. So, say this is my explored set, right here is my frontier, and I have a start state. And I have some state s that I've decided to pop off the frontier into the explored set, because it has the best past cost. What the theorem says is that this path I have from s_start to s is the shortest possible path to s. Okay. The way to prove that is to show that the cost of this path is lower than that of any other path from s_start to s. So let's say there is some other path, this green one, that goes from s_start to s some other way. To get to s, it has to leave the explored set at some state t, go to some state u, and then from u go on to s; u and s could even be the same state. The point is: any other path to s needs to leave the explored set at some state t. Okay. So I want to show that the cost of the green line is greater than or equal to the cost of the black line. All right. What is the cost of the green line? It's the cost of getting to t, plus the cost of t to u, plus the cost of u to s. So I can say this cost is greater than or equal to Priority(t), because that is the cost of getting to t, plus Cost(t, u). I'm just dropping the last part, the u-to-s cost, which I can do because costs are non-negative. So the cost of green is at least Priority(t) plus Cost(t, u). Okay.
Well, what does that equal? Priority is just a number you read off the priority queue, so that is equal to PastCost(t) plus Cost(t, u). And this value is greater than or equal to Priority(u). Why is that? Because if u is in my frontier, I've visited u, so I already have some priority value for u. And the value I've assigned as the priority of u is either equal to PastCost(t) plus Cost(t, u), because I've seen that edge through my frontier updates, or it's something even better that I don't know yet. So Priority(u) is less than or equal to PastCost(t) plus Cost(t, u). Okay. And what do I know about Priority(u) versus Priority(s)? I know Priority(u) is greater than or equal to Priority(s). Why? Because I'm popping off s next, not u: I always pop off the thing with the least priority value, and that's s. And Priority(s) is equal to the cost of the black line. Okay. So that was a quick proof of why uniform cost search always returns the minimum-cost path. [NOISE] All right, let's go to the slides again. Just a quick comparison between dynamic programming and uniform cost search. Dynamic programming, as we discussed, doesn't allow cycles, but the action costs can be anything, negative or positive, and its complexity is order of n. Uniform cost search can handle cycles, which is cool, but the costs need to be non-negative, and it's order of n log n.
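The chain of inequalities in the argument above can be written out compactly, with green denoting the alternative path that leaves the explored set at t via u, black the path to the popped state s, and c(a, b) the edge cost from a to b:

```latex
\begin{align*}
\text{Cost(green)} &= \text{PastCost}(t) + c(t,u) + \text{Cost}(u \to s) \\
  &\ge \text{Priority}(t) + c(t,u)
     && \text{(drop } u \to s\text{; edge costs are nonnegative)} \\
  &= \text{PastCost}(t) + c(t,u)
     && \text{(priority of an explored state is its past cost)} \\
  &\ge \text{Priority}(u)
     && \text{(frontier stores the best known cost to } u\text{)} \\
  &\ge \text{Priority}(s)
     && \text{(we popped } s\text{, the frontier minimum)} \\
  &= \text{PastCost}(s) = \text{Cost(black)}.
\end{align*}
```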
And if you end up in a situation where you have cycles and your costs are actually negative, there is another algorithm called Bellman-Ford, which we are not covering in this class, but which addresses that case. Okay. All right, how am I doing on time? Okay. So that was this idea of inference. We now have a good set of ways of doing inference for search problems once we've formalized them. The plan for this lecture is to think about learning: how are we going to do learning when we have a search problem that is not fully specified, where there are things in the search problem, like the costs, that we want to learn? That's the first part of the lecture, and towards the end we're going to talk about a few other algorithms that make things faster, smarter ways of speeding things up: A-star and some relaxation-type strategies. Okay. All right. So let's go back to our transportation problem. We had a start state, and we can either walk, which takes us from state s to state s plus 1 at a cost of 1, or take a magic tram that takes us from state s to state 2s at a cost of 2, and we want to get to state n. We can formalize that as a search problem; we saw this last time. We can find the best path to get from state 1 to any state n, like walk, walk, tram, tram, tram, walk, tram, tram; that's one potential optimal path. But the thing is, the world is not perfect. Modeling is actually really hard; it's not that we always have this nice model with everything specified.
And we could end up in scenarios where we have a search problem and we don't actually know the costs of our actions: we don't know the cost of walking or the cost of the tram. But maybe we actually have access to this optimal path. Maybe I know the optimal path is walk, walk, tram, tram, tram, walk, tram, tram, but I don't know the costs. So the point of learning is to learn these cost values based on this optimal path that we have: I want to learn that the cost of walking is 1 and the cost of the tram is 2. And this is actually a common problem in machine learning in general. For example, you might have data on how a person does something, say, how a person grasps an object. I have no idea what cost function the person was optimizing when grasping the object, but I have the trajectory: I know the path they took when they picked it up. So if I have access to that path, I can actually learn the cost function they were optimizing, and then I could put that cost function on a robot that does the same thing. Question? [inaudible] That's a good question. The question is: is it possible to have multiple solutions here? Yes, we're actually going to see that later, what sort of solutions we get; there can be cases where we have multiple solutions. The ratio is the thing that matters: if walk is 1 and tram is 2, or walk is 4 and tram is 8, you get the same sort of behavior. And it also depends on what sort of data you have, whether your data allows you to recover the true solution. We're going to talk about all these cases, okay? All right. Okay.
So if you think about it, the search problem we were trying to solve, the inference problem, was: you are given a search formulation and a cost, and the goal is to find the optimal sequence of actions, the shortest or best path in some sense. This is a forward problem: search is the forward problem, where you're given a cost and you find a sequence of actions. It's interesting, because learning is in some sense an inverse problem, the inverse of search: if you give me that best sequence of actions, can you figure out what the cost is? So you can think of learning as the inverse problem of search, and we're going to address that. I'm going to go over one example to talk about learning, and I'm actually going to use the notation of the machine learning lectures we had at the beginning, basically last week. So, let me draw the scheme. [NOISE] Say we have a search problem without costs, and that's our input. We are framing this problem of learning as a prediction problem. If you remember prediction problems, we had an input x; in this case our input is a search problem without costs. Okay? And then we have outputs: in this case my output y is the optimal sequence of actions one gets, the solution path.
And what I want to do, if you remember machine learning, is find this predictor, this function f, that takes an input x, computes f(x), returns the solution path, and generalizes to other settings. That was the idea we explored in machine learning, and we want to do the same thing here. So let's start with an example; I'm going to draw it here. Say we are in city 1, and we walk to city 2. From there, I have two options: I can keep walking to get to city 4, so walk, walk, walk; or I can take the tram from city 2 and end up in city 4. And the thing is, I don't actually know the costs of these actions: I don't know the cost of walk or the cost of tram. But one thing I know is that my solution path, my y, is equal to walk, walk, walk. So one way to go about this is to start with some initialization of these costs. For the way we're defining these costs, I'm going to use w, because I want to use the same notation as the learning lectures: w is the weight of each one of my actions. I have two actions, walk and tram, so w of action 1 is w(walk), and w of action 2 is w(tram). I'm defining these weights just as a function of the action. This could technically be a function of states and actions, but right now I'm simplifying and saying these w values, the costs, depend only on my action; the cost of going from 1 to 2 doesn't depend on what state I'm in.
You could imagine settings where it also depends on what city you are in, okay? So under that scenario, what is the cost of y? It's w(walk) plus w(walk) plus w(walk). Okay? So what I'm suggesting is: let's just start with something, let's just start with these weights. I'm going to say walking costs 3, and it's always going to cost 3; again, that's because my weights only depend on the action, not on the state. And I'm going to say, why not, the tram costs 2. Okay? This doesn't look right, but let's just assume it for now. So now what I want to do is update these weights, update these values, in a way that gets me the optimal path that I have, this walk, walk, walk. How can I do that? I started with these random initializations of the weights. Now I can try to figure out what the optimal path is based on these weights. So what is my prediction y', my prediction of the optimal path based on the weights I've just set up? It's walk, tram, because that costs 5 while walk, walk, walk costs 9. So with these random weights I've just come up with, I'm going to pick walk, tram; that is my prediction. Okay? So now what we want to do is update our w's based on the fact that our true label is walk, walk, walk and our prediction is walk, tram. And the algorithm that does this does the simplest thing possible: it first looks at the true path. Okay?
So the weights are starting from: I decided walk is 3 and tram is 2, and I'm going to update them. I'm going to look at every action in the true path, and for every action there, I'm going to decrease its weight by 1. Why? Because I don't want to penalize it: this is the true thing, and I want the weight of the true actions to be small. So I see walk; its weight was 3, and I decrease it by 1, making it 2. I see walk again: subtract 1. I see walk again: subtract 1, ending at 0. Okay? Now I go over my prediction, and for every action I see there, I increase the weight by 1. I see walk here, so I bring it up by 1; so for walk it was subtract, subtract, subtract, then add 1 because it appears in my y'. And then I see tram, so I bring tram's weight up by 1, ending at 3. So my new weights: the weight of walk became 1 and the weight of tram became 3. Okay? And now I can repeat this and see whether it gets me the optimal solution or not. If I run my search algorithm with these weights, walk, walk, walk costs 3 and walk, tram costs 4. So my new prediction is walk, walk, walk, the same as the true path. My weights are not going to change; I've converged. Yes. Is it always one? So I'm talking about a very simplified version of this, but yes, here it is always 1. The very simplified version is the one where the w's just depend on actions.
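The update just walked through on the board can be written as a couple of loops. This is just the action-only rule from the example (subtract 1 per action in the true path, add 1 per action in the predicted path), starting from the same weights; the function name is my own.

```python
from collections import defaultdict

def perceptron_update(w, true_path, predicted_path):
    # Decrease the weight of every action on the true path...
    for action in true_path:
        w[action] -= 1
    # ...and increase the weight of every action on the predicted path.
    for action in predicted_path:
        w[action] += 1
    return w

w = defaultdict(int, {'walk': 3, 'tram': 2})
perceptron_update(w, ['walk', 'walk', 'walk'], ['walk', 'tram'])
print(dict(w))  # walk: 3 - 3 + 1 = 1, tram: 2 + 1 = 3
```

Note that when y and y' agree, the subtractions and additions cancel exactly and the weights stop changing, which is the convergence observed in the example.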
If you make the weights depend on states and actions, there is a more general form of this, called the structured perceptron algorithm. We'll briefly talk about the version with state-action features too, but for this case we just depend on the action. You're literally bringing it up by 1, or by whatever amount; whatever you bring it up by here, you bring it down by the same amount, so it's plus and minus a, for whatever a is. There's a question. [inaudible] why do we do the plus 1 after we do all the minus 1s? So why am I doing the minus 1s? I'll get to that. When I look at y here, this is the thing that I really wanted, so when I see walk, I realize that walking was a good thing and I need to bring down its weight. But if the weights I already had knew that walking is pretty good, I should cancel that out. That's why we do the plus 1: at this stage my prediction also said walk, so if I'm subtracting for it, I should add for it too, to cancel out. But right here, for the last steps, I didn't know walking is good, so I bring down the weight of walk and bring up the weight of tram. [inaudible] Yeah. So I mistakenly thought tram is the way to go; to avoid that next time, I make the cost of tram higher so I don't take that route anymore. And there's a question there. So the only reason we add for y' is because we know y' is different from y? Yes. But then what if we have a long sequence and y' differs from y in only one small location; would that change the weights sufficiently? Yeah, so you're asking:
okay, if my y and y' are almost the same thing, walk, walk, walk and so on, and only at the very end are they different. Yeah. For that last one we're just adding and subtracting one, right? So it does address that, and you can run it until the sequences are exactly the same, so you don't have any mistakes. Yeah. There's a question back there. Does it matter if our new costs become negative? It depends on what sort of search algorithm you're using. It's fine if you're using dynamic programming: I can have a negative cost here and just call dynamic programming with it at the end of the day, and that's fine. Yeah, it's fine if the cost becomes negative. There's a question. In this problem we want to find the true costs for walk and tram, but the end result this algorithm got is 1 for walk and 3 for tram, while the real result in the previous example was 1 and 2. Right, yes. So the question is: we got 1 and 3 here; is this actually right? When we defined this tram problem, we said walking costs 1 and the tram costs 2, but we never recovered that. Well, the reason is that the solution we get here is just based on our training data. If my training data is just walk, walk, walk, this is the best I can do, and I converge to a solution that is consistent with that data: I make no mistakes on it. If I have more data points, I keep running this on other training data, and then I might converge to a different thing. Is there any rule for initializing the weights?
I'm assuming the further we are from the actual truth, the longer it's going to take to converge. Okay, so the question is how do we initialize. In the natural version of the algorithm you just initialize with 0; we initialize everything to 0. It's actually not that bad, because you just have this sequence, and in the more general case you compute a feature value, you compute the whole thing, and you do one single subtraction. So it's not that costly to do this. Yeah. [inaudible] know the path for a given cost. If you have that input, can we incorporate that into the algorithm? So you're asking: if we have some prior knowledge about the cost, can we incorporate it? That is interesting. In this current format, if you have some prior knowledge, maybe your prediction is going to be better, and based on that you don't update as much. So maybe you can incorporate it into the search problem. But again, this is the simplified version of the algorithm, where the weights only depend on the action; it's not doing anything fancy, and honestly it's not doing anything that hard either. Are we worried about overfitting at all? [BACKGROUND] Yeah, it can overfit; I'll show some examples of this. We're going to code this up and then we'll see overfitting-type situations, so I'll get back to that. All right. So let's move on. The things on the slides are what I've already talked about. So here's the example: we start with 3 for walk and 2 for tram.
And the idea is: how are we going to change the costs so we get the solution we're hoping for? As I was saying, we can assume the costs only depend on the action, so Cost(s, a) is just w(a); in the most general form it can depend on the state too. So if you take any candidate output path, what is the cost of the path? It's just the sum of these w values over all the edges: w(a_1) plus w(a_2) plus w(a_3). And as you've seen in this example, the cost of a path is w(walk) plus w(walk) plus w(walk), or w(walk) plus w(tram). That's all this slide is saying; that's how we compute the cost. All right, so now let's look at this algorithm running in practice. Let me go over the pseudocode first. You start by initializing w to 0. Then you iterate for some number of rounds T over a training set of examples; it might not be just one example. The only training example I showed was that walk, walk, walk is the right path, but you can imagine having multiple training examples for a search problem. For each example you compute your prediction y' given the current w, and then you do this plus-and-minus update: for each action in the true label y, you subtract 1, to decrease the cost of the true y; and for each action in your prediction y', you add 1, to increase the cost of the predicted y. Okay. All right, so let's look at implementing this, and let's try some examples. Let's go back to the tram problem.
So this is again the same tram problem; we just want to use the same sort of format. I actually went back and wrote up the history here. If you remember, last time I was saying I'm not returning the history. Now we have a way of returning the history from each of these algorithms, because we're going to call dynamic programming and we need the history. All right, so let's go back to our transportation problem. We had a cost of 1 and 2 for walking and tram, but what we want to do is put parameters there. We want to put in this weight, and we can give that to our transportation problem. So in addition to the number of blocks, I'm now going to give it the weights of the different actions. Walking has a weight and tram has a weight, so I've updated my transportation problem to take general weight values. Now we want to be able to generate some training examples. That's what I want to do: generate training examples that we can use to get these true labels. So let's assume the true weights for our training examples are just 1 and 2; that's what we really want. And we're going to write this prediction function that we can call later to get different values of y. The prediction function takes the number of blocks N, and it outputs the path that we want, these y values. The whole point of predict is basically running this f(x) function. We define our transportation problem with N and the weights, and the way we get the path is by calling dynamic programming. Someone asked earlier, could the costs be negative?
Well, yes, because now I'm calling dynamic programming, and if this problem has negative costs, that's fine too. The history is going to contain the action, the new state, and the cost. But the thing I actually want to return from my predict function is the sequence of actions, so I'll just pull the actions out of the history that I get from dynamic programming. So I call dynamic programming on my problem, it returns a history, I get the sequence of actions from that, and that's my predict function, which I can call later. Now let's go back to generating examples. I'm going to try N from 1 to 10, so 1 block to 10 blocks, and we call the predict function with the true weights to get the true y values. Those are my true labels, and those are my examples: my examples come from calling generate examples here. Let's print out our examples and see how they look. We haven't done anything in terms of the algorithm yet; we're just creating these training examples by calling the predict function on the true weights. I have a typo here [LAUGHTER], generate examples, and I need parentheses; let me fix the typo. Okay, that looks right: those are my training examples for 1 through 9 blocks, the path you would want if you have these two weights, 1 and 2. So now I have my examples, and I'm ready to write the structured perceptron algorithm. It takes my training examples, which are these paths. Then we iterate for some range, go over all the examples with their true y values, and update our weights based on those and on our predictions. Let's initialize the weights to just be 0, for both walking and tram.
And pred actions is what we get when we call predict with the current weights. So if my current weights are 0, then pred actions is that y'. Pred actions is y', true actions is y, just like on the slides. And I want to count the number of mistakes I'm making too: if the two are not equal to each other, I keep a counter of the number of mistakes. If the two become equal, my number of mistakes is zero, I break, and maybe I'm happy then. So I make a prediction, and after that I update the weight values. How do I update? Basically: subtract 1 for each action in the true actions, which is y, the labels created from my training examples, and add 1 for each action in the prediction actions under the current weight values. And that's pretty much it; that is structured perceptron. Let's print things nicely so we can see the iteration, the number of mistakes, and the current weight values, and I break whenever there are no mistakes: if the number of mistakes is 0, I just break. Okay, that sounds good. I'm going to run this; it's not going to do anything because I didn't call it, so I'll go back and actually call it. I have another typo here; I don't know if you can guess where it is. This is going to give an error [LAUGHTER]: I called it weights, not weight, so I'll go fix that. Okay, this should run. And then this is what we get. Let's look at it: in the first iteration the number of mistakes was 6, and by the end of the first iteration we had converged to 1, 2.
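For reference, the whole code-along might look something like this in Python. This is a sketch, not the lecture's exact code: the names (`TransportationProblem`, `dynamic_programming`, `predict`, `structured_perceptron`) follow the lecture's description, but the implementation details are assumptions.

```python
# Tram search problem parameterized by per-action weights, a dynamic-
# programming predict function, and the structured perceptron loop.

class TransportationProblem:
    def __init__(self, N, weights):
        self.N = N                  # number of blocks
        self.weights = weights      # e.g. {'walk': 1, 'tram': 2}

    def is_end(self, state):
        return state == self.N

    def succ_and_cost(self, state):
        # (action, newState, cost) triples: walk goes +1, tram doubles.
        result = []
        if state + 1 <= self.N:
            result.append(('walk', state + 1, self.weights['walk']))
        if 2 * state <= self.N:
            result.append(('tram', 2 * state, self.weights['tram']))
        return result

def dynamic_programming(problem):
    # Returns (min cost from state 1, history of (action, newState, cost)).
    cache = {}
    def future(state):
        if problem.is_end(state):
            return (0, [])
        if state in cache:
            return cache[state]
        best = min(((cost + future(s2)[0], [(a, s2, cost)] + future(s2)[1])
                    for a, s2, cost in problem.succ_and_cost(state)),
                   key=lambda t: t[0])
        cache[state] = best
        return best
    return future(1)

def predict(N, weights):
    # f(x): min-cost path under the current weights; return just the actions.
    _, history = dynamic_programming(TransportationProblem(N, weights))
    return [action for action, _, _ in history]

def structured_perceptron(examples, num_iters=20):
    weights = {'walk': 0, 'tram': 0}
    for t in range(num_iters):
        num_mistakes = 0
        for N, true_actions in examples:
            pred_actions = predict(N, weights)
            if pred_actions != true_actions:
                num_mistakes += 1
            for a in true_actions:   # decrease cost of the true path
                weights[a] -= 1
            for a in pred_actions:   # increase cost of the predicted path
                weights[a] += 1
        print(f'iteration {t}, mistakes {num_mistakes}, weights {weights}')
        if num_mistakes == 0:
            break
    return weights

true_weights = {'walk': 1, 'tram': 2}
examples = [(N, predict(N, true_weights)) for N in range(1, 11)]
structured_perceptron(examples)
```

Note that when the prediction matches the true label, the subtract and add passes cancel exactly, so the weights only move on mistakes.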
In the second iteration the number of mistakes became 0, and we got 1, 2, which are the weights we were hoping for. Okay? So that looks okay to me; that's my training data, and everything looks fine. There's a question. [inaudible] more like integers, is that right? Yeah, in this case we're summing the weights as integers and adding them. Given our update model as well — well, we're assuming the number of walks and the number of trams are different. What if the tram was in a different location, but the number of walks to the tram was correct? I see what you're asking. No, it should figure that out. We can go over an example after class and I'll show you how it does it. All right, so let's try 1 and 3. With 1 and 3 it takes a little bit longer, but it does recover. 1 and 4 is actually the interesting one, because it does recover something: it recovers 2, 8. It doesn't recover 1 and 4, but given my data, there's no reason for me to get exactly 1 and 4. The ratio between them is the thing I actually care about, so even if I get 2 and 8, that is a reasonable set of weights to get. Let me try a couple more things. Let's try 1 and 5, and this is what I get: the weight of walk is minus 1, the weight of tram is 1, and my number of mistakes is 0. Why is this happening? Yeah? Your training data is all walking, so it's learning to just walk. Yeah, that's right. What's happening is, if you look at my training data up here, it's all walks. It has never seen tram, so it has no idea what the cost of tram is relative to the cost of walk, and it's not going to learn that.
So we're going to fix that. One way to fix it is to change the training data and actually get more data, so we can do that. Just one thing to remember: this is just going to fit your training data, whatever it is. When we fix that, walk becomes 2 and tram becomes 9, which is not 1 and 5, but it is getting there; it's a better ratio, and the number of mistakes is still 0. So it really depends on what you're looking for. If you're trying to match your data and your number of mistakes is 0 and you're happy with this, you can just go with it, even though it hasn't recovered the exact values. Or maybe you're looking for the exact ratios, and then you should run it longer, more iterations. Questions? Is structured perceptron susceptible to getting stuck in local optima, so maybe all we need is different initializations? Sorry, can you repeat that? Oh, sorry. Does structured perceptron have a risk of getting stuck in a local optimum, like k-means, so we'd need different initializations? That is a good question. Let me think about that. Do you see this in NLP? Do you know if this gets into local optima? I haven't experienced it personally, but I feel like there's [inaudible]. There are reasons for it to do this. Let me think about this, because even in its more general form it's commonly used in matching, like words and sentences. I haven't experienced that either, but I can look into it and get back to you. Question? I was going to ask, are you feeding it all of the optimal paths currently? Yes. But if we do feed it all the optimal paths, then technically it should converge, right? Because you're just matching paths.
If you're feeding it all the optimal paths, you're just matching paths, you're saying. [inaudible] Yeah, so in terms of bringing down the number of mistakes, it should always match. But if you have some true weights you're looking for, and they're not represented in your dataset, then it's not necessarily learning them, and in those settings you could find local optima. A related version of this is when you're doing reward learning and you actually have a true reward you want to find. In those settings you can totally fall into local optima, because you want to find what your reward function is. But you're right: if you're just matching the data. Even with the reward function, if you're off by a scaling, you still get the optimal policies. So scaling would be a different problem, right? Scaling is — yeah, you can have reward shaping, so different versions of the reward function, and if you get any of them, that's fine. But you might still get into local optima that are not explained by reward shaping. Okay, we can talk about these things offline; I should move on to the next topics, because we have more coming. I was going to skip these slides, but this is the more general form of it. Remember I was saying this w is a function of a. But you could have a more general form where your cost function is not just w as a function of a; it's actually w times a set of features, and then the cost of a path is w times the features of the path, which is just the sum of features over the edges. So you can have this more general form. Go over these slides later, maybe, because we've got to move to the next part.
Just real quick, the update in this more general form is: update your w by subtracting the features over your true path and adding the features over your predicted path. This more general form is called Collins' algorithm. Mike Collins was working on this in natural language processing; he was interested in it in the setting of part-of-speech tagging. You might have a sentence, and you want to tag each of the words as a noun, a verb, a determiner, or a noun again. He was looking at this problem as a search problem, and he was using similar algorithms to try to match each of these part-of-speech tags to the sentence. He has some scores, and based on the scores and his dataset, he moves the scores up and down, which uses the same idea. You can use the same idea again in machine translation. If you have heard of beam search: you can have a bunch of translations of some phrase, and then you can up-weight and down-weight them based on your training data. Okay? All right. So now let's move to A-star — A-star search, not AI-star. So we've talked about this idea of learning costs: we've talked about search problems in general, doing inference, and then doing learning on top of them. Now I want to talk a little bit about making things faster, using smarter ideas and smarter heuristics. There's a question. [inaudible] see what is the loss from [inaudible] in this structure? In this structure? So this is a prediction problem, right?
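The feature-based update can be sketched like this (a hedged sketch; the feature function phi used in the test is a hypothetical stand-in with one indicator feature per action, which recovers the per-action update as a special case):

```python
from collections import defaultdict

def path_features(phi, path):
    # phi(path) = sum of edge features over the path's (state, action) pairs.
    total = defaultdict(float)
    for s, a in path:
        for name, value in phi(s, a).items():
            total[name] += value
    return total

def collins_update(weights, phi, true_path, pred_path):
    # w <- w - phi(true path) + phi(predicted path)
    for name, value in path_features(phi, true_path).items():
        weights[name] = weights.get(name, 0.0) - value
    for name, value in path_features(phi, pred_path).items():
        weights[name] = weights.get(name, 0.0) + value
    return weights
```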
In that prediction problem, we are trying to figure out what the w's are by matching this y' as closely as possible to y. The way we are solving this is not necessarily as an optimization, the way we have solved other types of learning problems; we solve it by tweaking these weights to try to match my prediction as closely as possible to y, okay? All right, let's talk about A-star. I don't have internet, so I can't show these, but the link should work when you go to the file. The idea is: if you go back to uniform cost search, what we wanted to do was get from a point to some solution, but we would uniformly explore the states around us until we reach some final state. The idea of A-star is to do uniform cost search, but do it a little bit smarter and move towards the direction of the goal state. If I have a goal state, say in that corner, maybe I can move in that direction in a smarter way, okay? Here's a pictorial example. I start from s_start, and if I'm using uniform cost search, I'm uniformly exploring all the possible states until I hit s_end. Then I'm happy, I'm done, I've solved my search problem, everything is good. But I've done all this wasted effort on the other side, which is not great. So uniform cost search has this problem of exploring a bunch of states for no good reason, and what we want to do is take into account that we're just going from s_start to s_end, so we don't really need to do all of that; we can try to head straight for the end state. Okay, so let me go over to this side.
Going back to how these search problems work: the idea is to start from s_start, get to some state s, and then we have this s_end, okay? What uniform cost search does is order the states based on PastCost(s), and explore everything around it in order of PastCost(s) until it reaches s_end. But when you are in state s, there is also this thing called FutureCost(s), right? And ideally, when I'm in state s, I don't want to explore things off to the side; I want to move in the direction of reducing my future cost and getting to my end state. The cost of getting from s_start to s_end is really just PastCost(s) plus FutureCost(s), and if I knew FutureCost(s), I would just move in that direction. But if I knew FutureCost(s), the problem would already be solved, right? I'd have the answer to my search problem, and I'm still solving it. So in reality I don't have access to the future cost; I have no idea what it is. But I can potentially have access to something else, which I'm going to call h(s). That is an estimate of the future cost. So I'm going to add a function h(s), called a heuristic, and this heuristic estimates the future cost. If I have access to this heuristic, maybe I can update my cost to be the past cost plus this heuristic, and that helps me be a little bit smarter when running my algorithm, okay? So ideally, what I would want to do is explore in the order of PastCost(s) plus FutureCost(s). I don't have the future cost — if I did, I'd have the answer to my search problem. Instead, what A-star does is explore in the order of PastCost(s) plus h(s), okay?
Remember, uniform cost search explores just in the order of PastCost(s); in uniform cost search we don't have that h(s). And h(s) is a heuristic, an estimate of the future cost. All right, so what does A-star do? It's actually really simple: A-star just does uniform cost search with a new cost. Before, I had this blue cost, Cost(s, a). Now I'm going to update my cost to be Cost'(s, a), which is just the old cost plus the heuristic at the successor of s and a, minus the heuristic at s, and I can just run uniform cost search on this new cost. So Cost'(s, a) = Cost(s, a) + h(Succ(s, a)) − h(s), where Cost(s, a) is what we had before when we were doing uniform cost search. Why do I want this? What this is saying is: if I'm at some state s, and there is some state Succ(s, a) I can reach by taking action a, and there is some s_end I'm really trying to get to — remember h is my estimate of the future cost — then my estimate of the future cost of getting from the successor to s_end, minus my estimate of the future cost of getting from s to s_end, is the thing I'm adding to my cost function. I should penalize that. What this really enforces is that it makes me move in the direction of s_end: if I end up in some state that is not in the direction of s_end, the term I'm adding is going to penalize that, right? It's saying: it's really bad that you're taking that action; I'm going to put more cost on it so you never go in that direction.
You should go in the direction that goes towards your s_end. And that all depends on what your h function is, how good an h function you have, and how you're designing your heuristics. But that's the idea behind it. So here is an example. Let's say we have states A, B, C, D, and E, with a cost of 1 on every edge, and what we want to do is go from C to E. That's our plan, okay? If I'm running uniform cost search, what do I do? I'm at C, so I explore B and D because they have a cost of 1, and after that I explore A and E, and then finally I get to E. But why did I spend all that time looking at A and B? I shouldn't have done that, right? A and B are not in the direction of s_end. So instead, if someone comes in and tells me: here's a heuristic function, you can evaluate it on your states, and it gives you 4, 3, 2, 1, and 0 for these states — then you can update your cost, and maybe you'll have a better way of getting to s_end. This heuristic is actually perfect, because it's exactly equal to the future cost; the point of a heuristic is to get as close as possible to the future cost, and this one is exactly it. With this heuristic, my new cost is going to change. How? It becomes whatever the cost of the edge was before, which was 1, plus the heuristic term. Take the cost of going from C to B: it's the old cost, which is 1, plus the heuristic at B, which is 3, minus the heuristic at C, which is 2. That gives 1 + 3 − 2 = 2.
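The arithmetic in this chain example can be checked directly (a small sketch; states A–E, unit edge costs, and the perfect heuristic h = 4, 3, 2, 1, 0 are as on the slide):

```python
# Modified A* edge costs for the chain A - B - C - D - E, going C -> E,
# with cost'(s, s') = cost(s, s') + h(s') - h(s) and h = FutureCost.
h = {'A': 4, 'B': 3, 'C': 2, 'D': 1, 'E': 0}

def cost_prime(s, s_next, cost=1):
    return cost + h[s_next] - h[s]

print('C->B:', cost_prime('C', 'B'))  # edges away from the goal cost more
print('C->D:', cost_prime('C', 'D'))  # edges toward the goal become free
print('D->E:', cost_prime('D', 'E'))
```

Under the new costs, moving toward E is free (cost 0) while moving toward A costs 2, so uniform cost search on the modified costs heads straight for the goal.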
Similarly, you can compute all the new cost values, the purple values: a cost of 2 for going in the wrong direction and a cost of 0 for going towards E. And if I just run uniform cost search again here, I can get to E much more easily, okay? Yes? Does A-star result in greedy approaches, where you [inaudible]? Does A-star result in — greedy approaches? Greedy? Yes. Okay, so the question is: does A-star cause greedy behavior? No. We're going to talk about that a little bit. A-star depends on the heuristic you're choosing; with the right heuristic, A-star is actually going to return the optimal value. But yes, it does depend on the heuristic. It does the exact same thing as uniform cost search if you choose a good heuristic. Why is the cost of CB 1 here? Hold on. [LAUGHTER] My ears are really bad, so speak up. The cost of CB — oh, I see what you're saying. That's what we started with. This is the graph I started with: the blue costs were all 1. But now I'm saying those costs are not good; I'm going to update them based on this heuristic so I can get to the goal as fast as possible. [inaudible] You return the actual cost, not — you wouldn't count the heuristic in there, because it can be wrong. That's right. The question is: what cost do you return at the end? And you do want to return the actual cost. You return the actual cost, but you run your algorithm with this heuristic term added in, because that lets you explore fewer states and be more efficient. Okay, I've got to move on.
So a good question to ask is: what is this heuristic? What does it look like? Does any heuristic work well? It turns out that not every heuristic works. Here's an example. Again, the blue values are the costs that are already given; these are the things I already have, and I can just run my search algorithm with them. The red values are the values of the heuristic; someone gave them to me for now — in general, we would want to design them. So someone gives me these heuristic values, and what I want to do is compute the new cost values. The question is: is this heuristic good? I compute the new cost values, and they look like this. Does this work? We don't have time, so I'm going to answer it: it's not going to work. [LAUGHTER] The reason is that we just got a negative edge there. I'm running uniform cost search at the end of the day — A-star is just uniform cost search — and I can't have negative edges. So that was just not a good heuristic to have here. So heuristics need to have specific properties, and you should think about what those properties are. One property you want the heuristics to have is this idea of consistency; this is actually the most important property. So, talking about properties of heuristics h: they should be consistent. A consistent heuristic has two conditions. The first condition is that it satisfies the triangle inequality, which means that your updated cost should be non-negative: Cost'(s, a) ≥ 0, that is, Cost(s, a) + h(s') − h(s) ≥ 0, writing s' for Succ(s, a). Okay, that is the first condition.
And the second condition is that the heuristic at the end state is 0. The future cost of the end state should be 0 — FutureCost(s_end) = 0 — so the heuristic at the end state is also 0: h(s_end) = 0. These are the properties we want when we talk about consistent heuristics. And they're natural things to want, right? The first one basically says the cost you end up with should be greater than or equal to 0, so you can run uniform cost search on it. But it's really talking about this triangle inequality: h(s) is an estimate of the future cost, so if from s I take action a, then Cost(s, a) plus h(Succ(s, a)) should be at least h(s), the estimate of the future cost from s. That's all it's saying. And the last one also makes sense: I want FutureCost(s_end) to be 0, so the heuristic at s_end should also be 0, because again the heuristic is just an estimate of the future cost. All right. So what do I know about A-star beyond that? One thing we know is that if h is consistent, then A-star is correct. There's a theorem that says A-star is correct if h is consistent, and we can look at that through an example. Let's say I'm at s_0 and I take a_1 and end up at s_1; I take a_2 and end up at s_2; I take a_3 and end up at s_3. So I have a path that looks like this. Okay, so if I look at the cost of each of these edges — Cost'(s_0, a_1) — what is that equal to? That's my updated cost.
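The two consistency conditions can be expressed as a small check (a sketch; `edges` is assumed to be a list of `(s, s_next, cost)` triples):

```python
def is_consistent(h, edges, s_end):
    # Condition 1 (triangle inequality): every modified edge cost
    # cost(s, a) + h(s') - h(s) is non-negative.
    triangle = all(cost + h[s2] - h[s1] >= 0 for s1, s2, cost in edges)
    # Condition 2: the heuristic at the end state is 0.
    return triangle and h[s_end] == 0
```

On the A–E chain with h = 4, 3, 2, 1, 0 this check passes, which is why uniform cost search can safely run on the modified costs there.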
The updated cost is the old cost, Cost(s_0, a_1), plus the heuristic value at s_1 minus the heuristic value at s_0. Okay, so that is the cost of starting at s_0 and taking a_1. I'm going to write out all the costs for the rest of this path to figure out the cost of the path — the cost of the path is just the sum of these edge costs. So Cost'(s_1, a_2) is Cost(s_1, a_2) plus the heuristic at s_2 minus the heuristic at s_1; that's the new cost of that edge. And the new cost of the last edge, Cost'(s_2, a_3), is the old cost Cost(s_2, a_3) plus the heuristic at s_3 minus the heuristic at s_2. Okay, so I've written out all these costs, and the cost of the path is just these costs added up. If I add them up, what happens? A bunch of things cancel: the +h(s_1) cancels the −h(s_1), the +h(s_2) cancels the −h(s_2). What I end up with is that the sum of these new costs, Cost'(s_{i−1}, a_i), is just equal to the sum of my old costs, Cost(s_{i−1}, a_i), plus the heuristic at the last state, the end state, minus the heuristic at s_0. Now, I'm saying my heuristic is a consistent heuristic. What is a property of a consistent heuristic? The heuristic value at s_end is 0, so that term is 0. So what I end up with is: if I look at a path under the new cost, the sum of the new costs is just the sum of the old costs minus a constant, and this constant is the heuristic value at s_0. Why is this important? When we talk about correctness — remember, we proved at the beginning of this lecture that uniform cost search is correct, so the cost it returns is optimal. A-star is just uniform cost search with a new cost, so A-star is running on this new cost.
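The telescoping sum can be verified numerically (a quick sketch with arbitrary made-up values; the identity holds for any edge costs and any h with h(s_end) = 0):

```python
# Sum of modified costs along a path s_0 -> s_1 -> s_2 -> s_3 equals the
# sum of the original costs plus h(s_3) - h(s_0); with h(s_end) = 0 this
# is the original path cost minus the constant h(s_0).
h = [5, 2, 4, 0]          # h(s_0..s_3); h(s_end) = 0 by consistency
costs = [3, 1, 6]         # cost(s_{i-1}, a_i), arbitrary values

new_costs = [c + h[i + 1] - h[i] for i, c in enumerate(costs)]
assert sum(new_costs) == sum(costs) - h[0]   # telescoped: 10 - 5 = 5
```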
But this new cost is the same as the old cost minus a constant, so if I'm optimizing the new cost, it's the same as optimizing the old cost, and it is going to return the optimal solution. Okay, that's basically what's on the slide. So that's one property: we talked about heuristics being consistent, and we've now shown that A-star is correct, because it's uniform cost search — but only if the heuristic is consistent, only if we add that property. It's the consistency that gives us the fact that the h(s_end) term is 0, and the fact that the new costs are non-negative so I can run uniform cost search on them. The next property of A-star is that it's actually more efficient than uniform cost search, and we've kind of already seen this: the whole point of A-star is to not explore everything, but to explore in a directed manner. If you remember, how does uniform cost search explore? It explores all the states whose past cost is less than the past cost of s_end — again, in uniform cost search you're exploring in order of the past cost of states, and you explore all states with PastCost(s) ≤ PastCost(s_end). A-star explores fewer states: it explores states with PastCost(s) ≤ PastCost(s_end) − h(s). If you look at the right-hand side, it just became smaller: for uniform cost search it was PastCost(s_end), now it's PastCost(s_end) − h(s). Why did it become smaller? Because now I'm doing a more directed search; I'm not searching everything uniformly around me.
And that's the whole point of the heuristic; that's what makes it more efficient. The interpretation of this is: if h is larger, that's better, right? If my heuristic is as large as possible, then I'm exploring a smaller area to get to the solution. The proof of this is about two lines, so I'm going to skip it. Let me actually show how this looks. If I'm trying to get from s_start to s_end and I'm doing uniform cost search, I'm uniformly exploring all states around me, and that is equivalent to assuming the heuristic is 0: uniform cost search is A-star with h = 0. What is the point of the heuristic? To estimate the future cost. If I knew the future cost exactly, then h(s) would just equal FutureCost(s); that would be awesome, and I would only need to explore that small green region — just the nodes on the minimum-cost path — with nothing extra. That's the most efficient thing one can do. In practice, I don't have access to the future cost; if I did, the problem would be solved. I have access to some heuristic that is some estimate of the future cost. It's not as bad as plain uniform cost search; it gets closer to the ideal as the heuristic gets closer to the future cost, and you're somewhere in between. So it is going to be more efficient than uniform cost search in that sense. Okay. So basically, the whole idea of A-star is that it distorts the edge costs to favor the end states. I'm going to add here that A-star is efficient too; that's the other thing we have about A-star. All right.
So these are all cool properties. Um, one more property about heuristics, and then after that we can talk about relaxation. So, um, there's also this other property called admissibility, which is something that we have kind of been talking about already, right? Like, we've been talking about how this heuristic should get close to FutureCost and should be an estimate of the FutureCost. So an admissible heuristic is a heuristic where h(s) is less than or equal to FutureCost(s). And then the cool thing is, if you already have consistency, then you have admissibility too. So another property is being admissible, which means h(s) is less than or equal to FutureCost(s), okay? All right. So the proofs of these are again just one-liners; well, this one is more than one line, but- [LAUGHTER] but it's actually quite easy, it's in the notes. So you can use induction here to prove that if you have consistency, then you're going to have admissibility too. Okay, so we've just talked about how A* is an efficient thing. We've talked about consistent heuristics that are going to be useful: they give us admissibility, they give us correctness, and they make A* this very efficient thing. But we actually have not talked about how to come up with heuristics. So let's spend the next couple of minutes talking about how to come up with heuristics. And the main idea here is just to relax the problem. Just relaxation. So the way we come up with heuristics is: we take the problem and just make it easier, and solve that easier problem. So that is kind of the whole idea of it.
So remember, h(s) is supposed to be close to FutureCost(s), um, and some of these problems can be really difficult, right? If you have a lot of constraints, it becomes harder to solve the problem. So if we relax it and just remove the constraints, we are solving a much easier problem, and that can be used as the value of a heuristic that estimates what the FutureCost is. So, um, we want to remove constraints, and when we remove constraints, the cool thing that happens is: sometimes we get closed-form solutions, sometimes we just get easier search problems that we can solve, and sometimes we get independent subproblems whose solutions we can find, and that gives us a good heuristic. So that is my goal, right? I want to find these heuristics. So let me just go through a couple of examples of that. So let's say I have a search problem and I want the triangle to get to the circle, and I have all these walls there, and that just seems really difficult. So what is a good heuristic here? I'm going to just relax the problem. I'm gonna remove all those walls, just knock down the walls, and solve that problem. That just seems much easier, okay? So well, now I actually have a closed-form solution for getting the triangle to the circle: I can just compute the Manhattan distance, and I can use that as a heuristic. Again, it's not going to be what the FutureCost actually is, but it is an approximation of it. So usually you can think of heuristics as these optimistic views of what the FutureCost is; it's an optimistic view of the problem. Like, what if there were no walls? If there are no walls here, then how would I get from one location to another location? The solution to that gives you this estimate of the FutureCost value, which is h(s).
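The knock-down-the-walls heuristic above can be sketched in a few lines, assuming states and the goal are (row, col) grid positions (that representation is my assumption, not something fixed by the lecture):

```python
def manhattan_heuristic(state, goal):
    """Relaxed-problem heuristic: walking distance with all walls removed.

    state and goal are (row, col) grid positions (illustrative choice).
    This is an optimistic estimate of FutureCost: the true cost with
    walls can only be larger or equal.
    """
    (r1, c1), (r2, c2) = state, goal
    return abs(r1 - r2) + abs(c1 - c2)
```

For example, `manhattan_heuristic((0, 0), (2, 3))` returns 5: two steps down plus three steps across, ignoring any walls in between.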
Okay? Or the tram problem. Let's say we have the tram problem, but a more difficult version of it where we have a constraint, and this constraint says, "You can't have more tram actions than walk actions." So now this is my search problem, and I need to solve it. This seems kind of difficult. Like, we talked about how to come up with the states for this last time, and even that seemed difficult: I need to have the location, and I need to have the difference between the walk and tram actions. That seems kind of difficult, like I have on the order of N squared states now. So instead of doing that, well, let me just remove the constraint. I'm just gonna remove the constraint, relax it. And after relaxing it, I have a much easier search problem to deal with. I only have the location, and then I can just go with that location, and everything will be great. Okay? All right. So the idea here, where this middle part is, is that if I remove these constraints, I'm going to have these easier search problems, these relaxations. And I can compute the FutureCost of these relaxations using my favorite techniques, like dynamic programming or uniform cost search. But one thing to notice is, I need to compute that for states 1 through N, because the heuristic is a function of state, right? So I actually need to compute FutureCost for this relaxed problem for all states from 1 through N. Uh, and that allows me to have a better estimate. There are some, uh, engineering things that you might need to do here. So, for example, um, here we are looking for FutureCost, so if you plan to use uniform cost search for whatever reason (maybe dynamic programming doesn't work in this setting and you need to use uniform cost search), you need to do a few engineering things to make it work. Because if you remember, uniform cost search only works on past costs; it doesn't work on FutureCost.
So you need to create a reverse problem where you can actually compute FutureCost. So, a few engineering things, but beyond that, it is basically just running the search algorithms that we know on these relaxed problems. And that will give us a heuristic value, and we'll put that in our problem and go solve it. Okay? Um, and another cool thing that heuristics give us is this idea of having independent subproblems. So here's another example. I want to solve this eight puzzle: I move tiles here and there and come up with this new configuration. Um, that seems hard again. A relaxation of that is to just assume that the tiles can overlap. So the original problem says the tiles cannot overlap. I'm just gonna relax it and say, "Well, you can just go wherever and you can overlap." Okay? So that is again much simpler, and now I have eight independent problems of getting each one of these tiles from one location to another location, and I have a closed-form solution for that, because that's again just Manhattan distance. So that gives me a heuristic, that's an estimate. It's not perfect, it's an estimate. And then I can use that estimate in my original search problem to solve the search problem. So these were just some examples of this idea of removing constraints and coming up with better heuristics. So like knocking down walls, walking and taking the tram freely, overlapping pieces, and that allows you to solve this new problem, uh, and the idea is you're reducing these edge costs from infinity to some finite cost. Okay? All right. So, um, yeah, I'm gonna wrap up here, uh, and I guess we can always talk about these last few slides next time, uh, since we're running late, uh, but I think you guys have got the main idea. So let's talk next time.
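The reverse-problem trick mentioned earlier (computing FutureCost for every state of the relaxed problem by searching backwards from the goal) can be sketched as follows. The graph representation and names here are my own assumptions: states are hashable, and `neighbors[s]` lists `(successor, cost)` pairs.

```python
import heapq

def future_costs(neighbors, s_end):
    """FutureCost(s) for all states, via Dijkstra/UCS from s_end on the
    *reversed* graph: an edge s -> t with cost c becomes t -> s with cost c.
    neighbors[s] is a list of (t, cost) pairs for forward edges s -> t."""
    rev = {}
    for s, edges in neighbors.items():
        for t, c in edges:
            rev.setdefault(t, []).append((s, c))
    dist = {s_end: 0}
    frontier = [(0, s_end)]
    while frontier:
        d, s = heapq.heappop(frontier)
        if d > dist.get(s, float("inf")):
            continue  # stale queue entry
        for t, c in rev.get(s, []):
            if d + c < dist.get(t, float("inf")):
                dist[t] = d + c
                heapq.heappush(frontier, (d + c, t))
    return dist
```

On a toy chain 1 -> 2 (cost 1) -> 3 (cost 2) with goal 3, this returns FutureCost values {3: 0, 2: 2, 1: 3}, one heuristic value per state, as the lecture requires.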
Okay, so let's begin. Um, first of all, um, I want to say congratulations, you all survived the exam. Uh, well, you don't have your grades back, but you completed it. Um, so yeah, we're gonna have the grades back as soon as we can. Um, the CAs are all busy grading. And we're actually gonna cancel office hours today so we can focus on getting those grades back to you quickly. Um, after this, the course definitely goes downhill, so you guys can [LAUGHTER] kind of take a breath. Um, so after the exam there's pretty much just two things left. So there's the project, um, so the final presentation, uh, the poster session for the project is going to be, I believe, the Tuesday after vacation. Um, it's like a big auditorium hall; there's gonna be a lot of people from industry and academia. Um, it's really exciting to have, you know, so many smart people showing off their hard work. Um, and then you have the last p-set, which is logic. Yeah? It's on Monday. Oh, Monday, okay, yeah. Um, so right after you're back from vacation is that poster session. Um, and then on Thursday the last p-set is due, logic. Um, so, this is not my official opinion, but I think logic, when I took the class, was easier than the others. Uh, it doesn't take as much time, so you guys are definitely past the hardest point in this class. Yeah, I think that's the general opinion, yeah. Um, but that being said, I wouldn't wait until the last minute, so I'd still start early and- Personally, I didn't get the [inaudible]. [LAUGHTER] Then, um, yeah, so Piazza and the office hours will be your best friends, yeah. Um, okay. So today, though, we're talking about this fun advanced-ish topic, which is deep learning.
Um, I say ish because I think a lot of you are probably working, um, on deep learning or have heard of it already. A lot of you have it in your projects, um, and today we're just gonna do a very high-level, broad pass over a lot of different subjects within deep learning. Hopefully get you excited, um, give you kind of a shallow understanding of a lot of different topics, so that if you wanna take follow-up classes like 224N or, um, 229 even, uh, then you'll be armed with some background knowledge. Um, okay, so first we're gonna talk about the history. So deep learning, you've probably heard of it, it's really big, especially in the last five, ten years, but it's actually been around for a long time. Um, so even back in the '40s, there's this era where people were trying to build computational neuroscience models. They knew back then that, you know, there are neurons in the brain and they're arranged in these networks, and that intelligence arises from these small parts. And so they really wanted to model that. Uh, the first people to really do this were McCulloch and Pitts. Uh, so Pitts was actually a logician, and, um, they were concerned with making these kinds of logical circuits out of a network-like topology. Um, like, what kinds of logical expressions can we implement with a network? Um, back then this was all just a mathematical model. There was no backpropagation, there were no parameters, uh, there was no inference. It was just trying to, uh, write, I guess, theorems and proofs about what kinds of problems these structures can solve. Um, and then Hebb came along about 10 years later and started moving things in the direction of, I guess, training these networks. Uh, he noticed that if two cells are firing a lot together, then they should have some kind of connection, um, that is strong. Uh, this was inspired by observation.
So there's actually no formal math theory backing this; a lot of it was just, uh, very smart people making, um, conjectures. And then, um, neural networks were, I guess you could say, maybe in the mainstream, like a lot of people were thinking about them and excited about them, until 1969, when Minsky and Papert released this, uh, very famous book called Perceptrons, uh, which was this big fat book of proofs. And they basically proved a bunch of theorems about the limits of, uh, very shallow neural networks. Um, so for example, [NOISE] um, very early in this class we talked about the XOR example, where if you have, um, two classes and they're arranged in this, um, configuration, then there's no linear classification boundary that you can use to separate them and classify them correctly. And so Minsky and Papert, in their book Perceptrons, came up with a lot of these, um, I guess you could say counterexamples, um, a lot of theorems that really proved that these thin neural networks couldn't do a lot. Um, and at the time it was a bit of a killing blow to neural network research. Uh, so mainstream AI became much more logical, and neural networks were pushed very much into, I guess, a minority position. Uh, so there were all these people thinking about and working on it, but mainstream AI went definitely towards the symbolic, logic-based, um, methods that Percy has been talking about the last couple of weeks. Um, but like I said, there were still these people in the background working on it.
So, um, for example, in 1974, um, Werbos came up with this idea of backpropagation that we learned about: using the chain rule to automatically update weights in order to improve predictions. Um, and then later on, um, Hinton, and Rumelhart, and Williams, they, I guess you could say, popularized this. So they, um, rediscovered Werbos's findings and really said, "Oh, hey everybody, you can use backpropagation." Um, and it's a mathematically well-founded way of training these deep neural networks. Um, and then there's the '80s. So today we're gonna talk about two types of neural networks, convolutional neural networks and recurrent neural networks, and the convolutional networks trace back to the '80s. So there's this neocognitron that was invented by a Japanese researcher, Fukushima, [NOISE] and it kind of laid out the architecture for a CNN, but there was no way of training it. And in the actual paper, they used hand-tuned weights. They're like, oh hey, there's this architecture you can use, and basically we just, by trial and error, came up with these numbers to plug in; look at how it works. Uh, now that just seems insane, but back then, you know, there were no ways of training these things. Um, until LeCun came along about 10 years later, and, um, he applied those ideas of backpropagation to CNNs. And LeCun actually came up with LeNet, which was a very famous check-reading system, um, and it was one of the first industrial, large-scale applications of deep learning. Uh, so whenever you write a check and have your bank read it, um, almost all the time there's a machine-learning model that reads that check for you, and, um, those check-reading systems are some of the oldest machine-learning models that have been used at scale.
And then later, recurrent neural networks came in the '90s. So Elman kind of proposed them, and then there's this problem with training them that we'll talk about later, um, called exploding or vanishing gradients. And then, um, Hochreiter and Schmidhuber, about 10 years later, came out with something that solved those issues to some extent: the long short-term memory network, an LSTM. And we'll talk about that later. Um, but I guess you could still say that, um, neural networks were kind of in the minority. So in the '80s, you had a lot of rule-based AI; um, in the '90s, people were all about, uh, support vector machines and inventing new kernels. Um, if you remember, a support vector machine is basically just a linear classifier with the hinge loss, and a kernel is a way of projecting, um, data into kind of a non-linear subspace. Um, but in the 2000s, people finally started making progress. Um, so Hinton had this cool idea of, hey, we can train these deep networks one layer at a time. So we'll pre-train one layer, and then we'll pre-train a second layer and stack that on, a third layer, stack that on, and you can build up these successive representations. Um, and then deep learning kinda became a thing. Er, so that was maybe, uh, three, four years ago, when things started taking off. And ever since then, it's really been in the mainstream, and as kind of evidence of its mainstreamness, uh, you can look at all of these applications. So, speech recognition. Um, for almost a decade, um, state-of-the-art recognizers were using hidden Markov model based systems; that was the heart of these algorithms. And for 10 years, performance just stagnated, and then all of a sudden neural networks came around and dropped the error.
And what's new and surprising is that all of the big companies, so IBM, Google, Microsoft, they all switched over from these classical speech recognizers to fully end-to-end neural-network-based recognizers, er, very quickly, in a matter of years. And when these large companies are operating at scale, and, you know, dozens, maybe hundreds of people have tuned these systems very intricately, for them to so quickly and so radically shift the core technology behind this product really speaks to its power. Um, same thing with object recognition. So there's this, er, ImageNet competition, er, which goes on every year, that basically asks, how well can you say what's in a picture? And so for years people used these handcrafted features, um, and all of a sudden AlexNet was proposed, and it got almost half the error of the next best submission for this competition, and ever since then people have been using neural networks. And now if you want to do computer vision, um, you kind of have to use these CNNs; it's just the default. If you walk into a conference, every single poster is going to have a CNN in it. Um, same thing with Go. So, um, Google DeepMind had a CNN-based, um, algorithm they trained with reinforcement learning, and it beat the world champion in this very difficult game. And then in 2017 it did even better: it didn't even need real data, it just did self-play. Um, and machine translation. So Google Translate, for almost a decade, had been working on building a very advanced and very well-performing classical machine translation system, and then all of a sudden, um, the first neural machine translation systems were proposed in 2014-2015, and about a year later they threw away, you know, almost a decade of work on this system and transferred entirely to a completely new algorithm, um, which again speaks to its power.
Er, so what is deep learning? Like, why is this thing so powerful and why is it so good? Um, I think, broadly speaking, it's a way of learning, um, of taking any kind of data you want, like a sequence, a picture, um, even vectors, um, or even a game like Go, and turning it into a vector, and this vector is going to be a dense representation of whatever information is captured by that data. And this is very powerful because these vectors are compositional, and you can use these components, these modules of your deep learning system, kind of like Lego blocks: you can, you know, concatenate vectors and add them together, and this compositionality makes it very flexible. Um, okay. So today we're going to talk about feedforward neural networks; convolutional networks, which work on images, or I guess just anything with repeated structural information in it; recurrent neural networks, which operate over sequences; and then, if we have time, we'll get to some, um, unsupervised learning topics. Okay, so first, feedforward networks. Um, so in the very beginning of this class we talked about linear predictors. A linear predictor, um, if you remember, is basically: you define a vector w, your weights, and then you hit it with some input, and you dot them together, and that just gives you the output. Um, and neural networks are defined very similarly. So you can think of each of these hidden units as the result of a linear predictor, in a way. Um, so working backwards: you have some hidden values, you dot them with a vector of weights, and you get your output; and you arrive at those hidden values the same way, by defining a vector of weights for each one and hitting it with the inputs. Er, so you use your inputs to compute hidden values, and then you use your hidden values to compute your final output.
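The inputs-to-hidden-to-output flow just described can be sketched in a few lines of NumPy. All the sizes and random values here are illustrative, and I've used a ReLU nonlinearity between the layers as one common choice:

```python
import numpy as np

# Minimal two-layer forward pass: input x -> matrix V -> hidden h -> weights w -> output.
rng = np.random.default_rng(0)
x = rng.standard_normal(3)       # input vector (3 features, illustrative)
V = rng.standard_normal((4, 3))  # first layer: 3 inputs -> 4 hidden units
w = rng.standard_normal(4)       # final layer: 4 hidden units -> 1 number

h = np.maximum(0, V @ x)  # each hidden unit is a little linear predictor, plus ReLU
y = w @ h                 # dot the hidden vector with weights: a single output
```

Each row of `V` plays the role of one "mini linear predictor" from the lecture; stacking another matrix between `h` and `w` would give a deeper network.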
Um, so in a way, you're kind of stacking linear predictors. So h1, h2, and the final output are all, I guess you could say, the result of a little mini linear predictor, and they're all kind of roped together. Um, so just to visualize this: if we want to go deeper, you just rinse and repeat. So you can say this is a one-layer neural network; it's what we were talking about before, the linear predictor. You have your vector of weights and you apply it to your inputs. For two layers, instead of a vector, you apply a matrix to your inputs, which gives you a new vector, and then you dot this intermediate vector, this hidden vector, with another set of weights, and that gives you your final output. And then you can just rinse and repeat: you pass through a matrix to get a new vector, you pass that through another matrix to get a new vector, and then finally, at the very end, you dot it with a vector to get a single number. Um, so just a word about depth; that's one of the reasons why these things are really powerful. Um, there are a lot of interpretations for why depth is helpful and why stacking these matrices works well. One way to think about it is that the network learns representations of the input which are hierarchical. So h is going to be some kind of representation of x. H prime is going to be a slightly higher-level representation of x. So for example, in a lot of image processing systems, h could represent, um, the edges in a picture. H prime would represent, um, corners. H double prime could represent small parts, like fingers or something. H triple prime would be the whole hand. Er, so it's successively higher-level representations of what's in the data you're giving it.
Um, another way to think about it is that each layer is kind of like a step in processing, um, like a for-loop, where the more iterations you have, the more steps you have, the more depth you have, um, the more processing you're able to perform on the input. And then last, um, the deeper the network is, the more kinds of functions it can represent, so there's flexibility in that as well. Um, but in general, there isn't really a good formal understanding of why depth is helpful, and I think in a lot of deep learning there's definitely a gap between the theory and the practice. Um, so yeah, this just goes to show why depth is helpful: if you input pixels, maybe your first layer is giving you edge detection, and your second layer is giving you little eyes or noses or ears, and then your third layer and above is giving you whole objects. Um, yeah. So just to summarize: we have these deep neural networks and they learn successively higher-level, hierarchical representations of the data; um, I guess you could say it's like gaining altitude in its perspective. Um, you can train them the same way that we learned how to train our linear classifiers, just with gradient descent. Um, so you have your loss function, you take the derivative of the loss with respect to the parameters, and then you propagate the gradients and step in a direction that you think will be helpful. Um, and this optimization problem is difficult, um, so it's non-linear and non-convex, um, but in general we've found that if you throw a lot of data at it and a lot of compute at it, then somehow you manage. Um, okay. So it seems like the slides are a little out of order, but basically, just to review how you train these things: um, in general, it's the same as a linear predictor. You define a loss function.
So for example, this is squared loss, where you say: I'm going to take the difference between my true output and my predicted output, and square that, and the idea is to minimize this. Um, and the way you do that is you sample data points from your training data, you take the derivative of your loss function with respect to your parameters, and then you move in the opposite direction of that gradient, which will hopefully move you down on the error surface. Um, so the problem is a non-convex optimization problem. So, er, for example, a linear classifier, because it's linear, will have a convex loss; it'll just look like a bowl. Um, whereas with these things, you have these non-linear activation functions, and you end up with a very messy-looking error surface. Um, and before the 2000s, that was the number one thing that was holding back neural networks: they are difficult to get working, hard to train. Um, and so basically the things that have changed are, one, way faster computers. We have GPUs, which can parallelize operations, especially those big matrix multiplications. And then there's a lot more data. Um, but that's not the whole story. There are also a lot of other tricks that we've found out recently. So for example, if you have lots of hidden units, then that can be helpful because it gives more flexibility, you could say, in the optimization. Like, if you over-provision, if your model has more capacity than it needs, then you can be more flexible with the kinds of functions that you can learn. Um, so we have better optimizers. So whereas SGD will step in the same direction by the same amount every time, we have these newer optimizers like AdaGrad and Adam that decide how far to move in a direction, once you've decided the direction.
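The gradient step on squared loss, and the AdaGrad-style idea of shrinking per-coordinate step sizes, can both be sketched as below. The learning rates, data values, and function names are all illustrative, and the AdaGrad sketch is a rough rendering of the idea rather than a full implementation:

```python
import numpy as np

# Squared loss on one sampled example: loss(w) = (w . x - y)^2.
def sgd_step(w, x, y, lr=0.05):
    residual = w @ x - y        # predicted output minus true output
    grad = 2 * residual * x     # derivative of the squared loss w.r.t. w
    return w - lr * grad        # move opposite the gradient

# AdaGrad-style variant: each coordinate's step size shrinks as that
# coordinate accumulates squared gradient over the run.
def adagrad_step(w, grad, hist, lr=0.1, eps=1e-8):
    hist = hist + grad ** 2
    return w - lr * grad / (np.sqrt(hist) + eps), hist

w = np.zeros(2)
x, y = np.array([1.0, 2.0]), 3.0
for _ in range(100):
    w = sgd_step(w, x, y)  # the loss on this example shrinks toward zero
```

Note how in `adagrad_step` the same raw gradient produces a smaller and smaller update as `hist` grows, which is the "decide how far to move once you've decided the direction" behavior described above.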
We have dropout, which is where you add noise to the outputs of each hidden unit, and that makes the model more robust to its own errors and guards against overfitting. Um, there are better initialization strategies. So there are things like Xavier initialization, and there are things like pre-training the model on a related dataset before moving on to the data you actually care about. And then there are tricks like batch norm, uh, which is where you ensure that the inputs to your neural network units are normally distributed, with mean zero and standard deviation one, and what that does is it allows you to basically take bigger step sizes. Um, yeah, and the takeaway here is that in general the optimization problem and the model architecture you define are very tightly coupled, and, um, it's kind of black magic to get the right balance that you need, and we're still not very good at it. Um, okay. So we're gonna talk about convolutional neural networks now, and these operate over images. Um, the motivation is: okay, so we have a picture here, right? And we want to do some kind of machine learning processing on it. Um, we have all the tools that we need to do that. You could say, okay, each pixel is an element in a big long vector, and then I'm just gonna throw that at a matrix. Um, but the thing is, that doesn't really take advantage of the fact that there's spatial structure in this picture. So this pixel is going to be more similar to this pixel than to this pixel down here. But if you pass this entire thing through a matrix, then every pixel is gonna be treated uniquely and differently, and so we wanna leverage that spatial structure. And the core idea is, um, convolutions. So with convolutions you have this thing called a filter, which is some collection of parameters, and what you do is you run your filter over the input, um, in order to produce each output element.
So for example, this filter, when applied to this upper left corner, um, produces this upper left corner of the output, and an application of a filter works like a dot product, where you multiply the numbers pairwise and then add them all up. And so the way you produce these outputs is you take your filter and you basically just slide it around the input, um, in order to get your output at the next layer. Um, so this example is a little more concrete. So whereas that was a two-dimensional convolution, because we had a two-dimensional filter and we were sliding it around in both dimensions, this one is one-dimensional: we have a one-dimensional filter, and we slide it horizontally across. So for example, at the very left, we apply it: 1 times 0 is 0, 0 times 1 is 0, and negative 1 times 2 is negative 2. So negative 2 goes in the output. And then we do the similar thing here: we would dot this filter with, um, these three numbers in order to arrive at two. Um, one of the advantages of this: let's say you had, um, four inputs, so this is your hidden layer, h_1, h_2, h_3, and h_4, and then you had four inputs, x_1, x_2, x_3, and x_4. If you did a regular fully-connected matrix layer, then every one of these is going to be connected to every one of these, and you're gonna end up with a four-by-four matrix of parameters: W_11, W_12, W_13, W_14, and so on. That's what your matrix W is gonna look like, because you need a weight for every one of those connections. Whereas if you're doing convolutions, it's much more efficient, because there's this idea of local connectivity. So you have your h_1, h_2, h_3, h_4 and x_1, x_2, x_3, x_4, and each hidden unit is only connected to, um, what's called its receptive field.
Which is the set of inputs that the filter would be applied to, and in this case we will only have three weights, because we just have this sliding window and we apply it at each step. Um, so A, it gives you local connectivity. Um, B, it's much more efficient in terms of parameters: you're sharing the same parameters at different places in the input. Um, and it gives you this cool intuition of sliding around in the input. So it's like, I have my filter of three things, and, let's say, this is negative 1, this is 100, this is 1, and this is 3. Then you can interpret this as, my filter really likes whatever pattern is going on in these three inputs, um, and it doesn't like so much all the other patterns that it's picking up on. Um, so yeah, you have this nice interpretation for the filters. Um, in general, what this looks like in practice is, instead of one-dimensional or two-dimensional, they're very high-dimensional volumes. Um, and so your filter is going to be a cube in the input space, and you're sliding it around and applying it at every place it can fit in the input. And then, the reason why the output is also a volume is because, um, you have multiple filters. So over here, for example, this blue filter, when you slide it around the input, is gonna give you this, uh, plane of outputs. But then you have a second filter, this green filter, that you can also slide around the input, and that's gonna give you a second dimension of your hidden states. Um, so Andrej Karpathy has this nice demo where basically we have a three-dimensional input and two filters, which you can think of as little cubes, and it's sliding these cubes around the input, and every application gives you one output in this, um, three-dimensional output volume.
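The one-dimensional sliding-window convolution described above can be sketched as follows. The filter values match the worked example as I read it ([1, 0, -1]), and the input is illustrative:

```python
# Slide a 1-D filter across the input, taking a dot product at each position.
def conv1d(xs, filt):
    k = len(filt)
    return [sum(f * x for f, x in zip(filt, xs[i:i + k]))
            for i in range(len(xs) - k + 1)]

conv1d([0, 1, 2, 3, 4], [1, 0, -1])
# first position: 1*0 + 0*1 + (-1)*2 = -2, matching the lecture's example
```

Note the parameter sharing: no matter how long the input is, this layer only ever has `len(filt)` weights, versus a full weight per input-output pair in a fully-connected layer.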
So this is the same picture as before, where you're sliding around cubes in order to fill in layers of the output. Another thing people do is max pooling. So remember that interpretation of a filter as a pattern detector. What this is saying is you take a region in your input: you run your filters over the input, get your preliminary output, and then you look at regions in the output, take the maximum activation, and carry that on to successive layers. An intuition there is that you're searching for a pattern in a region of the input. And it's also helpful because, remember, at the end of the day we want to do classification or regression or something. We want to get this thing down to a very small number of numbers, and if we have this huge high-dimensional volume, then any way we can reduce its size is good. So this is an example of how these things work. They're pretty straightforward, basically: you have your convolutional layers, you stack them all up, every once in a while you have some pooling, and you go down and down in dimensionality until you eventually get down to a distribution over possible labels. And this ties into what I was saying before about that Lego block analogy, because this entire network is built up of one, two, three, four different Lego blocks in a way, and it's basically just stacking them on top of each other and composing them in order to get an image classifier. So I'm going to talk about three case studies of CNN architectures. The first one is AlexNet. This was the one that did really well in the ImageNet competition and really brought CNNs to the mainstream for computer vision. Basically, it was just a really big neural network. One trick they did was they used ReLUs instead of the sigmoid.
So the sigmoid that we've learned about is an activation function, and it's going to look something like this. What they did was instead use the ReLU, which looks a little more like that, and in practice it turns out to be a little easier to train and use. The next one is VGGNet, which did well on ImageNet a couple of years later. Basically, it's very similar, it's just a CNN. I think the thing to note about this one is that it's very uniform: it was 16 layers and there's nothing fancy in it. It was just a bunch of these Lego blocks stacked up. The entire network is pretty much, just by looking at this picture, you could probably re-implement their network. Something else to note is that it started this trend of deeper, kind of tall and skinny networks. So you'll notice that there's a lot of layers, but each layer is very thin. And residual networks, or ResNets, take that to the nth degree. So the idea with a ResNet is, most of the time you take your input and you pass it through a matrix to get an output. If you add in your input again, that is very helpful because it makes it easy for the model to learn the identity function, and so you can give the model the capacity for, like, 100 layers. If you add in these residual connections, which is what you call it when you basically just add in x, then it allows the model to skip a layer if it decides that that's what's best for itself: you just set W to 0. It also helps with training. In back-propagation, if you take the derivative of the loss with respect to your input, that derivative is just going to be 1 for this part of the sum.
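A minimal sketch of the two pieces just described, assuming scalar inputs for simplicity (a real residual block operates on vectors and has its own learned weights):

```python
import math

def relu(z):
    """ReLU: zero for negative inputs, identity for positive ones."""
    return max(0.0, z)

def sigmoid(z):
    """The sigmoid/logistic activation, squashing into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def residual_layer(x, w):
    """A residual layer computes f(x) + x rather than just f(x).
    With w = 0 the layer outputs x unchanged, so learning to "skip"
    the layer amounts to driving the weight to zero."""
    return relu(w * x) + x

print(residual_layer(2.0, 0.0))   # → 2.0 (identity when the weight is zero)
print(relu(-3.0), relu(3.0))      # → 0.0 3.0
```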
And so, in a way, you could think of it as giving the error signal kind of a highway through the network, and it allows the gradients to propagate much deeper into these large neural networks. And so ResNet got a 3.6% error on ImageNet. If you remember, AlexNet blew everyone out of the water and got like 15%. I think this is much better than human performance, and this idea of residual connections will come up later when we talk about recurrent neural networks. So just to summarize: convolutional neural networks are often applied in image classification. The key idea is that you have these filters which you slide around the input, and that gives you, one, this idea of local connectivity, so a unit in the output only depends on a small patch of the input instead of the entire input. And second, the parameters are shared. Depth has turned out to really matter for these networks, and I think to this day it's like every day there's just a deeper network coming out, and people haven't really found a bound to depth, I guess. Yeah? What's the best way to design one of these networks? Is it trial and error, where effectively you're iterating toward some result, or is there any intuition as to how many layers, what the layers should be, and so forth? Yeah. So the question was how to design these things, since they seem so arbitrary. [LAUGHTER] And yeah, it is really arbitrary. I think there's a few different ways. So first, you start with something. Okay.
So first, you would start with something that sounds reasonable, and then you would do some kind of grid search. Or, there's now a literature on meta-learning, which is where you have a model decide what your model looks like. But in most cases you just kind of hand-tune it; you're like, oh, if I add a layer, does it go up or down? Second, you look at the literature and say, okay, someone else solved a similar problem to mine and they used network X, Y, and Z, so I'm gonna start with that and then start fiddling from there. And then third is to literally take a network that's been pre-trained on a task and apply it to yours. We'll talk about it later, but pre-training networks and applying them to your task has been shown to be very helpful. Okay. So now we're gonna talk about recurrent neural networks. The idea here is that you're modeling sequences of input. This could be things like text or sentences; it could also be things like time series data or financial data. And a recurrent neural network is something that feeds its past states back into itself, so it has time dependencies. So for example, we have this very simple recurrent neural network here. It is a function with one matrix, and it takes as arguments a past hidden state and a current input, and then it predicts the next hidden state. So this is what it looks like if you were to write it in code or something, and this is what the actual network looks like. There's an input, and you feed that into your function as well as your current state, and it just kind of loops on itself. Most of the time, people talk about this third perspective, which is taking this network and unraveling it, or I guess you could say unfolding it, across time, where at every time step you have an input and you have a state, and then you have your function which carries you to the next state. Yeah?
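That unrolled-loop view can be written down directly. The step function below is a made-up toy, but the structure, one shared function repeatedly fed its own previous state, is the point:

```python
def rnn(inputs, step, h0):
    """Unroll an RNN: feed each input, together with the previous hidden
    state, into the *same* step function, collecting the states over time."""
    h = h0
    states = []
    for x in inputs:
        h = step(h, x)   # new state depends on old state and current input
        states.append(h)
    return states

# toy step: the new state is a decayed copy of the old state plus the input
print(rnn([1.0, 2.0, 3.0], lambda h, x: 0.5 * h + x, h0=0.0))
# → [1.0, 2.5, 4.25]
```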
I'm just curious, how does it differ from having your original weight and updating that weight? Because it sounds like that's a similar analogy. [inaudible] we have the previous state. Yeah. But how is it compared to our machine [inaudible] classifier, where we had the previous weights and we're just updating the previous weights? Oh, I see. So the question was: what's the difference between this and the setting before, when we were doing stochastic gradient descent and updating our weights sequentially? Yeah, that is an interesting question. The difference is that before, SGD was sequential in the training, whereas this is sequential in the inference. So you feed in, let's say, ten time steps as inputs, and after all that time, then you back-propagate once for all those time steps. To make that more clear: for SGD, it's like you have x_1, y_1, x_2, y_2, and you use these to update w, right? So you update w, and then you update w again. And, yeah, that's an interesting observation that there's this kind of time dependency, but there's no time within the data itself. For the recurrent setting, it's more like this. Which marker is better? So it's more like you have x_1_1, x_1_2, x_1_3, and y, and then x_2_1, x_2_2, x_2_3, and y, and then you use this to update w. And in this setting, when we talk about time, or temporal, like when we talk about a sequence, we're talking about a sequence in the data, not necessarily in the learning. Yeah. Okay. So to make this a more concrete example, we're gonna talk about a neural network language model. This is a model that is in charge of sucking in a sentence and predicting the most likely word to come next in the sentence. So each input we call x, and our hidden states we call h's.
And the way this works is we have some function that takes x_1 and encodes it into our hidden state. And then we have a second function that takes the hidden state and decodes it into the next input. We continue by taking both x_2, our next input, and h_1, our previous hidden state, and we use that to create a new hidden encoding. And then we take that new hidden encoding and decode it into our next input. And we just rinse and repeat: each time, we take the current input and the previous hidden state to, first, create an encoding and, second, predict our next input. So there's those two steps. The cool thing to note is that now we're building up vectors, these h_i's, and that's exactly what we're looking for: a vector that, in some way, captures the meaning, or a summary, of all the inputs that we fed in up until that time step. So now we have a vector which compresses all those inputs into one vector. To make this very concrete, one way you could build this thing is, basically, for each of these arrows you stick a matrix in there. So our encode function would take the input x_t and multiply it by a matrix to get a vector, then take the previous hidden state h_t minus 1 and multiply that by a different matrix to get a new vector, and you add those vectors, and that gives you a vector which is your new hidden state. And decode is the same thing: you take your hidden state, pass it through a matrix to get a vector, and then send that through a softmax to turn the vector of logits into a distribution over probabilities. In general, though, there's this problem with recurrent neural networks. If there is a short dependency, meaning an output depends on a recent input, then the path through this network is very short, right? So it's easy for the gradients to reach where they need to reach in order to train the network properly.
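A sketch of that matrix version of encode/decode. The shapes, the random weights, and the tanh nonlinearity are illustrative assumptions; the description above only specifies the two matrix multiplies, the addition, and the softmax:

```python
import numpy as np

rng = np.random.default_rng(0)
d, vocab = 4, 5                      # made-up hidden size and vocabulary size
W_x = rng.normal(size=(d, d))        # input  -> hidden
W_h = rng.normal(size=(d, d))        # previous hidden -> hidden
W_o = rng.normal(size=(vocab, d))    # hidden -> logits over the vocabulary

def encode(h_prev, x):
    # two matrix multiplies, added together (tanh added as a common
    # squashing nonlinearity; an assumption beyond the lecture's description)
    return np.tanh(W_x @ x + W_h @ h_prev)

def decode(h):
    logits = W_o @ h
    e = np.exp(logits - logits.max())    # numerically stable softmax
    return e / e.sum()                   # logits -> probability distribution

h = encode(np.zeros(d), rng.normal(size=d))
p = decode(h)
print(p.sum())   # → 1.0 (a valid distribution over the next word)
```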
But if there's a very long dependency, then the gradients have difficulty getting all the way through. If you remember, we talked about gradient descent as a credit-assignment problem, where the gradient is in some ways saying: if I perturb this input by a small amount, how much will the output change? In some sense, that's what the gradient is saying. But if the input and output are super far away from each other, then it's very difficult to compute how small perturbations of the input would affect your output. And the reason for that, we won't get into it so much, but basically, if you want to compute the gradient, what you have to do is trace the entire path of that dependency, look at all the partial derivatives along that path, and multiply them all up. And the problem is that if the path is very long, then you're multiplying a lot of numbers, and so if your numbers are less than 1, then the overall product is going to get really small really fast, right? And if the numbers are bigger than 1, then that product is going to blow up really quickly. And that is a problem, because it means your gradients are going to be tiny with no learning signal, or they're going to be way too big and you're going to shoot off in some crazy direction that, in practice, will blow up your experiments and nothing will work. [LAUGHTER] So it's a problem. The good thing is that the exploding gradient problem is not so bad; there's a quick fix. What people do is what's called clipping gradients. You'll specify some norm. You'll be like, any gradient with a norm bigger than 2, I'm going to clamp off at 2. So if your gradients explode and they go to 10 million, you're gonna say, "Okay, that's bigger than two."
"So it wasn't ten million, it was actually two," and you go from there. [LAUGHTER] But for the vanishing gradient problem, here's this cool idea: the long short-term memory cell, or LSTM, which is similar to a recurrent neural network but has two hidden states. So this is kind of a wall of equations, but I think the important thing to note is that this is basically like your input, in a way, and this is kind of like your previous hidden state. And what's going on here is an additive combination: you're taking your input and adding in your previous hidden state, very similarly to those residual connections in the ResNet. And because you're adding in your previous state, it's kind of like adding in your previous input, I guess you could say, and it gives the gradients kind of a highway to very easily go back in time. There's another perspective on this. In this picture the notation is different, but the thing to note is that those are the hidden states of the network. In an LSTM, you could say there's two hidden states; that's what people say. So you have one hidden state, which is your h_t, and that's the state that you expose to the world. If you say, "My LSTM is gonna have the same API as my RNN," then this would be the equivalent of that hidden state that we have for the RNN. But then you also have this internal hidden state, the c state, that you never expose to the world. And so in this picture, sorry, the notation is a little confusing, but the o in this picture corresponds to the h in the previous picture. So this is the hidden state that you are exposing to the world. And then the s corresponds to c, so this is your internal hidden state. And the thing to note about this picture is that s is just zipping around on what's called the constant error carousel.
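The multiplied-derivatives problem, and the clipping fix mentioned a moment ago, can both be seen numerically (the numbers here are illustrative):

```python
import numpy as np

# Multiplying many per-step derivatives: factors below 1 vanish,
# factors above 1 explode.
print(0.9 ** 100)   # tiny: roughly 2.7e-05 (vanishing)
print(1.1 ** 100)   # huge: roughly 1.4e+04 (exploding)

def clip_gradient(g, max_norm=2.0):
    """The quick fix for exploding gradients: if the gradient's norm
    exceeds max_norm, rescale it so its norm is exactly max_norm;
    otherwise leave it alone."""
    norm = np.linalg.norm(g)
    return g * (max_norm / norm) if norm > max_norm else g

g = np.array([6.0, 8.0])                   # norm 10: "bigger than two"
print(np.linalg.norm(clip_gradient(g)))    # → 2.0
```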
And it's always internal, zipping around this thing in a loop. What it ends up doing in practice is, it's a vector, and it ends up containing very long-term information that's useful for the network over many, many time steps. So if you poke around at individual dimensions of that state, you can find these long-term things being learned. For example, Andrej Karpathy has a great blog post: you find units that track the length of the sentence, you find units that track syntactic cues like quotes or brackets. But in general, you find a lot of things that are just not easily interpretable. So one last cool idea that people have used with these recurrent neural networks is sequence-to-sequence models, like machine translation, which is where you have two sequences: an input sequence and an output sequence. You want to suck in your input sequence and then spit out your output sequence. And you do this with what's called the encoder-decoder paradigm. You encode your sequence by giving it to your RNN, and that gives you one vector which is an encoding, or compression, of that input. And then you decode your sequence by spitting out your outputs, just like we were talking about before with the language model. More recently, there's these attention-based models, which are very helpful in the case where there's long sequences. So if you look back here, x_1, x_2, x_3 are all getting compressed into a single vector. Well, if you have a really long paragraph, maybe it's hard to shove that into your, you know, 200-dimensional vector. It's hard to capture the depth of all that language with just a bunch of numbers. And so the idea behind attention is to look back. The way attention works, at a very high level, is this: we have our inputs, x_1, x_2, and x_3, and we've run our RNN over these inputs.
And so we have three hidden state vectors, h_1, h_2, and h_3. And now we're decoding, so we have our RNN decoder that has some hidden state; we'll call it s_1. What happens is you compare your current hidden state with all of the states in your encoder, and you compute a number that says, how much do I like this state? So maybe it really, really likes this vector, it's not too happy about this vector, and it doesn't like this vector. Then it uses these scores to turn them into a probability distribution that, again, says how much s_1 likes each of these vectors. And then what you do is compute a weighted average of these hidden vectors, [NOISE] where the weights come from this distribution. And this serves two purposes. There is another way of writing this down on the slides, but I think it serves two purposes. First, it gives you some interpretation: at every time step you can see what parts of the input it is focusing on, what parts of the input have a lot of probability mass on them. And second, it releases the model from the pressure of having to put the entire input sequence into a single vector. Now what it can do is dynamically go back and retrieve the information it needs. And then, more recently, there's what's called transformer models, which do away entirely with the RNN aspect; it's just attention. With a transformer, you have your hidden states, and instead of having some kind of decoder hidden state that you're comparing to the others, you select each hidden state and compare h_1 to all the other h's, including itself, to get a number of how much h_1 likes those other h's.
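The score-softmax-average procedure being described can be sketched like this (dot-product scoring is one common choice; the vectors here are made up):

```python
import numpy as np

def attend(query, states):
    """Score each encoder state against the query, softmax the scores into
    a probability distribution, and return the weighted average of the
    states along with the attention weights themselves."""
    scores = states @ query               # one number per hidden state
    e = np.exp(scores - scores.max())
    weights = e / e.sum()                 # "how much do I like each vector"
    return weights @ states, weights

H = np.array([[1.0, 0.0],                 # h_1
              [0.0, 1.0],                 # h_2
              [1.0, 1.0]])                # h_3
context, w = attend(np.array([2.0, 0.0]), H)
print(w)   # most probability mass on the states aligned with the query
```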
And then you compute your weighted average of all of these hidden states, and that becomes [NOISE] your next layer. So I would recommend taking 224 if you're interested in this topic; transformers are very cool, and they've recently become, I guess you could say, the new LSTM. From these attention distributions you get cool interpretable maps, like in translation. So this is an attention distribution, and it points at words that correspond to each other, so, like, "economic" corresponds to its translation, and you can see that in the distribution. They also do this in computer vision; you can highlight areas of a picture. Yeah, so just to summarize: recurrent neural networks, you can throw a sequence at them and they'll give you a vector. There's this intuition that they process inputs sequentially, kind of like a for loop. But they have a problem with training, where the gradients either blow up or shrink very small. LSTMs are one way of mitigating this problem, but they're not perfect; they still have to shove information into one vector. And the way people get around this is with attention-based models, where you dynamically go back into your input and retrieve the information you need as you need it. [NOISE] So now we're gonna talk about unsupervised learning. Like I said before, we got neural networks to work well recently, and a lot of that is just because they need a lot of data. But if you're a smaller lab, or if you don't have enough money to basically pay for a dataset, or even if it's a hard problem that there just isn't a lot of data for, there are a lot of cases where there isn't enough data to train these very, very large models with millions or billions of parameters. But on the other hand, there's tons of unlabeled data lying around, and you can download the whole Internet if you want.
And there's kind of this real inspiration from us as human beings. Like, we are never given labeled datasets of what foods are edible and what foods are not edible, right? You just absorb experiences from the world and then use that to inform your future experiences, and you're able to reason about it and make decisions. So, okay. The first thing that we're gonna get into is autoencoders. The idea behind autoencoders is that if you have some information and you try to learn a compressed representation of that information that allows you to reconstruct it, then presumably you've done something useful. In neural network speak, the way that works is you give it some kind of a vector, and you pass that through an encoder, which gives you a hidden vector. And then you pass that hidden vector through a decoder, which you use to reconstruct your input. And the implicit loss in most cases is the difference; basically, you want your reconstructed input and the original input to be very similar. So just to motivate this, this isn't deep learning, but principal component analysis could be viewed as one of these encoder-decoders. The idea behind principal component analysis is you want to come up with a matrix U which can be used to both encode and decode a vector. So you multiply x by U to give you a hidden vector, or a new representation of your data. But then if you transpose U and multiply it by your hidden vector, that should give you something as close as possible to your original data. But there's a problem. If we have a hidden vector with a bunch of units, then it's not gonna learn anything, right? It's just gonna learn how to copy inputs into outputs.
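That U / U-transpose encode-decode round trip can be demonstrated with PCA computed from the SVD. The data here is synthetic, with most of its variance in two directions, so a 2-dimensional bottleneck reconstructs it well:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic data that mostly varies along two axes, with a tiny third part.
X = rng.normal(size=(200, 3)) * np.array([5.0, 1.0, 0.1])

# PCA via the SVD: the top rows of Vt are the principal directions.
_, _, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
U = Vt[:2]                  # encoder matrix: 3 -> 2 (the bottleneck)

H = X @ U.T                 # encode:  h     = U x
X_hat = H @ U               # decode:  x_hat = U^T h
err = np.abs(X - X_hat).mean()
print(err)                  # small: the bottleneck keeps most of the signal
```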
And so a lot of research on autoencoding and this kind of unsupervised learning is about how to control the complexity and make the model robust enough to generate useful representations instead of just copying. So a first pass at that can be using nonlinear transformations. You would do something like the logistic, or sigmoid, function, and that means the problem can't be solved anymore by just copying the input into the output, so you're gonna have to actually learn something useful. [NOISE] Another way of doing it is by corrupting the input. So you have your input and you noise it: maybe you drop out some numbers from it, maybe you perturb some numbers of it, maybe you draw from a Gaussian and add that to your input, and then you pass that through. So if your vector is 1, 2, 3, 4, you could drop out 1 and 4 and just set them to 0, or you could slightly perturb these numbers so that they're close to their original values but not the exact same. And then the idea is that after you pass this corrupted input through both your encoder and decoder, the output, the eventual x-hat, should be very close to your original uncorrupted input. [NOISE] Yeah. So another is the variational autoencoder, which has a cool probabilistic interpretation. You could think of it as kind of a Bayesian network. I think maybe this is more useful to look at. So you have an encoder and a decoder, and they are both modeling probability distributions. What this is saying is: I want to encode x into a distribution over h's, and you learn a function which is in charge of doing that. And then you want to specify some conditions. First you say, "Okay, I wanna make x recoverable from my h distribution." And then second, this is a term that prevents h from being degenerate.
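A heavily simplified sketch of that encode-into-a-distribution idea: the weight matrices, the shapes, and the way a positive standard deviation is produced are all made-up assumptions, and the training loss terms are omitted entirely.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_h = 4, 2
W_mu = rng.normal(size=(d_h, d_in))    # hypothetical encoder weights (mean)
W_s = rng.normal(size=(d_h, d_in))     # hypothetical encoder weights (spread)
W_dec = rng.normal(size=(d_in, d_h))   # hypothetical decoder weights

def encode(x):
    # the encoder outputs a *distribution* over h: a mean and a positive std
    return W_mu @ x, np.exp(0.1 * (W_s @ x))

def decode(h):
    return W_dec @ h

x = rng.normal(size=d_in)
mu, sigma = encode(x)
h = mu + sigma * rng.normal(size=d_h)  # sample h from N(mu, sigma^2)
x_hat = decode(h)
print(x_hat.shape)   # → (4,)
```

Because nearby h's sampled from the same distribution must all decode to similar reconstructions, the latent space picks up the smooth structure described above.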
So maybe a good way of thinking about this is: a traditional autoencoder would take my input, send it through some kind of encoder, and map it into a hidden vector, then send that through a decoder to reconstruct my input. Whereas a variational autoencoder is gonna take my input and map it into a distribution over possible h's. And then what I'm gonna do is sample from this distribution, pass that through my decoder, and produce my reconstructed input. The nice thing about this is that since this is a distribution instead of a vector, you've imposed some structure on the space: points that are close together in this space should map to similar x-hats. And similarly, as you move through this space, you should be able to gradually transition from one reconstructed input to a second. So for example, there's these cool experiments in computer vision where they'll say, up here, this is gonna give me, like, a chair, and then down here it's gonna give me, like, a table. And then if I move from one to the other and constantly decode, it'll gradually morph into the table. It's really cool. Okay, and then the last method of unsupervised learning that we're going to talk about is motivated by this task. There's this dataset called SQuAD, which is about 100,000 examples, where each example consists of a paragraph and then a bunch of questions based on that paragraph. The problem here is that there's only 100,000 examples, and really the intelligence that this task is trying to get at is just: can you read a text and understand it? That's more general, and it's captured by more data than just these 100,000 examples. In particular, it's captured by all the text you could possibly read. There are billions of words on Wikipedia, on Google.
You can just crawl the web and download it, and if somehow you could leverage that, maybe it would be helpful for your reading comprehension. And that is just a perfect case of this setting where we have tons of unlabeled data and a very small amount of labeled data. So recently the NLP community has come up with this idea called BERT. Well, it's actually not just BERT; there are a lot of people doing similar things, but BERT is the example we're talking about. With BERT, what you do is take a sentence and mask out some of the tokens in the input, and then you train a model to fill in those tokens. They actually train the model on a bunch of things. So they trained it on token filling, and they also would glue two sentences together and ask the model: would these sentences be adjacent in the text or not? Do they make sense together or not? But the idea is basically: give a bunch of unlabeled text to a model, which is just going to manipulate that data in order to learn structure from it, without any explicit purpose other than just learning the structure. And they trained it on a bunch of data for a long time. And then, so BERT is actually big; we talked about transformers before, and BERT is like a big transformer. They trained this thing on a ton of unstructured text, on just this word-filling task, for a long time. And then what they did was take their pre-trained BERT and start feeding it questions from SQuAD. So they took questions, glued on the paragraph, the context used to answer the question, took whatever vectors come out of BERT, and passed them through a single matrix which basically predicts the answer for that SQuAD question. And it did really well. So this picture is from right when BERT was released.
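The masking objective can be sketched in a few lines. Which tokens get masked here is arbitrary, and real BERT works over subword pieces and masks roughly 15% of them; this just shows how labels come "for free" from raw text:

```python
import random

random.seed(0)
tokens = "the cat sat on the mat".split()

# Mask a couple of tokens; the model's training job is to fill them back in.
masked = list(tokens)
targets = {}
for i in sorted(random.sample(range(len(tokens)), 2)):
    targets[i] = masked[i]     # remember the true token at this position
    masked[i] = "[MASK]"

print(masked)    # the corrupted input the model sees
print(targets)   # the labels it must predict, recovered from raw text alone
```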
And these are all of the state-of-the-art models for SQuAD, and BERT not only beat all the other models by a large margin, but also beat human performance. And I guess the intuition behind BERT was that by doing these seemingly trivial tasks, like word filling and next-sentence prediction, what you end up learning is vectors that say: what is the meaning of a word? What is the meaning of a word in this context? What is the meaning of this sentence? And that meaning isn't operationalized towards solving any task in particular; it's just, in a very general sense: I'm going to imbue this model with an understanding of language, and then once I have an understanding of language, I'm going to apply it to my very targeted downstream task. And that is kind of the principle behind unsupervised learning. You make up these almost trivial prediction tasks just to manipulate data and learn structure from it, to understand language, to understand what a picture is. And then you fine-tune on the very small amount of labeled data that you have. And that's kind of what the current state of the art is in a lot of fields: basically just doing more and more unsupervised pre-training with bigger and bigger models and bigger and bigger data. And the field really hasn't found a limit to this yet, and it'll be interesting to see how far it goes. So I'm going to skip those slides, but just to wrap things up: recently, I guess the biggest things that have gotten neural networks working are, one, better optimization algorithms. We have these adaptive algorithms that are not as, I would say, obtuse as SGD; they don't have to move by the same amount every time.
You have a lot of tricks, like fine-tuning, unsupervised learning, clipping the gradients, batch norm. We have better hardware. We have better data, and that allows us to experiment more and train larger models faster, instead of waiting a long time. But I think one of the problems with the field is that the theory is in a lot of ways lacking, and we don't know exactly why neural networks work well and why they're able to learn good functions despite having a very difficult optimization surface. So just to summarize: we talked about a lot of different building blocks. We talked about how to leverage spatial structure with convolutional neural networks. We talked about how to feed sequences into recurrent neural networks and transformers and LSTMs. We talked about the sequence-to-sequence paradigm for machine translation, and about unsupervised learning methods that help you jumpstart your downstream applications. And I think the big takeaway here is that, in some ways, the big advantage of neural networks is that they are compositional. It's like Legos: they take an input and turn it into a vector, and once you have a vector, you can start combining these things in very flexible ways. And so in a lot of ways, designing these things is a lot like putting together a Lego set. You have your building blocks, an LSTM, attention, encoding, and you can decide how to combine them. It's like, "Oh, I wanna run this LSTM here and this LSTM here. And then I want this one to attend over this one. And then I'm going to concatenate the result with the output of this CNN." And because of, I guess you could say, the magic of backpropagation, you can combine these things.
Um, and I think even more generally, it allows you as a programmer to instead of make a program for solving a problem, it allows you to make this scaffolding, um, that allows a computer to teach itself how to solve the problem. So, um, instead of defining the function, you want the software to learn. You define a very broad family of functions that the software is allowed to learn. And then you let it go and run off and find the best match within that. Uh, yeah. So those are all the things we're talking about today. Uh, but, um, I hope you all have a good Thanksgiving break.
Stanford CS221: Artificial Intelligence Principles and Techniques, Autumn 2019. Machine Learning 1: Linear Classifiers, SGD.

Okay. So let's, uh, get started with the actual, ah, technical content. So remember from last time, we gave an overview of the class. We talked about different types of models that we're gonna explore: reflex models, state-based models, variable based models, and logic models which we'll see throughout the course. But underlying all of this is, is, you know, machine learning. Because machine learning is what allows you to take data and, um, tune the parameters of the model, so you don't have to, ah, work as hard designing the model. Um, so in this lecture, I'm gonna start with the simplest of the models, the reflex based models, um, and show how machine learning can be applied to these type of models. And throughout the class, ah, we're going to talk about different types of models and how learning will help with those as well. So there's gonna be three parts, we're gonna talk about linear predictors, um, which includes classification and regression, um, loss minimization which is basically stating an objective function of how you, ah, want to train your machine learning model, and then stochastic gradient descent, which is an algorithm that allows you to actually, ah, do the work. So let's start with, ah, perhaps the most, um, cliched example of, uh, you know, machine learning. So you have- we wanted to do spam classification. So the input is x, um, an email message. Um, and you wanna know whether an email message is spam or not spam. Um, so we're gonna denote the output of the classifier to be Y which is in this case, either spam or not spam. And our goal is to, ah, produce a predictor F, right? So a predictor in general is going to be a- a function that maps some input x to some output y. In this case, it's gonna take an email message and map it to whether the email message is spam or not. Okay.
So there- there's many types of prediction problems, um, binary classification is the simplest one where the output is one of two, um, possibilities either yes or no. And we're gonna usually denote this as plus 1 or minus 1, sometimes you'll also see 1 and 0. Um, there's regression where you're trying to predict a numerical value, for example, let's say housing price. Um, there's a multi-class classification where Y is, ah, not just two items but possibly, um, 100 items, maybe cat, dog, truck, tree, and different kind of image categories. Um, there's ranking where the output, um, is a permutation of the input, this can be useful. For example, if the input is a set of, um, articles, or products, or webpages, and you want to rank them in some order to show to a user. Um, structured prediction is where Y, ah, the output is an object that is much more complicated. Um, perhaps, it's a whole sentence or even an image. And it's something that you have to kind of construct, you have to build this thing from scratch, it's not just a labeling. Um, and there's many more types of prediction problems. Um, but underlying all of this, you know, whenever someone says I'm gonna do machine learning. The first question you should ask is, okay what's the data? Because without data, there's no learning. So we're gonna call an example. Um, x, y pair is something that specifies what the output should be when the input is x, okay? And a training data or a set of examples, the training set is going to be simply a list or a multiset of, er, examples. So you can think about this as a partial specification of behavior. So remember, we're trying to design a system that has certain- certain types of behaviors, and we're gonna show you examples of what that sum should do. If I have some email message that has CS221 then it's not spam but if it has, um, lots of, ah, dollar signs then there might, um, um, be spam. Um, and, ah- so remember this is not a false specification behavior. 
These, ah, ten examples or even a million examples might not tell you what exactly this function is supposed to do. It's just examples of, ah, what the function could do on those particular examples. Okay. So once you have this data, so we're gonna use D_train to denote, ah, the data set. Remember, it's a set of input output pairs. Um, we're going to, ah, push this into a learning algorithm or a learner. And what is the learning algorithm gonna produce? It's gonna produce a predictor. So predictors are F and the predictor remember is what? It's actually itself a function that, um, takes an input x and maps it to an output y. Okay? So there's kind of two levels here. And you can understand this in terms of the, uh, modeling-inference-learning paradigm. So modeling is about the question of what types of predictors you should consider. Ah, inference is about how do you compute y given x? And learning is about how you take data and produce a predictor so that you can do inference. Okay. Any questions about this so far? [NOISE] So this is pretty high level and abstract and generic right now, and this is kinda, kind of on purpose because I wanna highlight how, um, general machine learning is before going into the specifics of, uh, linear predictors, right? So this is an abstract framework. Okay. So let's dig in a little bit to this actual, um, an actual problem. Um, so just to simplify, ah, the email problem, let's, eh, consider the task of, um, predicting whether a string is an email address or not. Okay. Um, so the input is an em-, ah, is a string and, ah, the output is- it's a binary classification problem, it's either 1 if it's an email or minus 1 if it's not, that's what you want. Um, um, so the first step of, um, doing linear prediction is, um, known as feature extraction. And the question you should ask yourself is, what properties of the input x might be relevant for predicting the output y?
Right, so I say, I really highlighted might be, right? At this point, you're not trying to encode the actual set of rules that solves a problem, that would involve no learning, and that would just be trying to do it directly. But instead of- for learning you're kind of taking a, um, you know, a more of a backseat and you're saying, "Well, here are some hints that could help you." Okay. Ah, so formally, a feature extractor takes an input and outputs a set of feature name, feature value pairs, right? So I'll go through an example here. So if I have abc@gmail.com, what are the properties that might be useful for determining whether a string is an email address or not? Well, you might consider the length of the string, if it's greater than 10, maybe long strings are less likely to be email addresses than shorter ones. Um, and here, the feature name is length greater than 10. So that's just kind of a label of that feature, and the value of that feature is 1, ah, representing it's true. So it will be 0, if it's false. Here's another feature, the fraction of alphanumeric characters, right? So that happens to be 0.85 which is the number. Um, there might be features that test for particular, um, you know, characters, for example, whether it contains an "@" sign, which has a, you know, feature value of 1 because there is an "@" sign; endsWith.com is 1, endsWith.org is 0 because that's not true. So, um, and there you could have many, many more features, ah, and we'll talk more about features next time. But the point is that you have a set of properties, you're kind of distilling down this input, which could be a string, or could be an image, or could be something more complicated, into kind of a um, you know, ground-up fashion that later, we'll see how a machine learning algorithm can take advantage of. Okay. So you have this, ah, feature vector which has- is a list of feature values and their associated names or labels. Okay.
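As a quick concrete check, the feature extraction just described can be written in a few lines of Python. This is a minimal sketch; the function name `feature_extractor` and the exact feature set are illustrative choices mirroring the lecture's example, not code from the course.

```python
# A minimal sketch of the feature extractor described above.
# The feature names mirror the lecture's example (length > 10, fraction of
# alphanumeric characters, contains "@", endsWith .com / .org).

def feature_extractor(x):
    """Map a string x to {feature name: feature value} pairs."""
    return {
        "length>10":     1 if len(x) > 10 else 0,
        "fracOfAlpha":   sum(c.isalnum() for c in x) / len(x),
        "contains_@":    1 if "@" in x else 0,
        "endsWith_.com": 1 if x.endswith(".com") else 0,
        "endsWith_.org": 1 if x.endswith(".org") else 0,
    }

phi = feature_extractor("abc@gmail.com")
print(phi)  # fracOfAlpha is 11/13, about 0.85, matching the lecture
```

Running it on abc@gmail.com reproduces the values from the lecture: [1, 0.85, 1, 1, 0].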
But later, we'll see that the- the names don't matter to the learning algorithm. So actually, what you should also think about the feature vector is simply a list of numbers, and just kind of on the side make a note that all this, you know. position number three corresponds to contains "@" and so on. Right, so I've distilled the- the email address abc@gmail.com into the list of numbers 0- or 1, 0.85, 1, 1, 0. Okay. So that's feature extraction. It's kind of distilling complex objects into lists of numbers which we'll see is what the kind of the lingua franca of these machine learning algorithms is. Okay. So I'm gonna write some concepts on a board. There's gonna be a bunch of, um, concepts I'm going to introduce, and I'll just keep them up on the board for reference. So feature vector is again an important notion and it's denoted Phi, um, of x on input. So Phi itself- sometimes, you think about it, er, you call it the feature map which takes an input and returns, um, a vector, and this notation means that returns in general, ah, d-dimensional vector, so a list of d numbers. And, um, the components of this feature vector we can write down as Phi_1, Phi_2, all the way to Phi_d of x. Okay. So this notation is, eh, you know, convenient, um, because we're gonna start shifting our focus from thinking about the features as properties of input to features as kind of mathematical objects. So in particular, Phi of x is a point in a high-dimensional space. So if you had two features, that would be a point in two-dimensional space, but in general, you might have a million features, so that's a feature, ah, it's a point enough, a hundred- ah, uh, million dimensional space. So, you know, it might be hard to think about that space, but well, we'll see how we can, you know, deal with that in a later in a, in a bit. Okay. So- so that's a feature vector, you take an input and return a list of numbers. Okay. Um, and now, the second piece is a weight vector. 
So let me write down a weight vector. [NOISE] So a weight vector is going to be noted W. Um, and this is also, uh, a list of D numbers. It's a point in a D-dimensional space but we're gonna interpret it differently, as we'll see later. Okay. So- so a way to think about a weight vector is that, for each feature J. So for example, frac of Alpha, um, we're gonna have a real number WJ, that represents the contribution of that feature to the prediction. So this contribution is 0.6. So what does this 0.6 mean? So, so the way to think about this is that you have your weight vector and you have a feature vector of a particular input, and you want- the score of, uh, your prediction is going to be, uh, the dot product between the weight vector and the feature vector. Okay. So um, that's written W dot a phi of X um, which is um, written out as basically, looking at all the features and multiplying the feature of the value times the weight of that feature and summing up all those numbers. So for this example, it will be minus 1.2, that's the weight of the first feature, times 1, that's the feature value, plus 0.6 times 0.85 and so on. And then, you get this number of 4.51 which is- happens to be the score for this example. Question? So the feature extraction which is phi of X, is that, uh, supposed to be like an automated process or is it a part of manual extraction classification procedures? Yeah. So the question is, is the feature extraction manual or automatic? So uh, phi is going to be implemented as a function like encode, right. Um, you're going to write this function manually. But you know, the function itself is run automatically on examples. Um, later we'll see how you can actually learn features as well. So you can slowly start to do less of a manual effort but uh, we're going to hold off until, next time for that. Question? 
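The score computation just described is a plain dot product over the named features. In this sketch, the weights -1.2, 0.6, and 3 appear in the lecture; the endsWith weights 2.2 and 1.4 are hypothetical fill-ins (2.2 is chosen so the sum reproduces the 4.51 score mentioned above, and the endsWith_.org weight never enters this score since that feature value is 0).

```python
# Score = w . phi(x): multiply each feature value by its weight and sum.
# Weights -1.2, 0.6, 3 are from the lecture; 2.2 and 1.4 are hypothetical
# fill-ins (2.2 chosen so the total matches the lecture's 4.51).

def score(w, phi):
    return sum(w[name] * value for name, value in phi.items())

w   = {"length>10": -1.2, "fracOfAlpha": 0.6, "contains_@": 3.0,
       "endsWith_.com": 2.2, "endsWith_.org": 1.4}
phi = {"length>10": 1, "fracOfAlpha": 0.85, "contains_@": 1,
       "endsWith_.com": 1, "endsWith_.org": 0}

s = score(w, phi)
print(round(s, 2))  # -> 4.51
```

Note that, as the lecture says, the names are only labels: the learning algorithm only ever sees the aligned lists of numbers.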
So we're talking about weight gaining and I know that in certain tests of regressions like, uh, the weights being, uh, a percentage change, [inaudible] weights to percentage change of the outcome it doesn't, it doesn't mean the sphere? Yeah. So the question is about interpretation of weights. Sometimes weights can have a more precise meaning. In general, um, you can, you can try to read the tea leaves but it I don't think there is maybe, uh, in general a mathematically precise thing you can say about the meaning of individual weights. But intuitively, and the intuition is important, is that you should think about each feature as you know, a little person that's going to make a vote on this prediction, right? So you're voting either plus, yay or nay? And the weight of a particular feature is- specifies both the direction level whether- if positive weight means that, um, that little person, um, is voting positive and negative weight means, that it's voting negative. The magnitude of that weight, is how strongly that little person feels about the prediction, right? So, you know, contains add as three because maybe like "@" signs generally do occur in email addresses but you know the fraction of alphanumeric characters, it's you know, less. So at that level, you can have some intuition but the precise numbers and y is 0.6 versus 0.5. Um, that's, um, you can't really say much about that. Yeah. Another question? Does, uh, [inaudible] [NOISE] is it the same dot product for deeper networks. They can feel like more weight vectors afterwards. It's still like it's, like just more than one products. [NOISE] So right now we're focusing on linear classifier. So the question is what happens if you have a neural net with more layers? Um, there's gonna be more dot products but there's also goin- it's not just adding more features. There's gonna be other uh, components which we'll get to in a later lecture. Yeah? 
Do the weights have to add up to a certain number or how do you normalize it, so the weights, like you have to change the score value [inaudible] . Yeah. So the question is, do the weights have to add up to something? Short answer is. No. There's obviously restricted settings, where you might want to normalize the weights or something but we're not gonna, you know, uh, consider that right now. Later, we'll see that the magnitude of weight does tell you, you know, something. Okay, so, so just to summarize it's important to note that the weight vectors, there's only one weight vector, right, you have to find one set of parameters for every- everybody. But the feature vector is per example. So for every input, you get a new feature vector and the dot product of those two weighted combination of features is the uh, is the score. Okay, so, so now let's try to put the pieces together and define, um, uh, of the actual predictor. All right, so remember we had this box with f in it, which takes x and returns y. So what is inside that box? Um, and I'm hopefully giving you some intuition. Let me go to a board and write, uh, a few more things. So the score, uh, remember is w dot phi of x. And this is just gonna be a number, um, and uh, the predictor. So linear predictor actually let me call this linear. To be more precise, it's a linear classifier not just a predictor. Classifier is just a predictor that does classification. Um, so a linear classifier um, denoted f of w. So f is where we're going to use, you know, predictors. W just means that this predictor depends on a particular set of weights. And this predictor is, uh, going to look at the score and return the sign of that score. So what is the sign? The sign looks at the score and says, is it a positive of a number? if it's positive then we're gonna return plus 1. If it's a negative number, I'm gonna return minus 1. And if it's 0 then you know, I don't care. You can return plus 1 if you want, it doesn't matter. 
Um, so what this is doing the remember the score is either, is a real number. So it's either gonna be kind of leaning towards um, you know, large value, large positive values or leaning towards, uh, s- large small- negative values. And the sign basically says, okay you gotta commit are you- which side are you on? Are you on the positive side or you on the negative side? And just kind of discretizes it. That's what the sign does. Okay. Okay, so, so let's look at a simple example because I think a lot of what I've seen before is kind of more the, uh, formal machinery behind and the math behind how it works but it's really useful to have some geometric intuition because then you can draw some pictures. Okay, so let's consider this, uh, case. So we have a weight vector which is 2, minus 1, and a feature vector which is 2, 0, and other feature vectors which are 0, 2 and 2, 4. Okay. So there's only two dimensions so I can try to draw them on a board. So let's try to do that. Okay, so here is a two-dimensional plot. Um, and let's draw the fea- the weight vector first. Okay so the weight vector is going to be at 2, minus 1. Okay. So that's this point. And the way to think about the weight vector is not the point. Um, but actually um, the, the, the vector going from the origin to that point for reasons that will become clear later. Okay so that's the, that's the weight. Okay. Um and then what about the other points so we have 2, 0, 0, 2. So 2, 0 is here, 0, 2 is here and 2, 4 is, uh, here. Right? Okay, so we have three points here. Okay, so, um, how do I think about what this weight vector is, is doing? So just for just for reference remember the classifier is looking at the sign of W dot, uh, phi of x. Okay. Um, so let's try to do uh, classification on these three points. Okay so w is um, let me write it out formally, so 2, 1. Um, and this is 0, 2. So what's the score when I do W dot phi of x here? It's 4, right? Because this is um, uh, 2, 0, 0, 2 um, 2, 4.
So this is just a dot product that's 4, um, and take the sign what's the sign of 4? One. Okay. So that means I'm going to label this point as a positive, right? Positive point, okay what about 0, 2? Actually, sorry, this should just be a minus 1, right? Okay. This is 2, minus 1. Okay, so if I take the dot product between this, I get minus 2 and then the sign of minus 2 is, is minus 1, okay, so that's a minus. Um, and what about this one? So what's the dot product there? It's gonna be 0. Okay. So, um, so this classifier will classify this point as a positive. This is a negative and this one I don't know. Okay. So we can fill in more points. Um, but, but, you know, does anyone see kind of um, maybe a more general pattern? I don't wanna have to fill in the entire board with classifications. Yeah? Orthogonal, everything to the right of it is positive and to the left of it is negative. Yeah so so let's try to draw the orthogonal. Uh, this needs to go through that line. Okay, [NOISE] okay, so let's draw the orthogonal. So this is a right angle. Okay. And, ah, what that gentleman said is that, the points- any point over here, because it has an acute angle with w, is going to be classified as positive. So all of this stuff is um, you know, positive, positive, positive, positive, positive, and everything over here because it's an obtuse angle with w is going to be negative, so everything over here is negative. And then, everything on this line is going to be 0. Okay? So, so I don't know. Okay, and this line is called, um, the decision boundary, which is a concept not just for linear classifiers, but whenever you have any sort of classifier the decision boundary is the separation between the regions of the space where the classification is positive versus negative. Okay? And in this case, um, it's, it's separate because uh, we have linear classifiers, the decision boundary is straight, and we're just separating the, the space into two halves.
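The three classifications worked out on the board can be verified directly. A minimal sketch, assuming illustrative function names (`dot`, `predict`); treating a score of exactly 0 as minus 1 is one arbitrary convention, since the lecture notes the boundary case doesn't matter.

```python
# The linear classifier f_w(x) = sign(w . phi(x)) on the board example.
# Convention: score exactly 0 maps to -1 (arbitrary choice, per the lecture).

def dot(w, phi):
    return sum(wi * vi for wi, vi in zip(w, phi))

def predict(w, phi):
    return 1 if dot(w, phi) > 0 else -1

w = (2, -1)
for phi in [(2, 0), (0, 2), (2, 4)]:
    print(phi, dot(w, phi), predict(w, phi))
# (2, 0): score  4 -> positive
# (0, 2): score -2 -> negative
# (2, 4): score  0 -> on the decision boundary
```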
Um, if you were in three-dimensions, um, this vector would still be just a you know vector, but this decision, um, boundary would be a plane. So you can think about it as you know coming out of the board if you want, but I'm not gonna try to draw that. Um, and that's, that's kind of the geometric interpretation of how linear classifiers, ah, you know, work here. Question, yeah? It seems like your weight could be any values here. Right? Yeah. So we have one last [inaudible]. Yeah. [inaudible] . Yeah. So that's a good point. So the, the observation is that, no matter, if you scale this weight by 2, it's actually gonna still have the same decision boundaries. So the magnitude of the weight doesn't matter it's the direction that matters. Um, so this is true for just making a prediction. Um, when we look at learning, ah, the magnitude of the weight will matter because we're going to, you know, consider other more nuanced loss functions. Yeah. Okay. So let's move on. Any questions about linear predictors? So, so, far what we've done is, we haven't done any learning. Right. If you've ah, you know, noticed, we've just simply defined the set of predictors that we're interested in. So we have a feature vector, we have weight vectors, multiply them together, get a score and then you can send them through a sign function and you get these linear classifiers. Right. There, there's no specification of data yet. Okay. So now, let's actually turn to do some learning. So remember this framework, learning needs to take some data and return a predictor and our predictors are ah, specified by a weight vector. So you can equivalently think about the learning algorithm as outputting a weight vector if you want for linear classifiers. Um, and let's unpack the learner. So the learning algorithm is going to be based on optimization which we started ah, reviewing last lecture um, which separates ah, what you want to compute from how you want to compute it. 
So we're going to first define an optimization problem which specifies what properties we want a- a classifier to have in terms of the data, and then we're going to figure out how to actually optimize this. [NOISE] And this modularity is actually really, really powerful um, and it allows people to go ahead and work on different types of criteria and different types of models separately from the people who actually develop general purpose algorithms. Um, and this has served kind of the field of machine learning quite well. Okay. So let's start with an optimization problem. So this is an important concept um, called a loss function and this is a super general idea that's used in machine learning and statistics. So a loss function takes a particular example x, y and a weight vector, um, and returns a number and this number represents how unhappy we would be if we used the predictor given by W to make a prediction on x when the correct output is y. Okay. So it's a little bit of a mouthful but, um, this basically is trying to characterize, you know, if you handed me a classifier, and I go on to this example and try to classify it, is it gonna get it right or is it gonna get it wrong? So high loss is bad ah, you don't wanna lose and low loss is good. So normally, zero loss is the- the best you can then hope for. Okay. So let's figure out the loss function for binary classification here. Um, so just some notation, the correct label is, ah, denoted y and, um, the predicted label remember is um, the score, ah, sent through the sign function and that's going to give you some particular label. Um, and let's look at this example. So w equals 2, minus 1, phi of x equals ah, 2, 0 and y equals minus 1. Okay. So we already defined the score as, um, one example is a w dot phi of x which is, um, how co- confident we're predicting minu- plus 1. That's the way to, uh, you know, interpret this. Okay. So um, what's the score of this, for this particular example again? It's 4. Right.
Um, which means I'm kind of, kinda positive that it's ah, you know, a plus 1. Yeah. Question? Ah, I was wondering, is the loss function generally 1-dimensional or, or the output of the loss function? Yeah. So the- the question is whether the output of loss function is usually a single number or not. Um, in most cases it is for basically all practical cases you should think about the loss functions outputting a single number. The inputs can be, you know, a crazy high-dimensional. Yeah. Why is it not 1-dimension? [NOISE] Um, there are cases where you might have multiple objectives that you're trying to optimize at once ah, but in this class it's always gonna be, you know, 1-dimensional. Like maybe you care about, you know, both time and space or accuracy but robustness or something. Sometimes you have multi-objective optimization. But that's way beyond the scope of this class. Okay. So we have a score. Um, and now we're gonna define a margin. So let me, um. Okay. So let's, let's actually do this. So we're talking about classification. I'm gonna sneak regression in a bit. So score is w dot phi of x. This is how confident we are about plus 1, um, and the margin is the score ah, times y. Um, and this relies on y being plus 1 or minus 1. So this might seem a little bit mysterious but let's try to, you know, decipher that, um here. Um, so in this example, the score is 4. So what's the margin? You multiply by minus 1. So the margin is, ah, minus 4. Right. And the margins interpretation is how correct we are. Right. So imagine the correct answer is ah, if, if the score in the margin had the same sign, then you're gonna get positive numbers and then the, the confident, the more confident you are then the more correct you are. Um, but if y is minus 1 and the score is positive, then the margin is gonna be negative which means that you're gonna be confidently wrong um, which is bad. [LAUGHTER] Okay. So just to to see if we kind of understand what's going on. 
Um, so when is a binary classifier making a mistake on a given example. Um, so I'm gonna ask for a kind of a show of hands. How many people think it's, it's when the margin is, uh, less than 0. Okay. I guess we can kind of stop there. [LAUGHTER] I used to do these online quizzes where it was anonymous but we're not doing that this year. Okay. So yes, the margin is less than 0. Um, when the margin is less than 0 that means y and the score are different signs which means that you're making a mistake. [NOISE] Okay. So now we have the notion of a margin. Let's define ah, something called the zero-one loss and it's called zero-one because it returns either a 0 or a 1. Okay. Very creative name. Um, so the loss function is simply, did you make a mistake or not? Okay. So this notation let's try to decipher a bit. So if f of x here is the prediction when the input is x, um, and not equal y is saying, did you make a mistake? So that's, think about it as a Boolean, and this one bracket is um, just notation. It's called an indicator function that takes a condition and returns either a 1 or 0. So if ah, if the, the condition is true, then it's gonna return a 1 and if the condition is false, it returns a 0. Okay. So all this is doing is basically returning a 1, if you made a mistake and 0, if you didn't make a mistake. Okay. And we can write that as follows. We can write that as um, the margin less or equal to 0. Right. Because pre- on the previous side of the margin is less than or equal to 0, then we've made a mistake and we should incur ah, a loss of 1 and if the margin is greater than 0, then we didn't make a mistake and we should incur a loss of 0. Okay. All right so, um, it will be useful to draw these loss functions, um, pictorially like this. Okay, so on the axi- x-axis here, we're going to show the margin, right? Remember the margin is how, uh, correct you are. And on the, uh, y-axis we're gonna show the- the loss function which is how much you're gonna suffer for it. 
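The margin and zero-one loss just defined can be sketched in a few lines; the function names here are illustrative, not the course's code.

```python
# Margin = (w . phi(x)) * y, and zero-one loss = 1[margin <= 0],
# exactly as defined above.

def margin(w, phi, y):
    return sum(wi * vi for wi, vi in zip(w, phi)) * y

def zero_one_loss(w, phi, y):
    return 1 if margin(w, phi, y) <= 0 else 0

# The running example: score 4 but y = -1, so confidently wrong.
print(margin((2, -1), (2, 0), -1))         # -4
print(zero_one_loss((2, -1), (2, 0), -1))  # 1 (a mistake)
print(zero_one_loss((2, -1), (2, 0), +1))  # 0 (correct)
```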
Okay, so remember the margin, if the margin is positive, that means you're getting it right which means that the loss is 0. But if the margin is less than 0, that means you are getting it wrong and the loss is 1. Okay, so this is a 0-1 loss. That's, uh, thi- this thing- the visual that you should have in mind when you think about zero-one loss. Yeah. [NOISE] Like less than 0 because we are not defining the event actually 0 [inaudible] classified as correct. Yeah, so there is this kind of boundary condition of when ex- what happens exactly at 0 that I'm trying to sweep under the rug because it's not, um, terribly important. Um, here, it's less than or equal to 0 to be kind of on the safe side. So if you don't know you're also, uh, gonna get it wrong. Um, otherwise you could always just return 0 and then you, that, you don't want that. Okay. So is it- uh, any questions about, uh, kind of binary classification so far? So we've set up these linear predictors and I've defined the 0-1 loss as a way to capture, um, how unhappy we would be if we had a classifier that was, ah, operating on a particular data point x, y. So, um, just to- I'm gonna go on a little bit of a digression and talk about linear regression. Um, uh, um, [NOISE] and, and the reason I'm doing this is that loss minimization is such a powerful and general framework, and it go- transcends, you know, all of these, you know, linear classifiers, regression, setups. So I want to kind of emphasize over- the overall story. So I'm gonna give you a bunch of different examples, um, classification, linear regression side-by-side so we can actually see how they compare and hopefully, their- the common denominator will kind of emerge more, um, clearly from that. Okay, so we talked a little bit about linear regression in the last lecture, right? So linear regression in some sense is simpler than classification because if you have a linear, uh, uh, predictor, um, and you get the score w dot phi of x, it's already a real number.
So in linear regression, you simply return that real number and you call that your prediction. Okay? Okay so now we- let's move towards defining our loss function. Um, so there's gonna be, uh, a concept that's gonna be useful, it's called the residual, um, which is, again, kind of trying to capture how, uh, wrong you are. Um, so here is a particular linear, uh, predictor, um, linear regressor, um, and it's making predictions all along, you know, for different values of x. Um, and here's a data point phi of x, y. Okay? So the residual is the difference between, um, the predicted value and the true value y. Okay, um, and in particular it's the amount by which, um, the prediction is overshooting the, you know, target. Okay, so this is- this is a difference. Um, and if you square the [NOISE] difference you get something called, uh, the squared loss. [NOISE] So this is something we mentioned last lecture. Um, residual can be either negative or [NOISE] positive. Um, but errors, either, if you're very positive or very negative, that's bad and squaring them makes it so that you're gonna, you know, suffer equally for, um, errors in both, you know, directions. Okay, so the square loss is the residual squared. So let's do this kind of simple example. So here we have our weight vector 2, minus 1. The feature vector is 2, 0. What's the score? It's 4, y is minus 1. So, uh, the residual is 4 minus minus 1 which is 5 and, uh, 5 squared is 25. So the squared loss on this particular example is 25. Okay, so let's plot this. So just like we did it for a 0-1 loss. Let's see what this loss function looks like. So the, the horizontal axis here instead of being the margin is going to be this quantity, uh, for regression called the residual. Um, it's going to be the difference between the prediction and the, the true target. And I'm gonna plot the loss function. Um, and this loss function is just, you know, the squared function, right?
So if the residual is 0, then the loss is 0, and as the residual grows in either direction, I'm going to pay something for it. It's a quadratic penalty, which means it grows pretty fast: if the residual is 10, I'm paying 100. Okay, so that's the squared loss. There's also another loss I'll throw in here, called the absolute deviation loss, and this might be the loss you'd immediately come up with if you didn't know about regression. It's basically the absolute difference between the prediction and the true target. There's a longer discussion about which loss function makes sense. The salient points are that the absolute deviation loss has this kink here, so it's not smooth, which sometimes makes it harder to optimize; the squared loss, on the other hand, blows up, which means it really doesn't like outliers or very large residuals, because you're going to pay a lot for them. But at this level, just think of these as different losses. There's also something called the Huber loss, which combines both: it's smooth and grows linearly instead of quadratically. Okay, so we have both classification and regression; we can define margins and residuals and get different loss functions out of them. And now we want to minimize the loss. It turns out that for one example this is really easy: if I asked you how to minimize the loss here — well, okay, it's 0, done. That's not super interesting, and it corresponds to the fact that if you have a classifier and you're just trying to fit one point, it's really not that hard. So that's kind of not the point.
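The three regression losses just mentioned can be written side by side as functions of the residual (a sketch; the Huber threshold `delta` is my choice — the standard definition is quadratic near zero and linear in the tails):

```python
def squared(r):
    # quadratic penalty: blows up for large residuals (sensitive to outliers)
    return r * r

def abs_dev_loss(r):
    # absolute deviation: linear, but has a kink (not smooth) at r = 0
    return abs(r)

def huber(r, delta=1.0):
    # Huber loss: quadratic for |r| <= delta, linear beyond -- smooth AND
    # grows linearly instead of quadratically in the tails
    return 0.5 * r * r if abs(r) <= delta else delta * (abs(r) - 0.5 * delta)
```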
The point of machine learning is that you have to fit all of them. Remember, you only get one weight vector, you have all of these examples — maybe a million — and you want to find one weight vector that balances the errors across all of them. And in general you might not be able to achieve a loss of 0. So tough luck, life is hard. You have to make trade-offs: which examples are you going to sacrifice for the good of the others? This is actually where a lot of the issues around fairness in machine learning come in, because in cases where you can't make a prediction that's equally good for everyone, how do you responsibly make these trade-offs? But that's a broader topic. Let's just focus on the trade-off defined by a simple sum over the losses on all the examples: we want to minimize the average loss over all the examples. Okay, so once we have these loss functions, if you average over the training set, you get something we're going to call the training loss, and that's a function of w. The loss is on a particular example; the training loss is on the entire dataset. Any questions about this so far? Okay. There's a discussion about which regression loss to use, which I'm going to skip — feel free to read it in the notes if you're interested. The punchline: if you want things that look like the mean, use the squared loss; if you want things that look like the median, use the absolute deviation loss. But I'll skip that for now. Yeah? [Student: [inaudible] When did people start thinking of regression in terms of loss minimization?] Yeah. So regression — least squares regression — is from the early 1800s.
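The training loss described above — the average of the per-example losses — can be sketched as (helper names mine):

```python
def dot(w, phi):
    return sum(wi * pi for wi, pi in zip(w, phi))

def squared_loss(w, phi, y):
    return (dot(w, phi) - y) ** 2

def train_loss(w, examples, loss=squared_loss):
    # TrainLoss(w): average of the per-example losses over the whole dataset;
    # a function of w, whereas `loss` is on a single example (phi, y)
    return sum(loss(w, phi, y) for phi, y in examples) / len(examples)
```

For instance, with w = (2.0) and two examples (φ = (1.0), y = 1.0) and (φ = (1.0), y = 3.0), each squared loss is 1, so the training loss is 1.0.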
So it's been around — you could call it the first machine learning that was ever done, if you want. The loss minimization framework itself is hard to pinpoint to a particular point in time; it's not really an innovation in that sense. At least right now it's more of a pedagogical tool to organize all the different methods that exist. Yeah. [Student: Say I'm training with the mean versus the median — do you mean that on a particular training set, [inaudible] the absolute deviation would give the median instead of the mean?] Yeah. So I don't want to get into these examples, but briefly: if you have three points that you can't fit exactly, and you use the absolute deviation loss, you're going to predict the median value; if you use the squared loss, you're going to predict the mean value. But I'm happy to talk offline if you want. Okay. So what we've talked about so far: we have these wonderful linear predictors, which are driven by feature vectors and weight vectors, and we can define a bunch of different loss functions that capture what we care about in regression and classification. Now let's try to actually do some real machine learning: how do you actually optimize these objectives? So we've talked about the optimization problem, which is minimizing the training loss — we'll come back to that next lecture — and now we're going to talk about the optimization algorithm. Okay? So what does the optimization problem look like? Remember, last time we said: let's just abstract away from the details a little bit.
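The three-point example from the answer above can be checked numerically with a quick grid search (a sketch; the target values and the grid are my choices, not from the lecture):

```python
# Three targets you can't fit exactly with one constant prediction p.
ys = [1.0, 2.0, 9.0]

def total(loss, p):
    # total loss over the three points for a constant prediction p
    return sum(loss(p - y) for y in ys)

grid = [i / 100.0 for i in range(0, 1001)]  # candidate predictions 0.00 .. 10.00
best_sq = min(grid, key=lambda p: total(lambda r: r * r, p))  # squared loss
best_abs = min(grid, key=lambda p: total(abs, p))             # absolute deviation
# best_sq comes out at the mean (4.0); best_abs at the median (2.0)
```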
Let's not worry about whether it's the squared loss or some other loss; let's just think of it as an abstract function. In one dimension, the training loss might look something like this: you have a single weight, and for each weight you have a number, which is your loss on your training examples. And you want to find this lowest point. In two dimensions it looks something like this. Let me actually draw it, because I think it'll be useful in a bit — let me pull this up. Okay. So in two dimensions, what optimization looks like is as follows. I'm now plotting w1 and w2, the two components of this two-dimensional weight vector. Every point is a weight vector, and its value is going to be the training loss. And it's pretty standard in these settings to draw what are called level curves. Each curve here is a ring of points where the function value is identical — if you've looked at terrain maps, those are level curves, so you know what I'm talking about. This is the minimum, and as you grow outward, the values get larger and larger. I'll keep doing this for a little bit. Okay. And the goal is to find the minimum. All right, so how are we going to do this? Yeah, question? [Student: You're assuming there's a single minimum?] Yeah, why am I assuming there's a single minimum? In general, for arbitrary loss functions, there is not necessarily a single minimum; I'm just doing this for simplicity. It turns out to be true for many of these linear classifiers. Okay. So last time we talked about gradient descent, right? And the idea behind gradient descent is: well, I don't know where the minimum is, so let's just start at 0 — as good a place as any.
And at 0, I'm going to compute the gradient. The gradient is the vector that's perpendicular to the level curves; it's going to point in this direction. It says: this is the direction in which the function is increasing most dramatically. Gradient descent goes in the opposite direction, because remember, we want to minimize the loss. So I'm going to go here, and hopefully I reduce my function value — not necessarily, but we hope that's the case. Now we compute the gradient again; maybe it's pointing this way, so I go in that direction, and maybe now it's pointing this way, and I keep going. This is a little bit made up, but hopefully, eventually, I get to the minimum. I'm simplifying things quite a bit here — there's a whole field of optimization that studies exactly what kinds of functions you can optimize, and when gradient descent works and when it doesn't. I'm just going to go through the mechanics now and defer the formal analysis of when this actually works until later. Okay, so that's the schema of how gradient descent works. In code it looks like this: initialize at 0, and then loop for some number of iterations — for simplicity, just think of it as a fixed number. In each iteration, I take my weights, compute the gradient, and move in the opposite direction, scaled by a step size that tells me how fast I want to make progress. Okay? We'll come back to what the step size does later. Okay. So let's specialize this to least squares regression. We kind of did this last week, but just to review: the training loss for least squares regression is this.
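The gradient descent loop just described can be sketched as follows (a sketch of the algorithm on the slide; the default iteration count and step size are values I picked):

```python
def gradient_descent(F, dF, d, iters=200, eta=0.1):
    """Minimize F (with gradient dF) over a d-dimensional weight vector.
    F can be used to monitor progress; only dF drives the updates."""
    w = [0.0] * d                                    # initialize at zero
    for t in range(iters):
        g = dF(w)
        w = [wi - eta * gi for wi, gi in zip(w, g)]  # move opposite the gradient
    return w
```

For example, minimizing F(w) = (w − 3)² with dF(w) = 2(w − 3) converges to w ≈ 3.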
So remember, it's an average over the losses on individual examples, and the loss on a particular example is the residual squared — that's this expression. Then all we have to do is compute the gradient, and if you remember your calculus, it's just the chain rule: the 2 comes down, you have the residual times the derivative of what's inside, and the gradient of the score with respect to w is φ(x). Okay. Last time we did this in Python in one dimension, and hopefully all of you feel comfortable with that, because it's just basic calculus. Here w is a vector, so we're not taking derivatives but gradients. There are some things to be wary of, but it's often useful to double-check that the gradient version actually matches the single-dimensional version — last time, remember, we had the x out here. And one thing to note: there's a prediction minus target, and that's the residual, so the gradient is driven by this quantity. If the prediction equals the target, what's the gradient? It's going to be 0, which is what you want: if you're already getting the answer correct, then you shouldn't move your weights, right? So often we can do things in the abstract and everything will work, but it's a good idea to write down some objective functions, take the gradient, and check whether gradient descent using the gradients you computed is a sensible thing, because there are many levels at which you can understand and get intuition for this stuff — at the abstract optimization level, or at the algorithmic level: you pick up an example — is it sensible to update, except when the prediction equals the target?
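The least squares gradient derived above — 2 times the residual times φ(x) — can be sketched as (helper names mine):

```python
def dot(w, phi):
    return sum(wi * pi for wi, pi in zip(w, phi))

def grad_squared_loss(w, phi, y):
    r = dot(w, phi) - y                # the residual (prediction minus target)
    return [2 * r * p for p in phi]    # chain rule: 2 * residual * phi(x)
```

Note the sanity check from the lecture: when the prediction equals the target, the residual is 0, so the gradient is the zero vector and the weights don't move.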
Okay, so let's take the code we have from last time; I'm going to expand on it a little bit and hopefully set the stage for stochastic gradient descent. So last time we had gradient descent: we defined a set of points, we defined the function — the training loss here — we defined the derivative of the function, and then we ran gradient descent. I'm going to do a little bit of housecleaning, so don't mind me. I'm going to make the algorithm's interface a little more explicit: gradient descent depends on a function, the derivative of that function, and, let's say, the dimensionality, so I can call it with F, dF, and d, where in this case d equals 2. I want to separate the algorithm from the modeling: this is what we want to compute, and this is how we compute it. And this code should still work. All right, so what I'm going to do now is upgrade this to vectors. Remember, the x here was just a number, but we want to support vectors. So in Python we're going to import NumPy, which is a nice vector and matrix library, and I'm going to make some arrays here — this one is just a one-dimensional array, so it's not that exciting. So the w dot x becomes an actual dot call I need to make, and I think w needs to be np.zeros(d). Okay — sorry, this is 1-dimensional — that should still run. So remember, last time we ran this program: it starts out with some weights, converges to 0.8, and the function value keeps going down. Okay.
All right — it's really hard to see whether this algorithm is doing anything interesting, because we only have two points; it's kind of trivial. So how do we get a test case to see whether the algorithm is working, especially since I'm also going to implement stochastic gradient descent? There's a technique I really like, which is to generate artificial data. The idea is: what is learning? Learning is taking a dataset and finding the weights that best fit that dataset. But in general, if I download some dataset, I have no idea what the, quote-unquote, right answer is. So the technique is to go backwards and decide what the right answer is. Let's say the right answer is (1, 2, 3, 4, 5) — a 5-dimensional problem. And I'm going to generate data based on that, so that this weight vector is good for that data. I'm going to skip all my breaks in this lecture. So let's generate a bunch of points — say 10,000 points. The nice thing about artificial data is you can generate as much as you want. There's a question, yeah? [Student: What is true_w?] true_w just means the correct, ground-truth w. [Student: The true output?] So w is a weight vector. This is going backwards: I want to fit the weight vector, but I'm declaring this to be the right answer, and I want to make sure the algorithm actually recovers it later. Okay, so I'm going to generate some random data. There's a nice function, np.random.randn, which generates a random d-dimensional vector x. And y — what should I set y to? [Student suggestion, inaudible] Yeah. I'm doing regression, so I want true_w dot x, right?
I mean, if you think about it, if I took this data and found the true w, I'd get 0 loss here. Okay. But I'm going to make life a little more interesting: we're going to add some noise. Okay, let's print out what that looks like — oh, and I should add the point to my dataset. Okay, so this is my dataset. I can't really tell what's going on just by looking, but you can look at the code and assure yourself that this data has structure in it. Okay, so let's get rid of the print statement and train, and see what happens. Oh, one thing I forgot: if you notice, the objective functions I've written down haven't divided by the number of data points. I want the average loss, not the sum — it turns out that if you use the sum, things get really big and blow up. So let me normalize that and run it. Okay, so it's training, it's training. Actually, let me do more iterations — I did 100 iterations; let's do 1,000. Okay. The function value is going down — that's always a good thing to check. And you can see the weights are slowly getting to what appears to be 1, 2, 3, 4, 5, right? Okay. This is hardly a proof, but it's evidence that this learning algorithm is doing the right thing. Okay, so now let's see what happens if I add more points. I now have 100,000 points. Now, obviously it gets slower — it'll hopefully get there one day, but I'm just going to kill it. Okay, any questions about — oops, my terminal got screwed up. Okay. So what did I do here? I defined loss functions and took their derivatives.
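A compact reconstruction of this artificial-data experiment (a sketch in NumPy; the seed, noise level, step size, and iteration count are my choices, not the lecture's exact values):

```python
import numpy as np

np.random.seed(0)
true_w = np.array([1., 2., 3., 4., 5.])   # the "right answer" we plant
d, n = len(true_w), 10000

# generate artificial data: x ~ N(0, I), y = true_w . x + noise
X = np.random.randn(n, d)
y = X.dot(true_w) + 0.5 * np.random.randn(n)

def train_loss(w):
    # average (not summed) squared loss over the dataset
    return np.mean((X.dot(w) - y) ** 2)

def grad_train_loss(w):
    return 2 * X.T.dot(X.dot(w) - y) / n

# gradient descent: the recovered w should approach (1, 2, 3, 4, 5)
w = np.zeros(d)
for t in range(200):
    w -= 0.1 * grad_train_loss(w)
```

Checking that `w` comes back close to `true_w` is exactly the sanity check the lecture runs: hardly a proof, but good evidence the algorithm works.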
Gradient descent is what we implemented last time; the only thing I did differently this time is generate a dataset so I can check whether gradient descent is working. Yeah, question? [Student: Does the fact that the gradient is just the residual [inaudible] allow the algorithm to learn from over-predictions versus under-predictions?] The question is whether the fact that the gradient is driven by the residual allows the algorithm to learn from under- or over-predictions. Yeah — that's good intuition. If you're over-predicting — say the target is 1 and the prediction is larger — the residual is positive, which means that if you moved in the gradient direction, you'd over-predict more and more and incur more loss. So by subtracting the gradient, you're pushing the weights in the other direction, and the same holds when you're under-predicting. Yeah, so that's good intuition to have. Yeah? [Student: What is the effect of the noise when you generate the data?] What is the effect of the noise? It makes the problem a little harder, so that it takes more examples to learn. If you shut off the noise — we can try that; I've never done it here — presumably you'll learn faster, but maybe not; the noise isn't that large. Okay. So let's say you have 1,000 examples — that's quite a few, and this algorithm already runs pretty slowly. In modern machine learning you have millions or hundreds of millions of examples, so gradient descent is going to be pretty slow. So how can we speed things up, and what's the problem here? Well, if you look at what the algorithm is doing, it's iterating.
At each iteration it computes the gradient of the training loss, and the training loss is an average over all the points, which means you have to go through all the points, compute the gradient of the loss on each, and add everything up. That's what's expensive and takes time. So you might wonder, how can you avoid this? If you want to do gradient descent, you have to go through all your points. The key insight behind stochastic gradient descent is: well, maybe you don't have to. Here's some intuition. What is this gradient? It's actually the sum of the gradients from all the examples in your training set — we have 500,000 points adding up to it. So the gradient is a sum of different contributions, each maybe pointing in a slightly different direction, which all average out to this direction. So maybe you don't have to average all of them: you can average just a couple, or in the extreme case just take one of them and march in that direction. So here's the idea behind stochastic gradient descent. Instead of doing gradient descent, we change the algorithm to say: for each example in the training set, pick it up and update right away. Instead of sitting down, looking at all the training examples, and thinking really hard, I just pick up one training example and update immediately. So again, the key idea here is: it's not about quality, it's about quantity. Maybe not the world's best life lesson, but it seems to work here. And then there's also the question of what the step size should be.
In stochastic gradient descent the step size is generally even more important, because when you're updating on each individual example, you're getting noisy estimates of the actual gradient. People often ask me, "How should I set my step size?" And the answer is: there's no formula — well, there are formulas, but no definitive answer. Here's some general guidance. If the step size is small — really close to 0 — you're taking tiny steps. It'll take longer to get where you want to go, but you're proceeding cautiously: if you mess up and go in the wrong direction, you won't go too far. Conversely, if it's really, really large, it's like a race car: you drive really fast, but you might bounce around a lot. Pictorially: here's maybe a moderate step size, but if you take really big steps, you might go over here, then jump around; maybe you'll end up in the right place, but sometimes you can get flung out of orbit and diverge to infinity, which is a bad situation. There are many ways to set the step size. You can set it to a constant — usually you have to tune it — or you can set it to be decreasing, the intuition being that as you optimize and get closer to the optimum, you want to slow down. If you're coming off the freeway, you're driving really fast, but once you get to your house you probably don't want to be driving 60 miles an hour. Okay. Actually, I didn't implement stochastic gradient descent yet, so let me do that — let's get stochastic gradient descent up and going here. Okay. So the interface to stochastic gradient descent changes, right?
In gradient descent, the function just computes the sum over all the training examples. In stochastic gradient descent — I'll use "s" to denote stochastic — I'm going to take an index i and update on the i-th point only: I compute the loss on the i-th point, and the same for its derivative — look at the i-th point and compute the gradient just on it. And this should be called sdF. Okay. So now, instead of calling gradient descent, let's call stochastic gradient descent, and I'm going to pass in sF, sdF, d, and the number of points, because I need to know how many points there are now. I copy gradient descent, and it's basically the same function — I'm just going to stick another for loop in there. So stochastic gradient descent takes the stochastic function, the stochastic gradient, the dimensionality, and the number of points. Okay? Before, I was just going through some number of iterations; now I'm not going to compute the value over all the training examples — I'm going to loop over the points, evaluate the function at point i, and compute the gradient at point i, instead of over the entire dataset. Everything else is the same. One other thing I'll do is use a different step size schedule: 1 divided by the number of updates, so that the step size decreases over time. It starts at 1, then it's a half, then a third, then a fourth, and keeps going down. Sometimes you put in a square root, which is more typical in some cases, but I'm not going to worry about the details too much. Question? [Student: Shouldn't the point i be chosen randomly? Here we just [inaudible].] Yes. The question is: the word "stochastic" means there should be some randomness here.
And technically speaking, stochastic gradient descent is where you sample a random point and then update on it. I'm cheating a little bit because I'm iterating over all the points in order. In practice, if you have a lot of points and you randomize the order, it's similar, but there is a technical difference that I'm hiding. Okay. So this is stochastic gradient descent: go over all the points and just update. Okay? Let's see if this works. Okay... I don't think that worked. [LAUGHTER] Let's see what happened here. I did try it on 100,000 points — maybe that works. And, nope, doesn't work either. Anyone see the problem? [inaudible] I'm printing this out at the end of each iteration, so that should be fine. Really, this should work. Gradient descent was working, right? It's probably not the best idea to be debugging this live. Okay, let's make sure gradient descent works. Okay, that was working. So stochastic gradient descent: it's really fast and it converges, [LAUGHTER] but it doesn't converge to the right answer. [Student: I think [inaudible].] Yeah, but that should get incremented to 1. Hmm — it might be true. Okay, so I do have a version of this code that does work. [LAUGHTER] What am I doing there that's different? Okay, I'll have some water — maybe I need some water. [LAUGHTER] Okay, so this version works. Yeah? [inaudible] Yeah, that's probably it — that's a good call. Okay. All right, now it works. Thank you. [LAUGHTER] So yeah, this is a good lesson: when you're dividing, this needs to be 1.0 divided by numUpdates, not 1 divided by numUpdates. In Python 3 this is not a problem, but I'm on Python 2 for some reason, where integer division rounds down — otherwise the step size comes out as 0. [Student: So how is it faster?] Okay.
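The stochastic gradient descent loop built here, including the 1.0/numUpdates step size fix, can be sketched as follows (a sketch; the function and parameter names mirror the lecture's sF/sdF interface but the details are mine):

```python
import numpy as np

def stochastic_gradient_descent(sF, sdF, d, n, epochs=1):
    """sF(w, i) / sdF(w, i): loss and gradient on the i-th point only.
    sF isn't needed for the updates; it can be used to monitor progress."""
    w = np.zeros(d)
    num_updates = 0
    for epoch in range(epochs):
        for i in range(n):              # ideally a random order, as discussed
            num_updates += 1
            eta = 1.0 / num_updates     # note 1.0, not 1: in Python 2,
                                        # 1 / num_updates rounds down to 0
            w = w - eta * sdF(w, i)     # update on one example right away
    return w
```

As a quick check, fitting y = 3x on identical points (x = 1, y = 3) recovers w = 3 after just two updates.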
So why is it faster? [LAUGHTER] Okay, let's go back to 500,000 points. One full sweep over the data takes the same amount of time, but notice that it immediately converges to 1, 2, 3, 4, 5, right? So this is way, way faster than gradient descent. Remember, compare with gradient descent: you run it, and after one step it's not even close. Right. Yeah? [Student: What noise level would you have to have before gradient descent becomes better?] What noise level until gradient descent becomes better? It is true that with more noise, stochastic gradient descent can be unstable. There might be ways to mitigate that with step size choices, but you'd probably have to add a lot of noise for stochastic gradient descent to be really bad. In some sense, if you take a step back and think about what's going on in this problem: it's a 5-dimensional problem — there are only five numbers — and I'm feeding it half a million data points. There isn't that much to learn here, so there's a lot of redundancy in the dataset. And generally this is true: if you go out and get a large dataset, there's going to be a lot of redundancy. So going through all of the data before making an informed decision is pretty wasteful, when sometimes you can get a representative sample from one example — or, as is more common, use mini-batches, where you grab maybe a hundred examples and update on those. So there's a way to be somewhere in between stochastic gradient descent and gradient descent. Okay, let me move on. Summary so far: we have linear predictors, which are based on scores.
Linear predictors include both classifiers and regressors; we can do loss minimization; and, if we implement it correctly, we can do SGD. Okay. I've been switching between topics — I hope you're following along: I introduced binary classification, and then I did all the optimization for linear regression. So now let's go back to classification and see if we can do stochastic gradient descent there. Okay. For classification, remember, we decided the zero-one loss is the thing we want: we want to minimize the number of mistakes — who can argue with that? So remember what the zero-one loss looks like? It looks like this. Okay? So what happens if I try to run stochastic gradient descent on this? I mean, I can run the code, but — [OVERLAPPING] yeah, it won't work, right? And why won't it work? [inaudible] Yeah. Two popular answers: it's not differentiable — that's one problem — but I think the bigger, deeper problem is: what is the gradient? Zero. It's zero basically everywhere, except at this one point, which doesn't really matter. And as we learned, if you try to update with a gradient of 0, you won't move your weights, right? So gradient descent will not work on the zero-one loss. That's kind of unfortunate. So how should we fix this problem? Yeah? [inaudible] Yeah, let's make the gradient non-zero; let's smooth things out. So there's one loss I'm going to introduce, called the hinge loss, which does exactly that. Let me write the hinge loss down. The hinge loss is zero when the margin is greater than or equal to 1, and rises linearly as the margin decreases. So if you've gotten it correct by a margin of 1 — you're pretty safely on the correct side — then we won't charge you anything.
But as soon as you dip into this region, we're going to charge you a linear amount, and your loss grows linearly. There are some reasons why this is a good idea: it upper-bounds the zero-one loss, and it has a property known as convexity, which means that if you run gradient descent, you'll actually converge to the global optimum — I'm not going to get into that. So that's the hinge loss. What remains is to compute the gradient of this hinge loss, okay? So how do you compute this gradient? In some sense it's a trick question, because the gradient doesn't exist — the function isn't differentiable everywhere — but we're going to pretend that one point doesn't exist, okay? So what is this hinge loss? It's actually the max of two functions: there's the zero function here, and there's this 1 minus margin function. What am I plotting here? The margin on the horizontal axis and the loss on the vertical. This is the zero function, and this is 1 minus w · φ(x) y, and the hinge loss is just the max of these two functions: at every point, I'm taking the top function. That's how I'm able to trace out this curve. Okay? All right. So if I want to take the gradient of this function, you can try to do the math, but let's think through it. What should the gradient be? If we're over here, it's zero. And if we're over here, it should be whatever the gradient of this other function is, right? So in general, when you take the gradient of a max like this, you have to break it up into cases, and depending on where you are, you're in a different case. So the gradient of the loss is equal to 0 if I'm over here — and what's the condition for being over here?
If the margin is greater than 1, right? And then otherwise, I'm going to take the gradient of this with respect to w, which is gonna be minus phi of x y, you know, otherwise. Okay? Um, so again, we can try to interpret the, the gradient of the hinge loss. So remember your stochastic gradient descent, you have a weight vector, and you're gonna pick up an example and you say, oh, let's compute the gradient and move opposite to it. So if you're getting the example right, then the gradient is zero, so you don't move, which is the right thing to do. And otherwise, you're going to move in that direction, because you're subtracting minus phi of x y, which kind of imprints this example into your weight vector. So- and you can formally show that it actually increases your, uh, margin after you do this. Okay? Yeah? What's the significance of the margin being 1? What's the significance of the margin being 1? Um, this is a little bit arbitrary, you're just kind of setting a non-zero value. Um, and, and, you know, in support vector machines, you set it to 1, and then you have regularization on the weights and that gives you, uh, some interpretation. So I don't have time to go over that right now, but, uh, feel free to ask me later. There's another loss function. Uh, do you have a question? Yeah. Why do we choose the margin for the loss function, as opposed to something like the squared error? Yeah. So why do you choose the margin? So in classification, we're gonna look at the margin because that tells you how comfortable you are when you're predicting, uh, co- you know, correctly. In regression, you're gonna look at residuals and square losses. So it depends on what kind of- what problem you're trying to solve. Um, just really quickly, some of you might have heard of logistic regression. Logistic regression is this, uh, yellow loss function, right?
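The case analysis above can be written out as a short sketch. The feature vector, label, and step size below are made up for illustration; the subgradient convention (ignoring the kink at margin exactly 1) follows the lecture's "pretend that point doesn't exist" advice.

```python
import numpy as np

def hinge_loss(w, phi, y):
    # Zero once the margin (w . phi) * y reaches 1; linear below that.
    margin = np.dot(w, phi) * y
    return max(0.0, 1.0 - margin)

def hinge_gradient(w, phi, y):
    # Subgradient, pretending the kink at margin == 1 doesn't exist:
    # zero when the example is safely correct, -phi * y otherwise.
    margin = np.dot(w, phi) * y
    return np.zeros_like(w) if margin > 1 else -phi * y

def sgd_step(w, phi, y, eta=0.1):
    # Moving against the gradient "imprints" phi * y into the weights.
    return w - eta * hinge_gradient(w, phi, y)

# Hypothetical example: feature vector phi, label y = +1, weights at zero.
w = sgd_step(np.zeros(2), np.array([1.0, 2.0]), 1)  # w becomes [0.1, 0.2]
```

After the step, the margin on this example rises from 0 to 0.5, so the loss drops from 1.0 to 0.5, which is the "increases your margin" behavior mentioned above.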
So the point of this is saying that this loss minimization framework is, you know, really general, and a lot of things that you might have heard of, least squares, logistic regression, are kind of a special case of this. So if you kind of master how to do loss minimization, you kind of, uh, can do it all. Okay. So summary, um, basically, what's on the board here? If you're doing classification, you take the score, which comes from the, uh, w dot phi of x, and you take the sign of it, and then you get either plus 1 or minus 1. Regression, you just use the score. Now to train, you have to assess how well you're doing. In classification, there's a notion of a margin. In regression, it's the residual, and then you can define loss functions. And here we only talked about five loss functions, but there's many others, um, especially for kind of structured prediction or ranking problems, there's all sorts of different loss functions. But they're kind of based on these simple ideas of, you know, you have a hinge that upper bounds zero-one if you're doing classification and, [NOISE] um, some sort of square-like error for, you know, regression. And then, once you have your loss function, provided it's not zero-one, you can optimize it using, um, SGD, which turns out to be a lot faster than, you know, gradient descent. Okay. So next time, we're gonna talk about, uh, Phi of x, which we've kind of left as, you know, someone just hands it to you. And then we're also gonna talk about what is the really true objective of machine learning? Is it really to optimize the training loss? Okay, until next time.
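For reference, the losses mentioned in this lecture can be collected in one place. This is a sketch; the convention of counting a margin of exactly 0 as a mistake is assumed.

```python
import math

# Classification losses take the margin m = (w . phi(x)) * y:
def zero_one_loss(m):  return 1.0 if m <= 0 else 0.0
def hinge_loss(m):     return max(0.0, 1.0 - m)
def logistic_loss(m):  return math.log(1.0 + math.exp(-m))

# Regression losses take the residual r = w . phi(x) - y:
def squared_loss(r):   return r ** 2
def absolute_loss(r):  return abs(r)
```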
Stanford CS221: Artificial Intelligence Principles and Techniques (Autumn 2019). Factor Graphs 2: Conditional Independence.

Today, we're going to be covering CSPs, but possibly just as importantly, you're all halfway through the quarter. So congratulations. I think six more weeks to go, so keep it up. Uh, also happy Halloween, um, hope you all have fun. Uh, but today, we're going to be talking about CSPs, so this is continuing, um, these topics that we've been covering since Monday, which is about, um, this kind of setting, right? So you have variables, in this case we have three, X_1, X_2, and X_3. And each variable represents some kind of discrete object that can take on one of several values. And so the set of values that a single variable can take on is called its domain. [NOISE] And, um, just to continue reviewing, these variables have factors, which are functions that take as arguments one or more variables. And the factors basically say, okay, how much do I like this assignment of my variables? So for example, factor 2 looks at X_1 and X_2 and it says, okay, X_1 has been assigned some value, X_2 has been assigned some value. How much do I like that assignment? Do I really like it? If so, you give it a high value. Do I not like it? If so, you zero it out. Um, and so importantly, um, we call the arguments, so all the variables that we give to a factor, that's the scope of that factor. Um, so we had this example that we talked about on Monday, and I'm just going to revisit it. So in this case we have, um, variables correspond to people and there's three people. And, um, we know that first of all, the first two people, person 1 and person 2, they have to agree with each other. And then person 2 and person 3, they sometimes agree with each other, but not all the time. Um, and so first, what we say, is with our first factor, we say that person 1 is definitely blue.
So we say this here, um, by saying these are the values that the variable can take on, red or blue. And this is how much we like that value. And what this fac- first factor is saying, is that it doesn't like red at all, it's a 0, and it likes blue, it's a 1. And then we have the second factor, this encodes the fact that they must agree. And what that's saying is that every time, both the first variable, which is X_1 and the second variable which is X_2, every time they agree, where either they're both red, or both blue, we give it a 1, otherwise we give it a 0. And then we have this factor which says that they tend to agree, and we're encoding that by saying, well, if they're- if both of its arguments are the same, if they're both red or they're both blue, then we give it a slightly higher number than if they differ, but we're not going to zero it out. And then last we say, okay, well, the las- last person kind of prefers to go red, but, uh, you know, it's not a hard constraint, nothing like that. Um, and so then again, so like we talked about last time, um, assignments, which is like if you have all your variables and you have all the values that you've assigned to them, they have what's called a weight. And a weight is basically just, um, plugging in all the values for your variables into your factors and then multiplying them all up to get that product. And that's the weight. And our goal across all of these problems, in this whole unit, is to find assignments for all of our variables that will give us the maximum weight, when we multiply up all of our factors, Yeah? Is there any particular reason to why you define this product as opposed to a sum? So the question was, why is the weight a product and not a sum? And the answer for that is because, um, remember these are constraint satisfaction problems. And so if you look back at this example, we wanted to encode the constraint that they must agree. And we did that by putting in a 0. 
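The weight computation just described can be sketched in Python. The hard constraints (person 1 is blue; persons 1 and 2 must agree) come from the lecture; the exact numbers for the soft factors are assumed for illustration, since the lecture only says "slightly higher" and "prefers red".

```python
from itertools import product

# Hypothetical factor values for the three-person example.
f1 = lambda x1: {'R': 0, 'B': 1}[x1]          # person 1 must be blue
f2 = lambda x1, x2: 1 if x1 == x2 else 0      # persons 1 and 2 must agree
f3 = lambda x2, x3: 3 if x2 == x3 else 2      # persons 2 and 3 tend to agree
f4 = lambda x3: {'R': 2, 'B': 1}[x3]          # person 3 prefers red

def weight(x1, x2, x3):
    # The weight of an assignment is the product of all the factors;
    # any factor returning 0 vetoes the whole assignment.
    return f1(x1) * f2(x1, x2) * f3(x2, x3) * f4(x3)

best = max(product('RB', repeat=3), key=lambda a: weight(*a))
# best == ('B', 'B', 'R') with weight 4; every assignment with x1 == 'R'
# has weight 0 because f1 vetoes it.
```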
And since we're multiplying it up, if you have a single factor that gives a 0, it's like a veto. Um, and so that veto power is actually really critical, and we leveraged it, um, on Monday, which I'll talk about soon. Um, so on Monday, we talked about an- one algorithm for solving these kinds of problems. We called it backtracking search. And it's, it's kind of an exhaustive, you could think of it as like a depth-first search of all the possibilities. So we have this example, um, from last week where we were coloring provinces in Australia, right? Um, so we have- I actually don't know the provinces of Australia, but we have WA and we have V and we have T, I think, T, is Tanzania, um, Tasmania. Okay. Um, I'll work on my geography. In fact, uh, we color them all red, right? And so what we decide to do is we decide that, hey, we'll color Q red. Let's do it. And then we say, hey, if we color Q red, let's go with, um, green here for NT, and then we pick blue for SA, and then, um, green for NSW, and we've completed that tree. We've gone as far as we can, we've found a coloring that works. We say, oh, wait, what if we backtrack up to this point and try to sub in blue for NT, instead of green, which is what we had before? And then we can do the same thing. We go down that branch of the tree, or we can try different colors for Q. So for example, what if we try Q over here? We take that tree down. Oh, but that gives us something that is, um, is not satisfiable. So for example, here in SA, no matter what color we give it, according to our factors, which say you can't give two neighbors the same color, it will be not allowed, um, and you fill up the tree this way. And so this, this is the algorithm we covered, and, um, it's good because it always gives you the optimal solution, but it's bad because it's super slow. It's exponential time. So there's N nodes and each node has a domain. And so you're, you're- it's like for each value of, um, here, I can draw it out.
So like for example, if we had two fact- if we had two variables, X_1 and X_2, um, and they both took on three values. [NOISE] So these are va- our, um, variables and these are the values they can take on. And these are how much our factors like them. Um, then you would have to say for each value in X_1, for each value in X_2. So it's this, um, exponential blow up, which is just very slow. And so we learned some kind of, uh, like heuristic ways to speed things up and prune off that tree. So we did forward checking, um, which is where like once you decide your value for one variable, you go ahead and propagate that decision as far as you can. Um, that shrunk it a little bit. We looked at dynamic ordering, which is like, okay, which variable am I going to choose to work on next? And then once I've chosen that variable, I'm going to choose my value a little more intelligently. I'm going to pick the thing that has the least wiggle room, because maybe that'll help us prune as much as we can, and that helped us a little bit. Um, but at the end of the day, these techniques helped us prune the tree, but they can only work with hard constraints. Only if a factor gives us a value of zero does it work, because that's when it has veto power. And we use the vetoes to say, this branch of the tree will never be useful, so we never go down it. So if our factors are all going to be non-zero, we can't use any of these things. Yeah? Could you say a little more on the example of how you got the numbers? So like for, for one variable there's only one, and then I guess it's 2N minus 1. Yeah. So for this, I guess this example, what- it would look like as a graph. I guess it wasn't the best example. But, and actually later in lecture today, we're gonna talk about exactly how you could go about this more smartly. But let's say you just had two variables, and each had unary factors.
If you're running plain backtracking search, we would still- we would try all different combinations of both variables, um, which is a very dumb thing to do. [inaudible] Yes, so that's the backtracking search. Interesting. Yeah. So we'll discover smarter ways to get around that. Um, yeah, so that's backtracking, slow, but it gives you the optimal solution every time, so maybe a mixed bag. Um, okay, so I'm going to lead into this as a running example that we're gonna be talking about the whole lecture, object tracking. So with object tracking, you have sensors that are telling you like, oh, my object is here, no, it's down here. Wait, it's over here. And what you wanna do is you want to take that noisy observation and, and run it through your CSP to infer a more realistic estimate of where the object actually is. So I will also draw this on the board. Um, [NOISE] so, so what this looks like is if we have, um, this is time, so we'll call it T for time. And this is position, uh, so we'll call it p for position. And what this is going to say is we have sensors that are giving us estimates, like noisy estimates, for where this thing is at every point in time. So for example, maybe at time step 1, um, we get an observation down here. At time step 2, it's up here, and at time step 3, it's up here. Um, but in reality, like we want maybe something more like that. Um, and so that's going to be our goal with this running example. Um. [NOISE] So this is how we encode it into a factor graph. Um, in, in this case, we have variables, where the variable is our guess for the real location of that object. We have two kinds of factors, we have unary observation factors, um, that say how close is our guess to the observation, to what our sensor said. And then we have transition factors, which basically tell us, um, you can't change your guess by too much from timestep to timestep.
So on this graph, what it would look like is if these are observations, so this is like o_1, o_2, and o_3, um, then maybe our first guess would be here so we have X_1 down here, we have X_2 down here, and then maybe like X_3 up here. Um, and so our observation factor is going to look at this distance. And that would- what this is going to say is, okay, how far is our estimate from the observation? And they want that to be close. And then our transition factors are gonna look, are going to look at this distance between one object- one guess and the next guess, and say, well, our guesses shouldn't be moving too much. Uh, yeah? What's the difference between observation and estimate? Um, so the observation is what this sensor gives us, and the estimate is, is us saying thank you for the- thank you sensor. Um, now, I think the person is actually right here, because the sensor is noisy, we can't trust it, yeah. Um, okay, so that's how we're going to set up this problem. Um, and I think so there's and, and- there's this really cool kind like Java applet that you can all play around with on your own time, um, and I will briefly walk you through it. Um, so what's going on here is, um, basically- so there's a lot of documentation that you can read. Um, but basically what's happening here is we're just creating these variables, we have three variables, and we're allowing them to go in three positions. So in position 0, 1 or 2. Um, and then this is a little function that's basically encoding the fact that if things are nearby, then we want it- then we like that. So we have two variables, A and B, and if they're in the same position, then we return 2, we really like that. Um, if they are only one away from each other, we return 1, so it's okay, we'll take it. And if they're further away than 1 from each other, then it's- we zero it out. That's a hard constraint, we don't like that. 
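A Python sketch of the nearby function just described (this is not the demo's actual code, just a translation of the description):

```python
def nearby(a, b):
    # How much we like positions a and b being close: 2 if they're the
    # same, 1 if they're one apart, and 0 otherwise (a hard constraint).
    if a == b:
        return 2
    elif abs(a - b) == 1:
        return 1
    else:
        return 0

# Both kinds of factors can reuse nearby: an observation factor compares a
# guess x_i to the sensor reading o_i, and a transition factor compares
# consecutive guesses x_i and x_{i+1}.
```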
And then we have this observed function, which is kind of a higher-level thing, and it kind of, um- I guess you could say it kind of like preloads our nearby function, um, with, with a variable. Um, we're going to create our factors, and then this is what it looks like. So we have, um, we have three variables, we have X_1, X_2, and X_3, and then we have our observation factors, which are unary. Um, remember, those say, okay, you have to be close to the sensor, and then our transition factors are binary. And those say you can't move too much between time steps. Um, and you can run it, and there's actually a lot of output here, and I'm gonna ignore, um, most of this for now. I think the thing that is important here is that we ran backtracking search, and we found the optimal assignment. So in this case it's 1 for X_1, 2 for X_2, and 2 for X_3, which gives a final weight of 8. So on our little drawing, um, basically what that's saying is, is it saying something like this is optimal, where we, we put- we say, thank you sensor for these estimates, but we think the person is actually here at timestep 1, here at 2, and here at 3. That's the solution that backtracking gave us. [NOISE] Uh, yeah? [inaudible] so are those the weights or the timesteps? Because you were saying, it's like one then two. So X_1 is 1 at timestep 1, and X_2 is 2 at timestep 2, but X_3 is also 2. Yes, so the question was, what do you mean by here at 3? And what I mean is basically, so X_i, X_i is an- is our, um, estimate for the position of this object. And so at timestep 3, our estimate for the position is at position 2. And. So is your timestep [inaudible]? Uh, yeah, so X_2 and X_3. So that's why the value is 2 for both of them, and the timesteps are 1, 2, and 3. Yeah. Yeah? So I- I understand like what you're talking about with the scope of the factors, but how exactly is their constraint being [inaudible] by the problem?
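A minimal backtracking search over this tracking problem, as a sketch. The sensor observations are assumed to be [0, 2, 2], which is consistent with the drawing (one low reading, then two high ones) and reproduces the weight of 8 reported by the demo:

```python
def nearby(a, b):
    return 2 if a == b else (1 if abs(a - b) == 1 else 0)

observations = [0, 2, 2]   # assumed sensor readings, consistent with the demo
domain = [0, 1, 2]

def backtrack(assignment, w):
    # Depth-first search over partial assignments; a zero weight means
    # some factor vetoed this branch, so we prune it immediately.
    if w == 0:
        return (0, None)
    i = len(assignment)
    if i == len(observations):
        return (w, assignment)
    best = (0, None)
    for v in domain:
        new_w = w * nearby(v, observations[i])        # observation factor
        if i > 0:
            new_w *= nearby(assignment[-1], v)        # transition factor
        best = max(best, backtrack(assignment + [v], new_w),
                   key=lambda t: t[0])
    return best

best_weight, best_assignment = backtrack([], 1)
# best_weight == 8, best_assignment == [1, 2, 2], matching the demo.
```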
Yeah, so the constraint here is, um- so the fact that if we look at our nearby function, and if A and B are farther than 1 away from each other, then it returns 0. And that's a constraint because when we're calculating the weight of a factor graph, we multiplied together all the factors. And so if there's a 0 in there, then the whole weight goes to 0. Um. The constraint is only for the transition factor but not for the [inaudible]. Yep, yeah, you can say that, yeah. Great, um, okay so that's our setup, and we're gonna be returning to that a few times. Um, awesome. Uh, okay, so moving on backtracking search, very slow, let's try to speed it up. Beam search, faster. Yay, uh, beam search, so backtracking search, if you- we have that tree analogy, right? And backtracking search exhaustively searches the entire tree, gets us the best solution, but it's very slow. Um, and so one way to exha- avoid this kind of exhaustive search is greedy search, which is where you greedily- it's like, um, for each variable, you greedily select the value that gives it the highest weight. Uh, so it's right here, you look at the values it can take on, and you just choose whatever variable- whatever value is best. And you never look back, you just keep on running through it. Um, and you go through the whole tree this way until you end up at a complete solution. Uh, so the benefits is its very fast, right? It's linear. But the con is that it's a very narrow window, like you don't see a lot of the state space, you don't explore a lot, and so you can often miss the global max. Um, so for people who, who prefer this kind of notation, what we are doing is we basically say, we loop through all the variables, and, um, we try out every value, and we just take whatever value has the highest weight. Um, yeah, so beam search, um, is kind of like an in-between backtracking and greedy. Beam search is very cool, so one way to think. Yeah? Excuse me. Come up again, [inaudible]. 
Yeah, so the question was explain that greedy again, and so with greedy, what we're doing is, is we say, so we have- we have a partial assignment, right? And we pick, we want, we want to extend our partial assignment. So we pick a new variable, and we try out every single value that that variable can take on, and then we take the value that, that, that gives it the highest weight. So all the factors touching that variable are the most happy with that value. And we pick that, and then we never look back, and then we pick a new variable. [inaudible] the new value. Yeah, so that's a good observation, we can end up at inconsistent solutions, and that's totally true. Um, so you can actually, you can- and during our greedy search, you can actually kinda like find your way in kind of a hole where it's like, oh damn, you know what? [LAUGHTER] I can't go any further. Um, and it's a big problem with greedy search. Yeah, so with backtracking, um, beam search is kinda like a heuristic way to maybe get around that. So with beam search, so remember again, in, in , um, so for greedy search, we had one partial assignment, right? And we were choosing one variable, and choosing the extension of that one variable. With beam search what you do is instead of one partial assignment, you maintain a list of k partial assignments. In this case, k is 4. And then what you do is on every step is for each of your partial assignments, you pick a new variable, you try all the ones. And then what you do is you- so you have your k partial assignments, and you try to extend every partial assignment, you test out all the values for every partial assignment, and then you sort those partial assignments based on their weight, and then you just take the top k. So it's like you have your partial assignments, you extend them into all the possible successors, you sort them based on weight, and then you take the top k. 
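The extend, sort, and prune loop just described can be sketched as follows, reusing the object-tracking factors from earlier (the observations [0, 2, 2] are an assumption carried over from that example):

```python
def nearby(a, b):
    return 2 if a == b else (1 if abs(a - b) == 1 else 0)

observations = [0, 2, 2]          # assumed readings from the tracking demo

def partial_weight(xs):
    # Product of every factor whose variables are all assigned so far.
    w = 1
    for i, x in enumerate(xs):
        w *= nearby(x, observations[i])          # observation factor
        if i > 0:
            w *= nearby(xs[i - 1], x)            # transition factor
    return w

def beam_search(domains, k):
    beam = [()]                                  # the empty partial assignment
    for domain in domains:
        # Extend every partial assignment with every value, then keep
        # only the k highest-weight candidates.
        candidates = [p + (v,) for p in beam for v in domain]
        candidates.sort(key=partial_weight, reverse=True)
        beam = candidates[:k]
    return beam[0]
```

On this problem, k = 1 recovers greedy search and gets stuck at a weight-4 assignment, while k = 3 already finds the weight-8 optimum; a large enough k degenerates into a breadth-first search of the whole tree.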
So in this case, if we have, um, four partial assignments, then we try extending them all in the two directions they can go, and then we sorted them and then we took the four, you can see there's four things that are filled in that have the highest weight. And we continue this procedure. So we say, um, so what we would do in this case is we would say, okay, for each of these four solid things, we're going to try out, we're going to extend each of those partial assignments, and then we sort all the extensions and select the top k. Um. So yeah. At the start, you don't have K partial assignments yet, right? You just sort of [inaudible]. Yeah, exactly. So the question was, you build up to K, right? And the que- and the answer is yes. Yeah. So for example here, 1 is less than K so we extend to 2. 2 is still less than K, so we can extend completely. Um, so in- in notation, um, uh, yeah, so we say for each- for each variable we- we try to extend each of our partial assignments and then we prune out everything but the K largest, um, like best K weights. Um, so beam search is also not guaranteed to find the best weight. Um, um, but what's cool about it is that it- it gives you kinda like a knob that you can control between being greedy and exploring a lot. So if K is very wide, then you explore more and more of the tree, um, and if K is actually infinity- I think this is on a slide soon. Um, yeah, so if K is infinity, then that's actually like doing a breadth-first search of the entire tree. Yeah. So on that graph, or on the picture where you have like the solid and the shaded out ones- Yeah. For those shaded out grey ones, we would actually never explore those.
Because it was never selected, um- But you're still like initially explore it and then find that it's not- Yeah, so up here, so at this point, we do consider it, because we extend down, but then we decided not to select it. Yeah. Awesome. Um, runtime. So for- okay, so for beam search you're selecting a single node, and then for, um, for each partial assignment, you're trying out every value in the domain. So if b is the size of your domains, uh, then you're trying out b things for every partial assignment, and you have K partial assignments, right? So tho- those are the number of extensions you have as Kb, and then to- you sort them, and to sort it take- if you have a list of length n, sorting is n log n, and so you sort your list of Kb, so that becomes Kb log Kb, and then there's n nodes that you need to- the height of this tree is n. So you do this n times. Uh, so like I said, beam search of the K gives you this really cool knob, between do you wanna explore everything, or do you want to focus in on um, on, you know, being fast and greedy? Um, okay. So, uh, everything until now what we've covered is- is extending partial assignments. So we have- we're giving like a blank slate, picture of Australia with no colors, and we say, "Color me Australia." Like build up this house from the foundations, and now what we're gonna talk about is, okay, given like a map of Australia that's already filled with covers- colors, how do we make changes to it in order to improve it? And that's local search. Uh, so the first algorithm is called Iterated Conditional Modes, ICM, um, and what ICM is doing is it says, okay, we pick one variable, and then we ask, how- what is a new setting that we could choose for this variable that would improve the overall weight? Um, so in this case, we have one variable, it's x_2, and we try out all the different values it can take on, which is 0, 1 or 2. 
Um, and then for each of those values we go through and recompute the weight, and then we pick whatever value is best. So we start with 0, 0, 1, and here it looks like 1 is a better choice. So we go with it, and- and from this, from now on we would say x_2 is equal to 1. Um, so something cool about ICM is that when you're evaluating a new value for a variable, you only need to consider factors that touch that variable. That's all you really need to recompute, because everything else is constant with respect to it, and so that gives you big, big, big time savings in practice. Uh, one last thing is the name Iterated Conditional Modes. So iterated comes from the fact that you could solve the whole CSP this way, if you just iterate over which variable you're selecting. Um, conditional means that once you select a variable, you're clamping down the values of everything else, and then mode is saying, once you select your variable, you try out every single value and take the best one for that variable. Excuse me. Yeah. If you just like kept running this for every variable, would you eventually like arrive at the optimal solution, or could you like kinda like end up in like- Yeah. So the question is if you kept- so if you- if you have your three variables and you keep on going through them and- and choosing one, clamping the others, and then choosing the best value for it, would you arrive at an optimal solution? And, uh, the answer is no. So, and we'll see- and we'll see that in practice, yeah. Um, yeah. So again, this is just to give you an illustration that we iterate through these variables [NOISE], and for each variable, um, we pick a value that improves the weight. Um, [NOISE] so we have this in the demo. So in this case what's going on here, is we're saying, okay, right now we've selected, we're looking at x_1. Um, we start with a random initialization.
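The update just described can be sketched on the tracking factors from earlier (the observations [0, 2, 2] are again an assumption, and the initializations below are made up for illustration):

```python
def nearby(a, b):
    return 2 if a == b else (1 if abs(a - b) == 1 else 0)

observations = [0, 2, 2]          # assumed readings from the tracking demo

def weight(xs):
    # Full weight of a complete assignment: product of all factors.
    w = 1
    for i, x in enumerate(xs):
        w *= nearby(x, observations[i])
        if i > 0:
            w *= nearby(xs[i - 1], x)
    return w

def icm(assignment, domain, sweeps=5):
    # Iterated Conditional Modes: clamp every variable but one, set that
    # one to its best value, and sweep over the variables repeatedly.
    for _ in range(sweeps):
        for i in range(len(assignment)):
            assignment[i] = max(
                domain,
                key=lambda v: weight(assignment[:i] + [v] + assignment[i + 1:]))
    return assignment
```

Starting from [2, 2, 2] this reaches the weight-8 optimum, but starting from [0, 1, 2] it converges to a weight-4 local optimum, which illustrates the "answer is no" caveat above.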
We start- we're looking at x_1 and- and these are the different values that x_1 could take on, and then we go through, um, and calculate the weight, and we say, okay, 1 is the best weight for x_1 right here. So we choose 1 to be x_1, and we step again, um, we're looking at x_2 now, and oh, it looks like actually a value of 1 is better for x_2, and so from now on, we choose the value of 1 for x_2. And we iterate again, now we're looking at x_3, and it looks like we choose the value of 1, and you just keep on iterating through this until you hit some kind of local optimum. And why I'm saying this is important that it's a local optimum because right now, um, it's converged, so I can keep on pressing step and it's not gonna change, but the weight is 4. And if you remember during the other thing when we were in back-tracking we actually found an assignment with a weight of 8. So it does- it can fall into these local optima. Um, one way around this is a second algorithm called Gibbs sampling. Um, with Gibbs sampling what we do is we injected some randomness into the process to try to like bump us out, um, bump us out of those local optima, into something that can maybe get us into a better area. Um, so basically Gibbs sampling is super similar to ICM. The only difference is that instead of, so you- you try all the values, and instead of selecting the value that gives you the biggest weight, you sample the value, um, according with probability that's proportional to its weight. So for like this example, we say, se- uh, setting of 0 would give us 1, value of 1 would give us weight of 2. Value of 2 would give us a weight of 2, and then to get the probabilities we take the weight and we sum by- we divide by the total, which would in this case will be 5. We sum all up and divide by that. That gives us the probability of 0.2. 
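The sampling step just described, as a sketch; with weights [1, 2, 2] it reproduces the 0.2 / 0.4 / 0.4 distribution from the example:

```python
import random

def gibbs_step(assignment, i, domain, weight_fn):
    # With all other variables clamped, score every value for x_i, turn
    # the weights into probabilities, and sample the new value.
    weights = [weight_fn(assignment[:i] + [v] + assignment[i + 1:])
               for v in domain]
    total = sum(weights)                   # assumed > 0 for some value
    probs = [w / total for w in weights]   # e.g. [1, 2, 2] -> [0.2, 0.4, 0.4]
    assignment[i] = random.choices(domain, weights=probs)[0]
    return assignment
```

Unlike the ICM update, this step sometimes picks a lower-weight value, which is what lets it occasionally escape local optima.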
So here, we say it's 2 divided by 5, which is 1 plus 2 plus 2, and then we use that probability distribution to sample a new value for x_2, instead of just choosing it and saying, oh, you're the best. Um, so this is the demo. So in this case we're looking at x_1, and we're trying out different values for it, and we have weights, and then that gives us a probability distribution, and we sample from this probability distribution. So in this case, it looks like we sampled and we chose 0 for x_1, and then we keep a record of how many times we've, um, ended up with cer- a certain assignment. So if we step again, now we're looking at x_2. Um, it looks like, um, a value of 1 is the only thing that works. So we choose that value, and we add it to the counts. Um, and you can just keep on running this process, um, and over time [NOISE] you build up this kinda like probability distribution. Um, so this is actually an unlikely sample. Um, so I had a 20% chance of choosing 1, and 80% chance of choosing 2, but it still chose 1, and that's a way that it can kinda like break out of these local optima. Um, but in any case, if you look at this table, then what you find is that over time, if you run this thousands, millions of times, then, um, in practice, uh, settings with very high weights will occur very often. Um, and actually when we get into probabilistic graphical models, which Chris will talk about soon, um, you could actually say that, um, the global optimum will be the most frequent in Gibbs sampling in the limit, which is, I think, pretty cool. Um, yeah. Okay, so just to- just to show an example of- of what can go wrong with this algorithm, um, it's still flawed. So if we have x_1, and we have x_2, and um, let's say, um, let's say it's two people and they're trying to decide where to go to dinner. So we could have, um, let's say they're deciding between vegetarian and going to a steakhouse.
So we have V for vegetarian and S for steakhouse, and they really both want to go to vegetarian, um, and they want to eat together, um, and they'll eat steak but they're not super crazy about it. So if you're in this state, then even if you're doing Gibbs sampling, it's really hard to bounce over here because you have these two kinda like transitionary settings in between, and in order to make it to both vegetarian, um, they're gonna have- one is gonna have to make the decision to go to a different restaurant, and so since this is so low priority, it's- it's gonna be very difficult for these two people to go over to vegetarian. They're both kinda like, oh, I wanna do what you wanna do. You wanna do steakhouse, right? I wanna eat together, you wanna eat together, let's do steakhouse then, and they- they both don't really know that they will be much happier overall if they both get vegetarian. Um, and then this- this would really be even worse if- if you had, so for example, if you had like zeros here, then there's actually no way for them to get there. Because there's no probability there. Yeah. So this is like it's basically very initialization, like centric, but whatever you pick is your initialization is gonna be very important. Because if you initialize to the state where they're both vegetarian, then you'd never want to leave and you'd be happy to never leave it. But if you initialize to both steak then you're in trouble. If you initialize [OVERLAPPING]- So in general, in optimization, um, what- in any kind of optimization area where- where you can have this problem of falling into some kind of local optimum that's worse than the global optimum, it's very dependent on your initialization. Yeah, because if you initialize over here then you'd fall somewhere lower, just by chance. Yeah. But what if you like initialize that like SV or VS, where you initialize to a state where it was zero, would you have an equal chance of [NOISE] one? 
So the question was, what if you initialized into a state that had zero probability? So what you would do is you would select a variable, say X_1, and then you would try out different values for it. You would say, "Okay, I'm going to either choose S or V, holding X_2 constant." In that case, transitioning to V would have a probability of 1, so you would do that almost deterministically. [inaudible] or rewards? Um, so for Gibbs sampling, these are weights, and you turn them into probabilities. Yeah. So just to clarify, Gibbs sampling is not guaranteed to find the best assignment? It is not guaranteed, yeah. But you mentioned something like, in the limit it is? Um, yes. But that's kind of a theoretical point, it's not really practical. Yeah, so "in the limit" doesn't mean a guarantee; I guess, um, yeah, I guess you could interpret it that way. So I guess this is directly about your question. If you were to compare these methods, which ones are guaranteed to give you the maximum weight assignment? And the only one is backtracking search, because greedy is too narrow, beam search is maybe too narrow, ICM is too myopic, and Gibbs sampling is likely to find it, but it's not guaranteed. Yeah. [inaudible]. Sorry, can you say it again? For this example, even in the limit, it won't converge to the optimum [inaudible]? Yeah. So the question was, in this example, even in the limit, you wouldn't converge, and that's true. If you initialized right here, there's no way for you to get over there. Yeah. So yeah, I think it holds just under certain conditions. My intent wasn't to confuse you guys; let's say the limit result is a theoretical point for some subset of CSPs, yeah. Like you said, if we initialize to the steak-steak state, there's no way we can get to vegetarian-vegetarian, right?
Because the middle transitions have weight 0. So is it safe to say that, when you are modeling a problem with Gibbs sampling, whenever you encounter a probability of zero you should give it some epsilon probability, so that there's some chance to get to the optimum? Yeah, so the question was, is it worthwhile to add some tiny amount, like plus 0.0001, to these just to give them some probability? And my intuition says that sounds like a pretty good idea; it sounds like adding tiny little epsilons to avoid division by zero. But I think it would depend on the problem that you're solving; you can imagine some cases where you would really want those zeros. Okay. So just to summarize so far, we've learned two families of methods on these graphs. First, extending partial assignments: backtracking search, which we learned last time, which is a full search of the tree: it gives you the exact solution every time, but it's super slow. Beam search is approximate, and it gives you this cool little knob to trade off speed and, I guess you could say, success, or exactness. And second, we learned ways to take an assignment and improve it, modify it. One was ICM, which is approximate, and the other is Gibbs sampling, which is also approximate, but it uses some probability and some randomization. And now we're going to look at two ways to solve these kinds of problems by actually changing the structure of the graph itself. So our motivation comes from Australia: Tasmania. [LAUGHTER] That is the motivation. So Tasmania, if you remember, was completely disconnected, right? I think we colored it red in the previous example, but it doesn't really matter what color we gave it; we could give it anything we wanted. And so what we want is to leverage this property.
And more than leverage it, we want to inject this property into graphs that don't exhibit it; we want to build this property into graphs that don't have it. So first: what is this property? It's called independence, and it can speed things up. So just like I said before, is it still there? My old- oh, it's partially erased. [LAUGHTER] So let me write this down again. This is the same example as before: we have two variables, each with a unary factor on it. With backtracking search, I told you what happens, right? You do something really dumb, which is basically two for loops: you try out every combination of values, which is exponential. Whereas what you could really do is just say, "Okay, for each variable, I'm going to choose the value that makes my factor most happy." And that's linear time. So that gives us a more efficient algorithm. And we call that property independence. So in this case, X_1 and X_2 are independent, and the reason they're independent is that there is no factor connecting them, there are no edges between them, and there are no paths between them. And that's the same thing with Tasmania and the rest of Australia. I don't know how independent it is culturally or politically, but it certainly is in terms of map coloring and graph theory. In symbols, we use this cool-looking pipe thing, and that denotes independence. So yeah, like we said, Tasmania is independent. What about cases like this? It's not quite independent, but it's almost independent: if only X_1 didn't exist, then the rest of them would be independent. And this is where we introduce the idea of conditioning. Conditioning is a way to rip nodes out of a graph.
And we do that by saying, okay, in this example, let me draw it up. So it's X_1 and X_2, connected by a binary factor whose table has rows red-red, red-blue, blue-red, blue-blue, with weights 1, 7, 3, 2. Okay. So we're saying X_1 and X_2 are connected to each other with the factor, right? [NOISE] But what if I say X_2 is definitely blue? Yeah. So we're talking about constraint satisfaction problems, but can all constraint satisfaction problems be written as graph problems? Yes. So the question was, can constraint satisfaction problems be written as graphs, all of them? Yes, because variables are nodes and factors are edges, yeah. So it's actually a really elegant way to think about it, because then you can bring in all this graph theory stuff. Can you also see the board? No. Okay, I'll try to write big. [LAUGHTER] Okay. So if I say X_2 is definitely blue, like, trust me, it's blue, then what does that let me do? It lets me cross out all the rows where X_2 is not blue. So here, X_2 is red, so I can be like, "You are never gonna happen. I know you're not the case, you are not true." And for you, same thing, you are not gonna happen. And now once I fix X_2, all of X_2's remaining values are the same, so that column doesn't add information to this table, and I can just drop it. And what that gives me is a reduced table, which is just a unary factor on X_1: it can take on red or blue, with a value of 7 if I choose red and 2 if I choose blue. And graphically, what that looks like is: I'm deleting X_2, and I just have one variable now, X_1, and it has a unary factor. Now, the price to pay for that was I had to assume that X_2 was blue. I had to condition on X_2 being blue.
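That crossing-out-of-rows can be sketched directly on a table representation (a sketch; the dict-of-tuples encoding is my own choice, not anything from the lecture, but the weights 1, 7, 3, 2 are the ones on the board):

```python
def condition(factor, var, value):
    """Fix `var` to `value`: drop rows that disagree, then drop the column,
    leaving a smaller factor over the remaining variables."""
    reduced = {}
    for row, weight in factor.items():
        row = dict(row)
        if row.pop(var) == value:  # keep only rows where var == value
            reduced[tuple(sorted(row.items()))] = weight
    return reduced

# The binary factor from the board: (X1, X2) rows with weights 1, 7, 3, 2.
f = {
    (("X1", "red"), ("X2", "red")): 1,
    (("X1", "red"), ("X2", "blue")): 7,
    (("X1", "blue"), ("X2", "red")): 3,
    (("X1", "blue"), ("X2", "blue")): 2,
}
g = condition(f, "X2", "blue")
# g is the reduced unary factor on X1: red -> 7, blue -> 2
```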
So notationally, or maybe programmatically, I think a good way to think about it is that you're taking the factor that touches X_2 and preloading it with a value for one of its arguments, and the rest of it is untouched. So you choose a value for a variable, you remove the variable from the graph, and then you stick in that value: you preload the associated factors with it. So for example, if we were to condition on these, saying SA is green and Q is red, trust me, then what that does is rip those out of the graph, and everything touching those conditioned variables turns into some kind of stump, with the value for those variables preloaded in. The only edge that gets removed here is the one that connects SA and Q. And that's because you have a function that takes two arguments, but you've preloaded both of those arguments already, so it's done, there's nothing to do there; it might as well not exist. Just as an example of the new factors that go in: take NT. This factor, which used to say NT and Q can't have the same color, turns into this little thing, which just says NT can't be red. And you know that because you're conditioning on the fact that Q is red. Yeah. [inaudible] independence, is it like two variables have no edge between them, or no path that can connect them in the graph? Um, so the question was, does independence mean there's no edge between them, or no path between them? And we will talk about that. For independence, you'd need both to be true, but then we're going to cover another form of independence called conditional independence, which is just the latter, yeah.
How is this different from the idea of extending partial assignments? Because you're actually starting with a partial assignment in the graph, and you're kind of exploring or optimizing the rest of the nodes based on that partial assignment. Yes. So the question was, how is this different from extending partial assignments? And that's actually a good point; conditioning might actually be a case of building up partial assignments. I think the main thing to think about is that now we've kind of moved on. Extending partial assignments and modifying existing assignments, that was our old world, where the graph structure was fixed and we couldn't touch it or change it. Now we're in a new world where we're allowed to change the structure of the graph, so we've kind of left that way of thinking. Yeah, yeah. What drives the decision-making process, like setting it to green or setting it to red? Um, what drives the decision process of choosing these values? We will get into that. Yeah, yeah. And how do you know exactly which ones to condition on, so that you end up with an independent result afterwards? Yeah, so the question was, why did we choose to condition on these? In this example it's arbitrary, but soon we're going to cover ways of choosing them. Yeah. How is it different from forward checking? What? How is it different from forward checking? So in forward checking, what we were doing is propagating the decision forward to reduce the domains of existing variables. Whereas in this one, we are changing the structure of the graph itself: we're literally removing variables and inserting new factors. Okay, so I'll move on.
Graphically, in general, what this looks like is: if you have a variable that you want to condition on, you rip it out of the graph, and then everything that touched it turns into some kind of stub, with the value preloaded in there. This is just a picture of the same idea. So I guess you could say this is the big difference: in forward checking, we were keeping the existing graph and just propagating our decisions to reduce domains, whereas now we're literally removing the variable and the factors with it, and then adding new factors that are preloaded. Okay, so now we're going to get to what you were asking about, which is conditional independence. So if you have three variables, say A connected to C connected to B, then you would say that A and B are conditionally independent given C, because once you condition on C, once you pick a value for C, remember, these factors turn into stubs or stumps or whatever you wanna call them. And so A and B are now independent. They used to not be independent, because they could reach each other through C, but now they're independent: there's no edge connecting them, and there's no way to reach each other. And this is just a way of formalizing that; this is how we write it. We condition on C, and then A and B become independent. Yeah. So commonly you'd say every path from A to B goes through C, and that means if you remove C, there's no path connecting them. So in this example, you have SA and Q: if you condition on them, you rip them out of the graph, and then Western and Eastern Australia are now independent of each other, because they've turned into islands. And that's writing it mathematically.
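The "no path once the conditioned variables are removed" criterion is easy to check mechanically. Here's a sketch (the edge list follows the Australia map-coloring example; the BFS formulation is my own, not anything from the lecture):

```python
from collections import deque

def independent_given(edges, a, b, conditioned):
    """True if a and b are disconnected once the conditioned
    variables are removed from the graph."""
    adjacent = {}
    for u, v in edges:
        adjacent.setdefault(u, set()).add(v)
        adjacent.setdefault(v, set()).add(u)
    seen, frontier = {a}, deque([a])
    while frontier:  # BFS from a, refusing to pass through conditioned nodes
        u = frontier.popleft()
        if u == b:
            return False
        for v in adjacent.get(u, ()):
            if v not in seen and v not in conditioned:
                seen.add(v)
                frontier.append(v)
    return True

australia = [("WA", "NT"), ("WA", "SA"), ("NT", "SA"), ("NT", "Q"),
             ("SA", "Q"), ("SA", "NSW"), ("SA", "V"), ("Q", "NSW"),
             ("NSW", "V")]
# WA and NSW are not independent outright, but they are given {SA, Q}.
```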
So there's another notion, the Markov blanket, which is basically saying: "Okay, I've chosen some subset of the graph, in this case V, and I want to make it independent. What are the variables I have to destroy in order to make my subset an island?" In this case it would be SA and NSW: if you condition on those, then V becomes an island and it's independent. If you wanted to make this subset independent, then you would condition on Q and SA. And the set of nodes that you have to condition on is called the Markov blanket of the set of nodes that you want to make independent. So it's like, if you have some part of the graph A and you want to make it independent of a part B, then C is the Markov blanket of A if, when you delete C, A becomes independent of B. This thing here is set notation, a set difference, and this is just a [NOISE] way of writing it more mathematically. [NOISE] So we can use these ideas to create independent structures in our graph. We had this example before where it was almost independent, but not quite. Now what we can do is condition on it. So we condition this to be red, and then we can find the maximum weight assignment of the rest of it, which we showed was easy before, right? With this example, it's just linear: what's the best for you? What's the best for you? So once we condition on the thing that's making it non-independent, the problem becomes very easy. So what we do to solve this is we just condition repeatedly. We say, "Okay, I've picked my node; now for red, green, and blue, solve it." Condition and solve, condition and solve, condition and solve. This is very quick, and then you can read off the maximum weight very easily. [NOISE] You just say, "Oh, it's green, because that was the maximum weight I found after conditioning." Yeah.
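The condition-and-solve loop on a hub-shaped graph like this one can be sketched as follows (a sketch with made-up weights; `edge_factors` maps (hub value, leaf value) pairs to weights, and the brute-force check is just there to confirm the two agree):

```python
def best_weight_star(hub_domain, leaf_domain, edge_factors):
    """Condition on each hub value in turn; each leaf then picks its best
    value independently, so each subproblem is linear instead of exponential."""
    best = 0.0
    for hub_value in hub_domain:
        weight = 1.0
        for factor in edge_factors:  # one binary factor per leaf
            weight *= max(factor[(hub_value, leaf)] for leaf in leaf_domain)
        best = max(best, weight)
    return best

colors = ["red", "green", "blue"]
# Three leaves, each preferring to differ from the hub (made-up weights).
differ = {(h, l): (0.0 if h == l else 2.0) for h in colors for l in colors}
factors = [differ, differ, differ]
# Exhaustive search over every hub/leaf combination, for comparison:
brute = max(
    factors[0][(h, l0)] * factors[1][(h, l1)] * factors[2][(h, l2)]
    for h in colors for l0 in colors for l1 in colors for l2 in colors
)
```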
So the weight there, are some of the weights multiplied together into one? Like the product of the weights? The question was, when we talk about weights, what does that mean? And this is kind of loosey-goosey, it's not very formally defined, but in this example, I would say it's the weight of the whole graph. So you condition on this, and then you find the best values for all of these variables, and then you take the product of all of those unary factors, and that gives you the weight. Is this just an example, or is this actually what the numbers would be, like [inaudible]? This is just an example. Okay. Yeah, the numbers are arbitrary. Okay. Yeah, yeah. So if you are going through R, G, and B, isn't that the same as doing all possible combinations anyway, computationally? Uh, yeah, it is. So you [NOISE] cover every solution that backtracking would give you; this would give you them too. But this is much faster, because what you're doing is you're taking a very complicated problem and you're breaking it into easily solvable pieces. You're taking an exponential problem and breaking it into a linear number of linearly solvable pieces. And in practice it's much faster. So it's more of an implementation distinction rather than a search-space distinction? Uh, yes. Yeah, sort of, yeah. Yeah. So adding on to that, it's essentially that instead of doing all possible assignments for all of the variables, you're just choosing a subset of them to exhaustively search? So the question was, instead of exhaustively searching all variables, you're exhaustively searching just a couple of variables. And, um, yeah, that's an interesting way of thinking about it. I think either way, if you think about it, even when you're conditioning, for every single variable you're eventually going to consider all of the values it can take on.
But you're reordering things in a way that's much smarter, and that lets you take advantage of this independence structure in order to do it better. Yeah. [NOISE] Any more questions on this? Do you think it's faster than- Yeah, so it is faster than backtracking, and you can see it here. For example, if we did backtracking here, what would it be? There are three colors and seven nodes, so it would be this huge exponential blowup. Whereas if we condition here, it becomes much smaller. Okay. So just to summarize: independence is when we have A and B and there's no way to get from A to B. Conditioning is when we take a variable, plug in its value, rip it out of the graph, and preload all the factors that touched it. Yeah. I have a question on the last slide. Okay. Let me finish this first. Okay. Conditional independence is when you have two blocks in your graph, and if you condition on one part of your graph and rip it out, then these two become separated. And then a Markov blanket is saying, "Okay, I want to make my variable an island. What nodes do I have to destroy to make it an island?" Yeah. What was your question? Um, I guess I'm unsure why it's computationally cheaper, because if you condition X_1 on the range of the colors, you still have to come up with three sets of unary factors for each of the other variables [OVERLAPPING] and you iterate over each of their domains [OVERLAPPING], so it also seems exponential? So in this case, if we did backtracking- [LAUGHTER] so if we did backtracking, that would be what? 3 to the 7, right? Is that true? Yes. 3 to the 7? Yeah. [LAUGHTER] And then if we did conditioning, [NOISE] that would be 3 for the first condition, right? And then, every time we condition, we do, um, 7 times 3, right?
6 times 3. All right, 6 times 3, yeah, you're right. 6 times 3. So this is, what? Is it still 6 times 3, or 3 times 3? It depends; I don't know exactly what these factors are saying, this is kind of an arbitrary example. But I think the point here is that this is smaller, right? This is 3 to the 4 or whatever, so it's faster. Yeah. Yeah. Can you explain how you're getting that value of that computation, referring to that condition? Down here? Yeah. Yeah. So this is saying that there are three colors for X_1. And then every time I choose a color for X_1, there are three options for each of the six other variables. So once I've set X_1, it turns into this situation, where now I'm saying, what's the best value for X_2? Okay, I look at three things and choose the best. Now, what's the best value for X_3? I look at three things and choose the best. And so there are now six of these, and for each one I have to consider three different possibilities. Yeah. It would probably be different depending on what the actual factors are and what they're computing, but it's just an arbitrary example. [NOISE] Okay. So now, we're going to do elimination. Elimination is very cool, I think. So conditioning says: I'm going to rip my variable out of the graph, and then I'm going to plug the value I conditioned on into all the neighboring factors. What elimination says is: okay, I'm going to rip my variable out of the graph, but instead of plugging a single value into every factor, I'm going to choose the best value separately for each assignment of the neighboring variables, in order to individually optimize each decision. I think it's best shown through an example.
So again, this is the setup: we have two variables connected by a factor, with these weights. If we condition, we get what we got before. And if we eliminate, well, I'll do the elimination now. So [NOISE] I'm going to redraw this table. We have X_1 and X_2, with rows red-red, red-blue, blue-red, and blue-blue, and weights 1, 7, 3, and 2. Okay. So what elimination does is, instead of conditioning on a fixed value, it internally optimizes the value of the variable we're removing based on the other arguments in that factor. So we're trying to eliminate X_2 right now, and we do that by looking at each value X_1 could take on, and for each value, we dynamically choose the best value for X_2. First, we look at red. X_1 is red; what's the best value for X_2 in this case? It would be blue, so we cross out this row. If X_1 is blue, then X_2 is going to be red, because that gives us the biggest weight, so we cross out that row. And then, just like before, for any value of X_1, the value of X_2 is already set, it's fixed, it's decided. And so that means we can drop this variable from the graph, because we've already internally optimized what its value would be. And so this gives us, again, a new table, where we have X_1, the values it can take on, red and blue, and the weights associated with them, which are now 7 and 3. In math, what that looks like is: where we used to have this binary factor, we've now ripped X_2 out of the graph, and we have this new factor, a unary factor now, which internally optimizes over X_2. It says: give me an X_1, and as soon as you give me an X_1, I'm going to spin through all my values of X_2 and give you the best one, the best match for it.
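The max-out step being done on the board can be written very compactly (a sketch using the same weights, 1, 7, 3, 2; the function name and table encoding are my own):

```python
def eliminate(binary_factor, keep_domain, elim_domain):
    """Replace a binary factor over (keep, elim) with a unary factor over
    keep, maximizing over the eliminated variable's values."""
    return {k: max(binary_factor[(k, e)] for e in elim_domain)
            for k in keep_domain}

# (X1, X2) weights from the board.
f = {("red", "red"): 1, ("red", "blue"): 7,
     ("blue", "red"): 3, ("blue", "blue"): 2}
g = eliminate(f, ["red", "blue"], ["red", "blue"])
# g == {"red": 7, "blue": 3}: X2 is blue when X1 is red, red when X1 is blue
```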
So what this looks like pictorially, I guess, is: every time you remove a variable, you take all the factors that touch it, you rope them all together and merge them into one big factor. And internally, what that factor is doing is optimizing over whatever variable you just removed. So for example, suppose we have this kind of coloring problem, and let's say X_1 is red and X_4 is red; what's the best value of X_3 that we could give? So over here, X_1 is red, X_3 is red, that gives us a value of 4, and then 3 to 4, red to red, gives us a value of 1. So we have 4 times 1. The other value that X_3 can take on would be blue. So we go from red to blue, which gives us a value of 1, and then blue to red, which gives us a weight of 2, and we multiply those together: 1 times 2 is 2. And then internally, this factor is going to maximize over those and choose the value for the deleted variable that internally optimizes this local problem, which in this case is red. Just another example: if we did red, blue. So X_1 is red, X_4 is blue, and there are two options for X_3. First, X_3 could be red: red to red is 4, and then red to blue is 2, so we get 4 times 2. And then, the next option would be X_3 is blue: red to blue is 1, and then blue to blue is 1, and that's 1 times 1. And then we would say, oh well, in this case the best value of X_3 would again be red. So it's internally optimizing. In general, what this looks like is: you rip the node out of the graph, you take all the factors that used to touch it, and you tie them all together into one big factor. [NOISE] And again, that's the mathematical notation.
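Merging the two factors that touch X_3 into one big factor over (X_1, X_4) can be sketched like this (the rows with X_1 red match the numbers worked above; the rows of f13 with X_1 blue are made-up filler, since the lecture doesn't state them):

```python
def eliminate_between(f13, f34, domain):
    """Merge two binary factors sharing X3 into one factor over (X1, X4),
    maximizing the product over X3's values."""
    return {(x1, x4): max(f13[(x1, x3)] * f34[(x3, x4)] for x3 in domain)
            for x1 in domain for x4 in domain}

dom = ["red", "blue"]
f13 = {("red", "red"): 4, ("red", "blue"): 1,    # from the board
       ("blue", "red"): 1, ("blue", "blue"): 1}  # made-up filler rows
f34 = {("red", "red"): 1, ("red", "blue"): 2,
       ("blue", "red"): 2, ("blue", "blue"): 1}
merged = eliminate_between(f13, f34, dom)
# merged[("red", "red")] == max(4*1, 1*2) == 4, with X3 = red
# merged[("red", "blue")] == max(4*2, 1*1) == 8, with X3 = red
```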
There's another way of interpreting this that might be helpful to some people. It's like: I pick my variable, and then I'm going to look at all of the variables in its Markov blanket, and I'm going to repeatedly condition on every assignment that blanket can take on, and for each of those conditionings, find the best value of my selected variable. That's what's going on behind the scenes. Yeah? So if ripping out variables and constructing new factors leads to faster execution times, why don't we just rip out all variables except for one, and then it becomes a single variable with a unary factor? Yeah. So the question was, if it's faster to rip out variables, why don't we just do it for the whole graph? And we totally will. You'll see, it's in another couple of slides, that's the algorithm. So we have this strategy for ripping out variables, and the next step is to use it to solve the CSP, which is what we're about to do. Yeah. Okay. So there's this question, which I think is cool. If we have some kind of star-shaped graph, so say we have a setup [NOISE] like this, with a bunch of factors going into a hub, and we have this hub variable, we'll call it S: do we want to run elimination or conditioning on S? I hear some whispers, oh, yeah. Conditioning. Yeah, conditioning. And the reason is that if we condition, then all of these turn into unary factors, and if we eliminate, then it turns into one giant factor, which is harder to solve. [NOISE] Okay. So like you were saying, this is the algorithm you get out of it. Basically, what you do is you loop through all your variables and eliminate them in turn.
And then at the end of the day, you're going to have one variable to rule them all, and that variable will just hold the best answer. Yeah. [inaudible] to the degree [inaudible], can we end up in a case where, if we don't have a smart way of choosing what to condition on- can we end up in a case where we've conditioned on this one and that one, and it turns out there's no more possibility for- Yeah. [inaudible] the graph or whatever? Yeah. So the question was, wait a second, doesn't ordering still matter? Can't we still end up somewhere that's not good? And that's totally true. Towards the end of this lecture is a discussion of exactly that topic: variable ordering does matter, and it's actually hard; deciding the best variable ordering is an NP-complete problem. But I think I'm going to do one more example: I'm going to run elimination on a whole graph, which I think will be helpful. [NOISE] So our graph has three variables: X, Y, and Z, and they're all connected through a single factor. I'm going to write out the whole table for this. So we have X, Y, and Z, and let's say they each have two values, A or B. So we have A, A, A; A, A, B; I'm going to run out of space; A, B, B; A, B, A; B, A, A; B, A, B; B, B, A; and B, B, B. Is that everything? 1, 2, 3, 4, 5, 6, 7, 8, yeah. Okay. So hopefully this is readable, and they have weights, right? We can just arbitrarily say they're 1, 2, 3, 4, 5, 6, 7, 8. [NOISE] Can people make this out? Seems like. Maybe? So I will go with it. Okay, so what do we do? Let's say we want to eliminate Z first.
So what we do in elimination is we say: for the variables in Z's Markov blanket, we're going to repeatedly condition on the values they can take on, and then dynamically choose the value of Z that best matches each one. So, X and Y could be A and A, and in that case the best value of Z would be B, right? So we cross this row out. Next, X and Y could be A, B, in which case the best value of Z is A, so we cross this one out. It could be B, A, which would give B, or it could be B, B, and the best value of Z in that case would again be B. So again, like we've seen a couple of times before, for any value of X and Y, the value of Z is already decided, it's set, it's precomputed. So we can just drop this variable from our table, which is equivalent to dropping it from our graph. Okay. So now we work on Y. Again, we do the same thing. We say: for each value in Y's Markov blanket, which is just X, we repeatedly condition on all the possible values it can take on, and dynamically choose the best value based on that. So if X is A, now we're comparing these two: if X is A, what's the best value of Y? It's B, so we can cross this out. Now, if X is B, what's the best value of Y? Again B, so we can cross this out. And now, for every value of X, we've decided B, so we can drop this from the graph. And what we have now is one variable in our graph with a unary factor. We have X, and X can be A or B. If X is A, then it has a value of 4, and if X is B, then it has a value of 8. And now we can just say, okay, we're going to choose X to be B, with weight 8. And then, in an implementation, all you'd need to do is look back: you can store some kind of backpointers, or keep your old tables, and you can recover the solution for the whole graph from this end point.
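The whole board example can be replayed in a few lines (a sketch; the single ternary factor and the weights 1 through 8 are exactly the ones written out above, but the function and encoding are my own):

```python
def eliminate_var(table, var_index):
    """Max out the variable at position var_index from a table mapping
    assignment tuples to weights."""
    reduced = {}
    for row, weight in table.items():
        key = row[:var_index] + row[var_index + 1:]
        reduced[key] = max(weight, reduced.get(key, float("-inf")))
    return reduced

# Ternary factor over (X, Y, Z) with the weights from the board.
table = {("A", "A", "A"): 1, ("A", "A", "B"): 2,
         ("A", "B", "B"): 3, ("A", "B", "A"): 4,
         ("B", "A", "A"): 5, ("B", "A", "B"): 6,
         ("B", "B", "A"): 7, ("B", "B", "B"): 8}
over_xy = eliminate_var(table, 2)   # eliminate Z -> factor over (X, Y)
over_x = eliminate_var(over_xy, 1)  # eliminate Y -> factor over (X,)
# over_x == {("A",): 4, ("B",): 8}, so the best assignment sets X = B
```

Recovering the values of Y and Z from here is the backpointer step described next: once X is fixed to B, look back at the earlier tables and read off which Y and Z achieved those maxima.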
[NOISE] So that is variable elimination. Yeah. For the second point, why did we go in the opposite order, from n to 1 instead of 1 to n? Oh, that's basically a mathematical way of saying what I just said about the backpointers. We've gone forward and eliminated, and now we can go backward and say, okay, X is B. Now that we've decided X to be B, we look at our old table and ask, what was the best value of Y that gave us that decision? So once you've gone forward to eliminate, you get your solution, and then you go back and read off the values as you go. Yeah. Okay. [NOISE] So in terms of runtime, basically what you can say is: for each variable you eliminate, the factor you create has on the order of the domain size raised to its arity rows in it, and there are n variables that you have to eliminate. And the reason it's the max arity is that if you had some kind of graph that looks like this, this factor would be D squared, the domain size squared, and this factor would be D cubed. And so as you're going through and eliminating in this graph, the D cubed term is just going to dominate the D squared one. So as you go through and eliminate, the arity of the biggest factor you create bounds your performance. And that's where variable ordering comes in. So, what was it- yeah. So what is the plus 1? Yeah. So the plus 1, I'm not actually sure; I thought about it, I looked around, and I wasn't able to explain it. [A natural reading is that it counts the eliminated variable itself, on top of the arity of the factor it leaves behind.] Okay. But as you're going, order matters, because if you look at this, there are two ways to do it, right?
So if you went from the leaves and eliminated each leaf variable first, then all the factors you create would have arity 1; it'd be super easy. But if you eliminated X_1 first, you'd get this huge giant factor, this big table of arity 6, which would be very slow. So the order really matters. In general, it makes sense to eliminate variables with the fewest neighbors first; it's a pretty sensible heuristic that works well in practice. And we define this term called treewidth: if you have a factor graph and go through elimination, the maximum arity of any factor you create along the way, using the best ordering, is the treewidth of that graph. In general, treewidth is very, very hard to compute, but for a few cases you can reason about it. For example, if you had a chain: if I go through eliminating it, the biggest factor I create is just a unary factor, that little guy, so it has a treewidth of 1. If you had a tree, like our example before, you can again go from the leaves, create only these little unary factors, and get away with 1. If you had a cycle (I'll make my cycle a little bigger), the maximum arity is going to be 2, because even as you work away at the sides, you're eventually going to end up with just two connected nodes before you delete them: you have your cycle, you chew away at both ends, and you end up with just those two nodes remaining. And then if you had an m-by-n grid, say this just goes on, the treewidth is the smaller of the two dimensions.
And the reason for that is that if you go through eliminating these things, like that and like that, you're going to end up in a situation like this, and once you delete this node, you get a factor that ropes the whole row together. Yeah, so that's treewidth: super important, hard to compute. Yeah? You end up with two factors for the last two in an open cycle? Yes. So when you have a cycle and you're going through elimination? Yeah. You end up [OVERLAPPING] Oh, yeah, you're right. It's not the last two, it's the first two. So if this is my cycle, the first factor I create after I delete that node is going to be a binary factor. Yeah, so thank you for that. So just to summarize, we learned different ways of solving CSPs today. Beam search, which is like a souped-up version of backtracking; it doesn't always give you the answer, but it's faster. Local search, where you take entire assignments and improve them a little bit at a time. Conditioning, where you break up the graph into smaller pieces that are easier to solve. And then elimination, which is like conditioning, except instead of choosing one value for all your factors, you dynamically choose the value based on the Markov blanket. And that's CSPs. So we will see you on Wednesday. |
Stanford_CS221_Artificial_Intelligence_Principles_and_Techniques_Autumn_2019 | Factor_Graphs_1_Constraint_Satisfaction_Problems_Stanford_CS221_AI_Autumn_2019.txt | Two countries, Austria and, what's the other one? Hungary. Hungary, right? [LAUGHTER] Because it speaks Hungarian. Yeah. So Austria and Hungary, that could be an answer. Now, the way you thought about this problem was pretty different from normal search problems. You had this constraint in your head that, oh, I have two countries that are right next to each other; that's one constraint. And you were thinking, oh, one of them needs to start with an A. So you have a bunch of constraints when you think about a problem like this, and that lets us use a different type of model, one that can be pretty different from state-based models. This is more of a motivating example; we are going to talk about these types of models. So far, we have talked about reflex-based models and state-based models: we spent some time on search problems, MDPs, and adversarial games. The plan is to talk about variable-based models, specifically constraint satisfaction problems, today and on Wednesday, and then Bayesian networks next week; we'll have three lectures on Bayesian networks. So what's going to happen is, Reed is going to give the second CSP lecture on Wednesday, then Percy is going to be back next week talking about Bayesian networks, and I will do the third lecture on Bayesian networks. So it will be a whole mix of us talking about variable-based models; you'll see all of us. So that's the plan. Okay? All right. So, going back to our paradigm: our paradigm starts with modeling.
So how do we model various types of problems, and then how do we develop inference algorithms that answer the questions we care about, the objectives we care about, based on those models? And we have been talking about learning a little bit: if you have these models and they are not fully specified, how do we go about learning them? So here is a review of what we have talked about so far. In terms of modeling, we talked about various frameworks like search problems, MDPs, and games, and we had different objectives: things like the minimum cost path for search problems, or maximizing the value of policies for MDPs and games. In terms of inference, we discussed tree-based algorithms and graph-based algorithms. If you remember, backtracking search was the simplest, most naive thing we tried for search problems; for games, we looked at minimax and expectimax, which also go down a tree. And then you have more graph-based algorithms, where you're looking at a recurrence relationship; examples of that are dynamic programming, uniform cost search, and A-star. For MDPs and games, we looked at value and policy iteration. And in terms of learning, we discussed a few methods for each of these frameworks: structured perceptron, Q-learning, TD learning. So these are some of the topics we have talked about so far. If you're midway through the quarter, these are all the cool things we have learned, and they are all for state-based models. Okay? So state-based models were kind of cool, and we had a couple of takeaways from them. So let's just summarize the two main takeaways from state-based models.
One of the key takeaways was that when we're modeling with state-based models, our model specifies local interactions and local relationships between the states. So for example, if I wanted to go from S to my neighboring state A, I would think about what the cost of going from S to A would be; I had this local relationship between them. And the goal of inference was more like a global property: can I find the shortest path from some state to some other state in this whole graph? So the idea was: let's model and specify these local relationships, and then do inference where we find globally optimal solutions. That was the whole idea of state-based models. And the thing that made them powerful was the concept of a state. So let's just summarize what a state was: a state is a summary of all past actions that is sufficient to choose future actions optimally. That's how we defined states. Okay? And once we had the notion of a state, our mindset was: I'm going to move through these states via actions. So I have states, which you can think of as nodes here, and actions, which you can think of as the edges in this graph. And the question is, how do I go from one state to another state, and what sequence of actions should I take? So if you think about a policy, we were talking about a sequence of actions, and the sequence actually mattered, right? I would take this action and then another action; say the goal is for me to go from here to the door, then I would have a [NOISE] sequence of actions that need to happen one after another for me to achieve the task. Okay?
So the type of problems we want to talk about today have a little more structure: they don't really care about ordering, and that's kind of the key difference. So when I asked that very first question, pick two countries where the name of one starts with A, the other speaks Hungarian, and they're right next to each other, then with all those constraints, all the things you need to satisfy, you don't really need to follow a specific order. They're a bunch of constraints, you need to satisfy all of them, and it really doesn't matter where you start. And that's the idea of a variable-based model. We're going to go through one example throughout the lecture: a map coloring example. So the idea is, let's say we have a map of Australia, and Australia has seven provinces; these are all the provinces here. And what we want to do is color this map. So the question is: how can we color each of these seven provinces with three colors, red, green, and blue, so that no two neighboring provinces have the same color? Okay, that's the task we want to do. And then the key idea again here is that the order of things doesn't matter, right? I can pick any of them, [NOISE] pick a color, and just go from there. It matters on the algorithm side of things, but in terms of the model, it doesn't matter. So here, for example, is one possible solution: the map of Australia with different colors, red, green, and blue, for different parts of it, and no two neighboring provinces have the same color. So this is one possible solution, and we can have other solutions too.
And our goal is to find these types of solutions. Right? All right. So I can think of this as a search problem; I can perfectly well think of this as a search problem. Let's say it starts with a partial solution, and my partial solution is that I've somehow decided to choose WA, V, and T (I'm just going to refer to these provinces by their first letters) and make them red. And now I want to figure out what colors to use for the rest of the provinces. Okay. So I can just go down a search tree. My state here is this partial assignment, and I can go down the search tree and choose Queensland as my next province, and I'm going to color it red. If I color it red, everything looks good; everything is great. So now I'm looking at the Northern Territory, NT. I'm going to pick a color; let's say I just color it green. So [NOISE] I color it green. Then if you look at SA, I only have one option for it, right? Because I've already used red and I've already used green, and SA is connected to all of these reds and greens, the only color I can pick for SA is blue. So that's all I have. Then how about NSW? That has to be green, right? Because I've already picked blue right there, so it has to be green. And here is one solution: I just went down a search tree and picked a solution to this problem. I could have picked some other solution; that decision I made over there to make NT green was kind of arbitrary, right? I can just pick blue there instead. So let me just pick blue there, and then I have another solution, a perfectly fine solution, and my map works out. How about I choose a different color for Queensland? I decided to make it red; maybe I want to make it blue. So if I make it blue, then NT has to be green, because that's the only option I can have.
And then when I get to NSW, I don't really have any options for it, right? I have no colors for NSW that would work, because green is taken, red is taken, blue is taken, and NSW is connected to all three of these. That's not really going to work. How about I choose Queensland to be green? Same story: NT has to be blue, and then NSW, I don't really have a solution for it. Okay. So that was going through this example treating it as a search problem: I have states that represent partial assignments, and I pick actions, and the actions just give a coloring to some next variable. Okay? So the state is a partial assignment of colors to provinces, and the action I take is to assign the next uncolored province a compatible color. I can perfectly well think of this problem with state-based models, using this particular state and action. But the thing is, there is more structure to the problem, and the structure in this particular case comes from the fact that, again, ordering doesn't matter. So variable ordering does not really affect correctness here. It's just a bunch of constraints; it doesn't matter in what order I satisfy them. And in addition to that, the variables are interdependent only in a local way. So for example, if I just look at Tasmania right here, it's not connected to anything, so I can just pick whatever color I want for it, and it doesn't affect the rest of my problem. I don't really need an order that picks T first or picks T last, right? I can just pick a color for T, and it doesn't affect the rest of the system. Okay. So the idea of variable-based models is: let's make our models simpler than state-based models.
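The search-tree walkthrough above, including the dead ends, can be sketched as a small backtracking search over partial assignments; the adjacency dictionary is transcribed from the lecture's Australia map:

```python
# Backtracking over partial assignments for the Australia map-coloring
# example. Adjacency transcribed from the map used in the lecture.
ADJACENT = {
    "WA": {"NT", "SA"}, "NT": {"WA", "SA", "Q"},
    "SA": {"WA", "NT", "Q", "NSW", "V"}, "Q": {"NT", "SA", "NSW"},
    "NSW": {"Q", "SA", "V"}, "V": {"SA", "NSW"}, "T": set(),
}
COLORS = ["red", "green", "blue"]

def color(assignment, remaining):
    """Extend a partial assignment one province at a time."""
    if not remaining:
        return assignment
    province, rest = remaining[0], remaining[1:]
    for c in COLORS:
        # Compatible means no already-colored neighbor has the same color.
        if all(assignment.get(n) != c for n in ADJACENT[province]):
            result = color({**assignment, province: c}, rest)
            if result is not None:
                return result
    return None  # dead end, like the NSW case above: backtrack

solution = color({}, list(ADJACENT))
```

Note that Tasmania has an empty neighbor set, so any of the three colors works for it, which is exactly the locality point made above.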
Let's not try to figure out what state thing is sufficient for us to make decisions and pick actions sequentially; let's instead have an easier language to represent the model of a problem that looks like this. So the idea is to come up with a new framework. And in this new framework, we're going to have variables, as opposed to states, and we're going to have assignments to these variables. So the whole job of modeling is to figure out what the variables are and what sort of assignment we're picking for those variables. And the decision of what order to color things in, or what value to pick for each province, what order of values and variables to pick: I can push all of that to inference. Okay? So it's not going anywhere; I'm just pushing it to inference. Another analogy here: you can think that you have a difficult problem, and you could have an ad hoc way of going about solving it; the analogy in programming languages would be solving it in assembly language. With state-based models, you come up with the idea of a state; you're doing something more general, and you're doing a lot of work. Why? Because you have a higher level of abstraction, so using state-based models is analogous to programming in C. You're moving the level of abstraction up. And using variable-based models moves the level of abstraction even a little bit higher; it's like programming in Python. Sure, you can do the exact same thing in C too, but now you have this higher level of abstraction to think about problems, and that makes your model much simpler.
And the order of things becomes the problem of inference. Okay. All right. Is everyone happy with why we want to do variable-based rather than state-based modeling? All right. So I've kind of motivated this, but I haven't really said what it is or how we go about solving it. So what I want to do for the rest of the class is start formalizing variable-based models through this idea called factor graphs. And after that, I want to talk a little bit about inference for variable-based models; specifically, I'm going to talk about dynamic ordering and arc consistency as heuristics that allow us to solve these models. And then towards the end, I just want to show you a couple of other examples of why variable-based models are so powerful and where they come in, just to give you some other examples to look at, okay? All right, so that is the plan for today. So let's start with a simpler example. Let's say that I have three people; maybe I can draw that here. So I have three people, Person 1, Person 2, Person 3, and each of them is going to choose a color, either red or blue. That's what they're going to do: red or blue, red or blue, okay? And each of them has a set of constraints. So the idea is, maybe the first person really wants to pick blue. Maybe the third person prefers to pick red, but not as strongly, so he just prefers red. And maybe we want to make sure that the first person and second person pick the same thing. And maybe for the second person and the third person, we prefer that they pick the same color. Okay? So these are a set of constraints, almost, that I'm putting on this example.
And the way we can think about the constraints I've just laid down on this picture is through this idea of a factor graph. So a factor graph is going to have a set of variables; this is the analog of states in state-based models. It's going to have a bunch of variables. I'm going to have three variables, because each of these three people is going to pick a color. So I have variables X_1, X_2, and X_3, okay? Let me actually write down some of these; we're going to go over a bunch of definitions for the first part of the class at least. So we're going to talk about factor graphs. Factor graphs have some number of variables, and we represent variables with capital letters, like capital X. So in this particular example, the variables I have are X_1, X_2, and X_3. Okay. And each of these variables is going to live in some domain: each is either going to be red or blue, right? So we say X_i lives in some domain D_i. In this particular example, the domain is just {red, blue}, so each of these X_i's is going to be either red or blue. And if I pick a value for each, if I come in and say, well, this person picked red and this person picked blue and this person picked red, then I'm giving an assignment. So that's called an assignment. [NOISE] An assignment I'm going to write with a small x: the capital letter means the actual variable, and the small letter means the value it takes, like red or blue. So the assignment tells me what X_1 took, what X_2 took, and what X_3 took; maybe for this particular example we're talking about red, blue, and red, okay? All right. So that was variables: they live in a domain, and then we can pick an assignment, okay?
So now I have all these constraints, and I can write those constraints as things called factors. These factors are going to be functions that tell me how happy I would be if X_1 takes value red or value blue. They're functions; in this case f_1 is a function of X_1. So I'm going to write: a factor graph needs factors, and these factors are f_j's. There are some number of them; there might be a lot of them. In the most general form, each f_j is a function f_j(x) of an assignment x, and these f_j's have to be greater than or equal to zero, okay? So they're kind of telling me how happy I would be, right? So here I would have f_1(X_1). If I really want this first person to pick blue, what would be a good factor to put here? What should I say for f_1(X_1)? I can write it as an indicator function making sure that X_1 definitely takes blue; maybe I can write it like that, okay? So if it's an indicator function, what does it tell me? If X_1 actually takes blue, then the value of this factor is going to be 1. If X_1 takes red, the value is going to be 0. So I'm treating 0 as this thing that I don't want, and anything above 0 as something that I actually want to get, okay? So I'm going to have another constraint. This constraint is going to be f_2. It's a function of X_1 and X_2; that's why it's connected to both of them. I'm going to draw these squares to show where the factors are. The circles are my variables, and the squares are my factors, these functions that tell me what the constraints are, what the things are that I need to satisfy. So f_2 is going to somehow encode that they need to pick the same thing; again, it can be an indicator function making sure these two are equal to each other. And maybe I'll have f_3, which is going to be a function of X_2 and X_3.
[NOISE] This one ensures that they sometimes pick the same thing, and "sometimes" here means that we could use an indicator function, but maybe if they don't pick the same thing, you wouldn't be too sad. So maybe you don't put a 0 for that; it would be an indicator function plus some constant. That's one way of going about it. And then X_3 prefers red, so it's going to have a factor that says it prefers red, okay? All right. So let's look at the same thing on the slide. So that's a factor graph, and I can actually look at the values of the factors. For f_1(X_1), maybe what I want is for it to be equal to 1 if X_1 picks blue, and equal to 0 if X_1 picks red. For the case where the two have to agree, I can define it as an indicator function: if they are not equal to each other, I get 0, and I'm going to be very unhappy; if they are equal to each other, I get 1, so I'd be happy. And then for the case where X_2 and X_3 should preferably be equal to each other, maybe we can do something like an indicator function plus 2. This means that if they don't pick the same thing, I'll still be okay, but if they pick exactly the same thing, I'm going to be even happier, because I get 3. And then for the last one, similarly, I prefer the last person to pick red, so I'm going to give a value of 2 to red and 1 to blue, okay? So these are my factors. Question? Does the factor value matter, or is the only thing that matters whether it's equal to 0 or not? Good question. In general, it does matter what value you pick. But we're soon going to be talking about a specific case of factor graphs where 0 versus 1 is the only thing that matters, so I'm not focusing too much on the exact values.
It's just that if you get zero, that's pretty bad, and if you get non-zero, that's good. So I'm treating them like that, because soon we are going to talk about CSPs, constraint satisfaction problems, which are just factor graphs where you only have 0's and 1's, nothing above them, okay? All right. So let's try to actually write this up. So here's this environment that you can play with if you want. Is this visible? Yeah. All right. So here you can define variables. I have variable X1, which can take value red or blue; X2 and X3 are the same, they can take values red or blue. I have four factors, so I'm going to write out what those factors are. Factor f1 depends on X1; it's a function, and it's going to return the result of this indicator. And similarly, I'm going to define the second factor as a function of X1 and X2 that returns the value of its indicator, and so on for all the other factors. And on the right, you can see these factors being generated. We're going to look at this environment even more next time, when we talk about fancier inference algorithms, but for now, let's move on to defining our factor graphs. All right. So what is a factor graph? More formally, a factor graph has a set of variables X1 through Xn, and each of these variables Xi lies in some domain; in this case, {red, blue} was our domain. And then the factors are going to be f1 through fm; let's say we have m of them. And each of these factors is just a function of x that is greater than or equal to 0, okay? So that's a factor graph. It tells us what the things are that we really want. So let's look at one example here. In this particular map coloring example, the variables are the provinces that we have; we have seven of them. The domain is going to be {red, green, blue}; those are the colors we can pick. And then the factors:
Well, the factors here are just going to tell us: don't pick the same color for two provinces that are neighbors. So I'm going to have indicator factors ensuring that we don't give the same value to two neighboring territories, factors that basically connect every pair of neighboring territories. And again, each square here corresponds to one of these functions. Question? Is the domain the same for all the variables? Not necessarily; it depends on the problem. Also, we are going to talk about how to reduce the domain as we go. That's another reason I'm emphasizing the domain: when we get to the inference algorithms, the domain is not going to stay the same throughout. If I pick red for WA, for example, then NT is not going to have red in its domain anymore. So the reason I keep bringing up the domain is that we're going to look at how to update the domain in the inference algorithm, okay? All right. So this is a factor graph. Um, [NOISE] let's define a few more things just so we have a common language to talk about things. So we're going to define scope: the scope of a factor is the set of variables it depends on. It's really simple. So I'm going to write scope here: it's just the set of variables a factor depends on. For example, f_2 depends on two variables, X_1 and X_2, so the scope of f_2 is just {X_1, X_2}. In this other case, when we looked at the map coloring example, if f_1 is the factor that tells us WA and NT should not have the same color, the two variables that are used are WA and NT, okay? So that's the scope. Now that we have the scope, we can define something else called arity, which is the number of variables in the scope. For each of these squares: how many edges come out of it? That's arity.
So in this case, this particular square depends on two variables, so its arity is two. I can have a setting where a factor depends on three variables, so its arity is three, or a factor that depends on only one variable, so its arity is one. And if the arity is two, we call it a binary factor; if the arity is one, we call it a unary factor. Just common language. [NOISE] So we have arity, which is the number of variables in the scope, and then we have unary factors, when the arity is one, and binary factors, when the arity is two. It's just defining things. So for example, in the case of map coloring, f_1 is a binary factor; in fact, all our factors in map coloring were binary, if you look at it. All right, let me go back to that. Here it is: I have a bunch of factors that just say these two variables should not be equal to each other. So I have a bunch of binary factors, and that's pretty much all I have, okay? In this other case, I have a binary factor, a binary factor, a unary factor, and a unary factor. All right. Okay. So far so good. So we talked about assignments, right? An assignment is a setting where we give actual values to these variables. And an assignment can have a weight that tells us how good that assignment is. So remember, a factor tells us how good a particular local setting is, like how happy I would be if X_2 takes one value and X_3 takes another; a weight tells me how happy I would be with the full assignment. And what it's going to be, in this case, is just the product of my factors. So I'm going to define the weight of an assignment x, maybe I'll just write it in front of here, and the way I'm writing it is just as a product of the f_j's, for j from 1 through m.
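One way to sketch the scope and arity bookkeeping just defined is to store each factor together with its scope, so arity and the unary/binary labels fall out directly; the factor definitions here are the three-person example's, with illustrative names:

```python
# Each factor stored as (scope, function); scopes are tuples of variable names.
factors = {
    "f1": (("X1",),      lambda x1: 1 if x1 == "blue" else 0),
    "f2": (("X1", "X2"), lambda x1, x2: 1 if x1 == x2 else 0),
    "f3": (("X2", "X3"), lambda x2, x3: (1 if x2 == x3 else 0) + 2),
    "f4": (("X3",),      lambda x3: 2 if x3 == "red" else 1),
}

def scope(name):
    return factors[name][0]          # the set of variables the factor depends on

def arity(name):
    return len(scope(name))          # number of variables in the scope

def kind(name):
    # Unary when arity is one, binary when arity is two.
    return {1: "unary", 2: "binary"}.get(arity(name), f"{arity(name)}-ary")
```

So here f1 and f4 are unary factors, while f2 and f3 are binary, matching the graph drawn on the board.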
So I have m factors, and the weight of an assignment x is the product over j from 1 through m of f_j(x). Okay? For this particular example, we looked at the tables, and each of these tables represents one factor. But now, if I talk about a full assignment, then I'm looking at what happens when X_1, X_2, and X_3 take all possible values; I have eight possible options here. And then I'm looking at the weight as the product of all of these factors multiplied by each other. So remember, I was saying that 0 is the thing I really don't want to have. A 0 is a super hard constraint I'm trying to enforce, and it makes my weight equal to 0. So if X_1 ever picks red, that violates a hard constraint: we really wanted the first person to pick blue, so if the first person picks red, then the weight is going to be 0. The other thing we really wanted was for the first and second person to pick exactly the same color; if they pick different colors, then my second factor is going to be 0, and the weight is 0. Otherwise, I would have various nonzero weights, and maybe the thing I care about is maximizing the weight, so I'll pick the assignment with value 4. Okay? So going back to this demo environment we were just looking at: we've defined our factor graph, and we can actually step through it, and you can play with this, but you basically get these [NOISE] different assignments that give you non-zero weights, and you can pick your favorite. We're going to talk about various types of algorithms that allow you to compute these weights. Okay? All right. So the weight of an assignment x is just the product of the factors evaluated at that assignment. Okay? And then our objective is to maximize the weight of the assignment.
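The eight-row weight table above can be reproduced by brute force; this sketch uses the factor values from the slide (person 1 wants blue, 1 and 2 must agree, 2 and 3 prefer to agree, person 3 prefers red):

```python
from itertools import product

DOMAIN = ["red", "blue"]

# The four factors from the three-person example, values as on the slide.
def f1(x1): return 1 if x1 == "blue" else 0        # person 1 wants blue
def f2(x1, x2): return 1 if x1 == x2 else 0        # 1 and 2 must agree
def f3(x2, x3): return (1 if x2 == x3 else 0) + 2  # 2 and 3 prefer agreeing
def f4(x3): return 2 if x3 == "red" else 1         # person 3 prefers red

def weight(x1, x2, x3):
    # Weight of a full assignment = product of all the factors.
    return f1(x1) * f2(x1, x2) * f3(x2, x3) * f4(x3)

# Brute force over all 2^3 assignments and take the argmax.
best = max(product(DOMAIN, repeat=3), key=lambda x: weight(*x))
# best == ("blue", "blue", "red"), with weight 4
```

Any assignment with x1 red, or with x1 and x2 disagreeing, gets weight 0 from a hard constraint; the maximum is the value-4 assignment mentioned above.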
So I- I want- what I want to find is, at the end of the day, what I wanna do is I want to find an assignment. So I wanna find that small x that maximizes the weight of, er, of that particular x. Okay? All right. So going back to the map coloring example. So here, um, let's say that we defined all these indicator factors. So if it is an indicator factor, I'm either going to get 0 or 1, I'm not gonna get anything other than that. Then if I have this particular assignment which kinda looks right, then the weight of that assignment is just going to be a bunch of 1 multiplied out by each other, so I'm just gonna get 1. Okay? So- so if I find a solution to this map coloring problem, the weight of that- that particular assignment is going to be 1. I could have another assignment where I don't get, uh, a good solution. I had two of- two of these- these neighboring territories are going to be have the same color if they're both going to be red. Then in that case, two of my factors are going to be 0. If they are going to be 0, the weight is going to be equal to 0. So for this particular map coloring example, where my factors are just indicators, the only weights I can get are 0 or 1. I can either get 0 or I can get 1. If I get one, I find a solution. If I don't get 1, I don't find a solution. Okay? All right. So we have been talking about factor graphs, they're these more general things. Now, we're going to start talking about CSPs, constraint satisfaction problems, which are just factor graphs where all factors are called constraints, and the factors are going to take value 0 or 1. And the constraint is satisfied if the factor takes value 1. [NOISE] So we talked about factor graphs. We're going to talk now about constraint. I'm just gonna write CSP, constraint satisfaction problem, CSPs. Okay? They also have the same variables as before. And we're gonna pick assignments for them. So same thing, I'm gonna- assignments. But the factors are going to be called constraints. 
[NOISE] And these factors fj's of x are either 0 or 1, they're not anything else. Okay? And if you find an assignment where your weight is equal to one, then that means that you are satisfying all your factors, and that's called the consistent assignment. So you- we have consistency, consistent assignment, assign- I'm gonna write assignment. Um, that is when the weight is equal to 1. If the weight is equal to 0, then we have an inconsistent assignment. So- so it's either 0 or 1. We have consistent assignments or inconsistent assignments. [NOISE] Okay? So an assignment x is consistent if and only if the weight of that particular assignment is 1. That means, all the constraints are satisfied, because constraints are just give me 1 and 0. I'm multiplying 1 and 0. If anything is not satisfied, then the thing is 0, okay? All right. So, so far, summary so far is we have just gone over a bunch of definitions. Factor graph is the more general case of it. Constraint satisfaction problems is more of an all or nothing kind of a situation. So you have hard constraints, everything is a hard constraint. And then you have- so, um, so for example, if you think of map coloring, you can think of that as- as a constraint satisfaction problem because everything is a hard constraint, right? Like you- you don't want any two neighboring countries to have the same color. So you're either going to give 1 if- if that constraint is satisfied, or you're going to give 0 if that's not satisfied. You still have variables. Factors are called constraints. Assignment weight. If that is equal to 1, we have consistent assignment. Otherwise, we have an inconsistent assignment. Can we just think of the CSP as a- Constrained factor graph is that the idea? It's- it's a more constrained factor graph. Yeah, it is- factor graph is this big picture of CSP is an instance of factor graph. All right. So that was factor graphs and constraint satisfaction problems. 
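The CSP special case can be sketched the same way: every factor returns 0 or 1, and consistency is just "weight equals 1." The constraints below are illustrative map-coloring-style ones, not taken from the slide:

```python
# A constraint is a factor that returns only 0 or 1. An assignment is
# consistent iff the weight -- the product of all constraints -- is 1.
constraints = [
    (("WA", "NT"), lambda a: 1 if a["WA"] != a["NT"] else 0),
    (("WA", "SA"), lambda a: 1 if a["WA"] != a["SA"] else 0),
    (("NT", "SA"), lambda a: 1 if a["NT"] != a["SA"] else 0),
]

def is_consistent(assignment):
    w = 1
    for scope, f in constraints:
        w *= f(assignment)
    return w == 1   # any single violated constraint zeroes out the product
```

So `is_consistent({"WA": "red", "NT": "green", "SA": "blue"})` holds, while giving WA and NT the same color makes the assignment inconsistent.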
So, so let's talk about how we go about solving these. So how should we find an assignment? Our goal is to find an assignment. So we have consistency, right? Because if you are talking about CSPs, we wanna get weight 1, that means you wanna have an assignment that's consistent and makes all my factors 1. So how do I pick that? Okay. So let's look at an example. Let's just see how we would do it normally, like if you wanted to solve this. Like, if I was solving this I would pick one of these nodes, or variables, I would pick WA, maybe I would say well, let's just pick red, just to see how that goes. And then I would go to a neighboring node like NT. And I do have a constraint. The constraint is WA and NT should not be equal to each other. So the only thing that tells me is that NT should not be red. So I'm just gonna pick some color, let's just pick green. So then I'm gonna go to some other neighboring node, so that's SA. I have two constraints. The two constraints are SA should not be equal to WA and should not be equal to NT, so it shouldn't be red and it shouldn't be green. The only option I have is blue, so I'm gonna set that equal to blue. Then I'm gonna go to Q; the only option I have for Q is red, because green and blue are already taken. Then I'm gonna go to NSW; the only option I have there is green. Then I go to V, and again the only option I have is red. And then I can pick whatever color I want for T because that's kind of a random node out there. Okay. So this is a thing that we would probably do if we were to do this, right? We would go over these nodes in some order and we would pick colors in some order, and we know that order is important.
But, but the way we would do it is just, just pick some order and maybe we'd have some heuristic that picks your order and picks the values and tries to make the constraints satisfied. So, so what we wanna do is we actually wanna spend [NOISE] a little bit of time, uh, talking about doing that, and go- and having actually heuristics that, that tells us what order we should use, we should use for the variables, and what order of values we should pick. So, so we're gonna talk about a few heuristics mainly this time. So, so, so to do that, um, we need to define one more thing, it's the last thing I'm gonna define, and then after that talk about the algorithm. So, so we're gonna define dependent factors. So dependent- so the partial assignment is going to be partially assigning values to variables in this, in this, um, CSP, right? So, so a partial assignment, for example here, could be that WA needs to be red and NT needs to be green. That's a partial assignment, okay? Then I can define dependent factors to be a function of partial assignments and a new variable X_i. So let me depend- let me just write that somewhere, maybe I'll write it, a different color because it's. So we have, um, we have something else called dependent factors, it's D of x and X_i, where x is partial assignment and X_i is a new variable I'm picking. And dependent factors is going to return a set of factors. It- it's going to return a set of factors that, that depend on x and X_i. So, so for example, in this particular case, this we said, this is a partial assignment. Let's say I'm asking what are the dependent factors of this partial assignment and SA? So I'm picking a new variable, I'm picking SA, and I'm saying, what are, what are the dependent factors? And then these are going to be the factors that depend on this new thing SA and depend on the partial- partial assignment. So it's going to be this factor and this factor, right? 
I'm going to pick the factor that says WA is not equal to SA, and I'm gonna pick a factor that tells me NT is not equal to SA, okay? [NOISE] And that kind of like the idea of dependent factors is that it allows me to, to think about the next thing I should- next things I should be worrying about. So, so if you remember like tree search algorithm, if you would look at children of, of some note. Here we are going to look at dependent factors, because, because those are the factors, the next factor is we should, we should care about, that's why I'm defining these dependent factors. Okay. All right. So, so now this is the algorithm. Kinda I want to write it up on the board because it would be good to have it, [NOISE] but it is a little bit of a long pseudo-code. So, all right. So the algorithm we're gonna talk about right now is, is just backtracking search. It's not doing anything fancy. We're gonna talk about fancier things next time. But, um, you have backtracking search. It does the thing that you expect it to do. So it takes some partial assignment x, it takes the weights that we have so far, and it takes the domains, domains of, of those variables that I have so far. Okay. So if x is a complete assignment, if you have found a complete assignment, then we are going to update the best thing we have and we would return, or we would do whatever you're supposed to do for the problem, right? Like we might have different types of problems here, like maybe the question is find one assignment. If I find one complete assignment, I can, I can just return. Maybe I'm looking for another question which, which tells me count all possible assignments that you can have. So, so if I'm counting assignments then I'm just going to update my counter and try to find the next assignment. So depending on what the question is I might want to do different things when I find my, my complete assignment. But let's say I find my complete assignment, then I update and I'm happy. Okay. 
Then [NOISE] um, I feel like then we're going to choose an unassigned variable, so I, I should have written this. So if x is complete, then let's say we are happy. Then we are gonna choose, um, an unassigned variable, unassigned, so choose a variable, chosen an unassigned variable X_i. And well, how do I do that? I'm gonna talk about a heuristic to do it. Um, so, so we'll talk about that, but let's say I have some way of figuring out what is the next variable I'm picking. And then after you pick the variable, you're gonna pick some value for it, right? The map coloring. You're gonna pick a province and you're gonna say red. So how, how do you know it's red? Like how do you know the val- the next value you need to pick is red? Well, that comes from another heuristic, uh, which says order, values, and domain. So values would be red, blue, green. So those are my values, right? So ordered the values that are in domain I, um, [NOISE] I've chosen X_i. So, so you picked up the next i, maybe the only colors that you can use right now are red and blue. So, so then you are going to order red and blue using some heuristic that I haven't talked about yet. But maybe some heuristic says, you should use red first and then you- you'd use red first, you'd order it in that domain- in, in that order. And then for each of these values in this order, so for each v in this order, that you've decided, you're gonna update your weight, or you're gonna have this Delta weight value. And this Delta weight value is going to be product of your factors, okay? [NOISE] And these factors are factors of your partial assignment whatever you've decided so far, maybe you, you have assigned two colors for two territories already and you're looking at the third one. So it's going to be the partial assignment union whatever value you are looking at for this new X_i that you're trying to pick, maybe a color for. Okay. And, and what are these f_j's that you are looking at? 
Well, these f_j's are going to be the f_j's that are in the dependent factors of the partial assignment and your variable; that's why we defined dependent factors. Because these are the factors that we care about. I'm not gonna look at Tasmania if I'm not looking at that part of the graph; I'm just gonna look at the things that depend on my current partial assignment and my X_i. Okay. If Delta is equal to 0, continue. This means that the particular value you have picked just made everything 0; it didn't work. So you should try other things. The other thing you're gonna do, if this value works, is you're gonna update your domain, so we're gonna talk about how to do that. That's the thing that's going to save you time. Because like you have now found out that you only need to care about colors red and blue, and you don't need to worry about green. So that's updating the domain, making sure that you don't need to worry about all the colors. And then after that, you're just going to backtrack on this new thing. And this new thing is x union the value v you've picked for X_i. So this is your new assignment; you have extended your assignment by value v. Your weight is going to be whatever weight you started with times Delta, and then you've updated your domain, so you're just gonna use domain prime, okay? Domains prime. So this is the domains of everyone, like the domains of all the other nodes. All right. So we're gonna talk about this a little bit more, but this is the basic of the algorithm. Okay. So I'm gonna first talk a little bit about updating the domain. So how do we update the domain?
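The backtracking search walked through above might look roughly like this in code. This is my own rendering, not the lecture's code: factors are (scope, function) pairs, and variables are tried in a fixed left-to-right order with no domain pruning, since the ordering heuristics and domain updates come next:

```python
def dependent_factors(factors, partial, Xi):
    """D(x, Xi): the factors that mention Xi and whose remaining scope
    is already covered by the partial assignment."""
    return [
        (scope, f) for scope, f in factors
        if Xi in scope and all(v == Xi or v in partial for v in scope)
    ]

def backtrack(partial, weight, domains, variables, factors, best):
    if len(partial) == len(variables):           # complete assignment
        if weight > best[0]:
            best[0], best[1] = weight, dict(partial)
        return
    Xi = next(v for v in variables if v not in partial)  # pick unassigned var
    for v in domains[Xi]:                                # try each value
        delta = 1
        for scope, f in dependent_factors(factors, partial, Xi):
            delta *= f({**partial, Xi: v})
        if delta == 0:
            continue                   # this value zeroes the weight: skip
        backtrack({**partial, Xi: v}, weight * delta,
                  domains, variables, factors, best)

# Tiny map-coloring-style example: three regions, adjacent ones must differ.
variables = ["WA", "NT", "SA"]
domains = {v: ["red", "green", "blue"] for v in variables}
factors = [
    (("WA", "NT"), lambda a: 1 if a["WA"] != a["NT"] else 0),
    (("WA", "SA"), lambda a: 1 if a["WA"] != a["SA"] else 0),
    (("NT", "SA"), lambda a: 1 if a["NT"] != a["SA"] else 0),
]
best = [0, None]   # [best weight found so far, best assignment]
backtrack({}, 1, domains, variables, factors, best)
```

After the call, `best` holds a weight-1 coloring; the `delta == 0` check is the early pruning the lecture describes, since only dependent factors can change when X_i is assigned.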
So one very simple way of updating the domain is this thing that's called forward checking, which says well, if you pick a color, so let's say that you pick WA to be red, then just look at the neighbors of WA and see if you can update their domains. So this is the simplest thing I can do, right? Like I've picked WA, I've decided WA is red. So the thing that I'm gonna do is I'm just gonna look at the neighbors, and the neighbors are NT and SA. They cannot be red, so I'm gonna just update their domains to be blue and green; I just drop red. Yeah. So that's like the simplest thing we would do, so maybe I'll write it in a different color. So one option is this forward checking approach for updating the domain. Okay. So let's go further. So maybe now I'm at NT. I'm deciding NT to be green. If I'm deciding NT to be green, I'm gonna look at the neighbors of NT. So I'm gonna look at SA and Q; they cannot be green anymore. So I'm gonna drop green. Okay. Then I'm gonna look at Q, and I'm going to pick blue for Q, because I want to pick blue for Q. And then I'm gonna look at the neighbors, and my neighbor SA does not have anything in its domain. So I realize at this point that this particular assignment is inconsistent. I don't need to worry about the rest of the nodes and what I'm picking for the rest of the nodes; it's kinda equivalent to pruning, like I don't need to worry about anything else, because I've just found out that this assignment does not work. Okay. So that's kinda the whole idea of updating the domain. So forward checking is the idea of doing one-step lookahead. So after assigning a variable X_i, you wanna eliminate inconsistent values from the domains of X_i's neighbors. So you want to reduce the domains of X_i's neighbors, and if any domain becomes empty, then you don't recurse on that.
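Forward checking as just described can be sketched like this; the helper name and the reduced neighbor graph (only four of the regions) are mine:

```python
import copy

def forward_check(domains, var, value, neighbors, ok):
    """One-step lookahead: after assigning `value` to `var`, prune
    inconsistent values from each neighbor's domain. Returns the new
    domains, or None if some neighbor's domain becomes empty."""
    new = copy.deepcopy(domains)
    new[var] = [value]
    for nb in neighbors[var]:
        new[nb] = [v for v in new[nb] if ok(value, v)]
        if not new[nb]:
            return None   # empty domain: this branch cannot succeed
    return new

# The lecture's sequence: WA = red, then NT = green, then Q = blue.
neighbors = {"WA": ["NT", "SA"], "NT": ["WA", "SA", "Q"],
             "SA": ["WA", "NT", "Q"], "Q": ["NT", "SA"]}
domains = {v: ["red", "green", "blue"] for v in neighbors}
diff = lambda a, b: a != b

d1 = forward_check(domains, "WA", "red", neighbors, diff)  # NT, SA lose red
d2 = forward_check(d1, "NT", "green", neighbors, diff)     # SA -> [blue]
d3 = forward_check(d2, "Q", "blue", neighbors, diff)       # SA empties -> None
```

The third step reproduces the failure from the example: picking blue for Q empties SA's domain, so the search can prune this branch without touching the remaining nodes.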
And, and when you, you unassig- something to notice is, if you're unassigning X_i, you have to restore the domains. So, so because you change the domains if you're unassigning, if you're deciding, uh, green who was not the color to go then, then, then you got to- you got to update your domains, okay? All right. So the other question was this heuristic. All right, so this heuristic updating domain, one way to go about it is forward checking, just update the neighbors. Another, um, place that, that we need to, uh, pick things wisely is choosing the unassigned variable. So which one- which, which unassigned variable should I start off? So, so which variable to look next? And, and again, one heuristic to, to look at here is to pick the variable that's the most constrained variable. So, so choose the variable that has the fewest consistent values. So, so you are going to pick the one that's the most constrained variable. Why do we wanna do this? Why would I pick the most constrained thing? Probably because of less options. Yeah. So you're left with less options. And, and, and the idea is if I'm going to fail, let me just fail early. Like if this is not gonna work, let me just find out that it's not gonna work early. So, so that's the whole idea of it. And in this case, like if you are left with this option where we- where we choose red and green here, and now we wanna pick what should I look at next? I should be looking at SA because that only has one value. So if that's not going to work, well, nothing else is going to work, right? So, so we want to choose, choose, uh, a variable that has the fewest consistent values. And again, the reason this works is, is if we have some number of constraints in our factor graphs. So, so these are more general for factor graphs too. Like everything I'm saying is not just about CSPs, it's about factor graphs. Um, and, and the reason this works is we have some constraints, right? 
We have some of these factors that are going to return a 0, and because they are going to return a 0, that is why I would like to follow a heuristic like this, because that allows me to not look at everything. So this heuristic only gives us benefit if we have some factors that are constraints. Okay. All right. So that's one heuristic. The second question is, okay, so now, using most constrained variable, I pick my variable; what value am I going to pick for it? And it's interesting, because for the value you want to pick the least constrained value. And the reason again is, you pick the most constrained variable because you wanted to know if you're going to fail, you wanted to fail early. But now you've committed to that variable. Like now you're going with that variable. So you have to assign a value for it, so you might as well pick the least constrained value here, to leave options for the other variables around you. So an example here is, you're going to look at this setting where you're picking Q, right? And you want to choose what color, what value to use for Q, right? You can color Q red. If you color Q red, you're gonna do this forward checking, and if you're gonna do forward checking, you are going to update the domains. And when you update the domains, you have two options here, two options here, two options here. So that could be a measure of consistency. So you have six consistent values. If you decide to use blue for Q, what's gonna happen is you are going to update NT, and that's going to have one value, SA is going to have one value, and NSW is going to have two values. So you have 1 plus 1 plus 2, and that's equal to 4 consistent values.
And you're gonna basically pick the one that leaves the most options possible. So you're going to order the values, and values here refers to colors, of the selected X_i by decreasing number of consistent values of neighboring variables. [inaudible]. Yeah. Yeah. So it's the cardinality of the domains of the neighbors. Yeah. And one other thing is, these heuristics are only going to work if you are doing forward checking. If we're not updating our domains, they're not going to give us any benefits. Okay. And also another note about this particular heuristic for ordering the values: the only place that this is actually going to give us some benefits is when you're working with CSPs, when we actually have everything as constraints. Because if we don't, we actually need to go through all the values and then figure out what the value of the factor is for them. So this is only going to be beneficial when we have everything as a constraint. Just a question, so when we are doing all of this, we are not actually copying anything, right? Well, it's just one possible- what if we find something without worrying about [inaudible] [OVERLAPPING] So it is a recurrence. Other, more optimal solutions. Uh, yeah. So it depends on what we were doing, right? So that's kind of this part. So the question is, are we looking for the optimal solution, or are we looking for a solution? It depends, and that's kind of this line. If you find a solution and you're happy with that one solution, you can just return it here and be happy. If you want to find the best solution, then you need to iterate this multiple times; maybe you have like a counter here that keeps iterating. Um, for CSPs you want to find a solution, because we just want to satisfy the constraints.
But if I have a factor graph I actually want to optimize my, my, my weight. All right. Yeah. So, so yeah. So the, so the idea of this most constrained variable is we must assign every variable. So if you're going to fail, let's just fail early, it's kind of similar to pruning. And the idea of, uh, what order we are picking for, for values is we are going to pick values for the least constrained value. Uh, so and, and kind of the reasoning behind that is you've got to choose some value. Like, like we have to choose values for all of these things. So, so choosing, uh, so, so choose a value that's the most likely to lead a solution for everything. Okay. And this is what we just actually said. Okay. So, so going back to this, this algorithm, now we have a heuristic to, to follow for all these three different red lines. And, in doing so we're just doing backtracking, and then we can update this and, and just go through it, and it does- it does find a solution. Okay. All right. So, um, so now I want to spend a little bit of time talking about arc consistency. So what arc consistency is, is it's just a fancier way of doing forward checking. So, so we talked about a heuristic for this one, a heuristic for this one, the only algorithm we are talking about today is this, that's, that's the only thing. And, uh, we said, well, in this algorithm we gotta update the domain, the way we have been updating the domain is just looking at the neighbors and trying to update the domain using forward checking. So another idea is to do something slightly better which is called arc consistency. And arc consistency doesn't just look at the neighbors, it goes through the whole, the whole, uh, the whole CSP, and tries to update, uh, the domains of even like further nodes ahead of us. So it doesn't just look at the neighbors. So, so that- that's what this whole section is going to be about, how to do arc consistency. Okay. All right. 
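Before moving on, the two ordering heuristics from the discussion above can be sketched together; the function names and the snapshot of remaining domains are hypothetical:

```python
def choose_variable(domains, assigned):
    """Most constrained variable: among unassigned variables, pick the
    one with the fewest values left in its domain (fail early)."""
    unassigned = [v for v in domains if v not in assigned]
    return min(unassigned, key=lambda v: len(domains[v]))

def order_values(var, domains, neighbors, ok):
    """Least constrained value: order var's values by decreasing number
    of consistent values they leave in the neighboring domains."""
    def freedom(val):
        return sum(1 for nb in neighbors[var]
                   for w in domains[nb] if ok(val, w))
    return sorted(domains[var], key=freedom, reverse=True)

# A made-up mid-search snapshot of pruned domains.
domains = {"Q": ["red", "blue"], "NT": ["blue", "green"], "SA": ["blue"]}
neighbors = {"Q": ["NT", "SA"]}
diff = lambda a, b: a != b
```

On this snapshot, `choose_variable` picks SA (one value left, so any failure shows up immediately), and `order_values` puts red before blue for Q, since red leaves three consistent neighbor values while blue leaves only one.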
So the idea of arc consistency is, let's eliminate values from the domains. So I have this giant domain, and I don't want to go over all those values; I have a for loop here over all the values. If I can update my domain, I have fewer things to iterate over, and that's going to be much better. So let's just try to reduce branching. Okay. So here is an example. So let's say I'm looking at X_i and X_j, and the domain of X_i is 1, 2, 3, 4, and 5, and the domain of X_j is 1 and 2. Okay. So now, I have a constraint, and the constraint is X_i plus X_j is equal to 4. So if this is my current domain of X_i, I don't really need to worry about all these values in X_i, because the constraint tells me, well, 5 never works because X_i plus X_j has to be 4, so that's not going to work, and 4 is not going to work either. The only way for things to work is to have 3 plus 1, and 2 plus 2, and that's it, right? So the only values that I actually need to worry about for the domain of X_i are 2 and 3, not 1, 4, or 5. So what I wanna do is I wanna take the domain 1, 2, 3, 4, and 5, and reduce that to just looking at 2 and 3. Because those are the only values that I should actually care about, because this constraint is kinda enforcing that. Okay. And enforcing arc consistency basically tries to get to this smaller domain. Okay. So, all right, let's actually formally define this. A variable X_i is arc consistent with respect to some variable X_j if, for each value x_i in the domain of X_i, there exists some x_j in the domain of X_j such that the factor on them is not equal to 0. So basically it's ensuring that everything is going to be consistent. So if you have inconsistencies, remove things from the domain of X_i.
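The single-pair enforcement just defined, run on the lecture's numeric example (the function name is mine):

```python
def enforce_arc_consistency(domains, Xi, Xj, ok):
    """Make Xi arc consistent with respect to Xj: keep a value a in
    Domain(Xi) only if some b in Domain(Xj) makes the factor nonzero.
    Returns True if Domain(Xi) changed. A sketch, not the course code."""
    kept = [a for a in domains[Xi] if any(ok(a, b) for b in domains[Xj])]
    changed = len(kept) < len(domains[Xi])
    domains[Xi] = kept
    return changed

# The lecture's example: Domain(Xi) = {1..5}, Domain(Xj) = {1, 2},
# constraint Xi + Xj = 4. Only 2 and 3 survive for Xi.
domains = {"Xi": [1, 2, 3, 4, 5], "Xj": [1, 2]}
changed = enforce_arc_consistency(domains, "Xi", "Xj",
                                  lambda a, b: a + b == 4)
```

Note that only the domain of X_i is touched, exactly as the definition says.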
So arc consistency ensures that if there are any sort of inconsistencies between two variables X_i and X_j, then, let's say it starts from X_i, and it tries to remove values from the domain of X_i to make sure that all factors are not equal to 0. [inaudible]. So we start from 1 and- so we pick x_1, and I will try all these other variables x_j and their values, and then we keep iterating. We do iterate over all of them, but we gotta pick one and update the domains of that. Okay. Yeah, so what we're gonna do is we're gonna just write up the function, enforce arc consistency. And it's gonna remove values from the domain of X_i to make X_i arc consistent with respect to some other X_j. Okay. So the only thing I'm touching is the domain of X_i. All right. So let's actually go over an example of how this works, and then we're going to look at the pseudocode for it. So here's our example. I'm gonna start from WA. I'm gonna pick red for it. Okay. So my current domain for WA is red. If I was doing forward checking, what would I do? I would just look at NT and SA. I would update the domains of NT and SA. So now what I'm gonna do is, I've realized that the domains of NT and SA have changed. So I'm gonna push them to the same list of things I have, and I'm gonna look at each of them and see their neighbors too. So the arcs that come from them. So I'm going to look at NT. Um, well, that is right here. Actually, it's too soon. So everything looks consistent there. Everything is great. I can't update anything more. I'm going to pick NT now. Let's say I decide NT is green. So NT is green, I'm gonna look at the neighbors of NT. So the neighbors of NT are WA. WA is red. Everything is great. SA has a green. I need to get rid of that green because it can't be green anymore. Q has a green, I need to get rid of that. So let's update that. So Q and SA, their domains are touched, right, their domains have changed.
So I actually need to look at them, and then see how the domains of their neighbors are going to be affected. For example, I can look at SA, and I can see, well, SA is blue. The only way for SA to be consistent with the rest of these guys is that they don't have a blue in them. So I'm gonna remove blue from Q and NSW and V, because they cannot have blue for these to be consistent. Again, SA here is kinda my x_i. Um, sorry, it's actually my x_j. So I'm gonna pick x_i to be Q here, and I'm gonna update the domain of x_i so it becomes arc consistent with SA. Right? So I'm gonna change the domain of Q, get rid of blue. I'm gonna change the domain of NSW, get rid of blue. I'm gonna change the domain of V, get rid of blue. Okay. So what has updated? Q is updated, NSW is updated, V is updated. They're gonna go back in, and I'm gonna go through them again and see if their neighbors need to be updated. Okay. So going back to Q, Q is red. NSW's domain needs to be updated to be consistent with Q. So I'm gonna remove red. NSW's domain is touched. So now I gotta go back to V. V is going to become red, and then T can take any value that it wants. So if I do this full enforcing of arc consistency here, I'm gonna end up with something that looks like this. So all my domains are kind of pruned, and I just have a solution, right? Like I don't need to actually iterate over any values. And this is just done by updating the domains and doing this arc consistency approach, rather than doing backtracking search. So all of that is done in this step. Okay. All right. Yes? [inaudible] solution, go back and make NT blue as well.
Uh, so this whole pruning is only useful, right, if you want to find a solution in a CSP; but if you have a factor graph, you need to actually try out all these values to see what value you're gonna get for each one of the colors. If we did forward checking instead, we actually would have arrived at the same conclusion here, right? It would have just taken more steps. If we were doing forward checking, we actually had to do the algorithm. Like, we would get to this much later, because if you are doing forward checking, we would just look at the immediate neighbors, we'd update the domains, and then we'd go to the next nodes in the neighbors and do backtracking search again. Here, I haven't called backtracking search yet; I'm here, I've updated my domain, and I'm with that scenario, and I haven't called backtracking search yet. All right. So yeah, so forward checking is kind of a simpler version where we're assigning X_j to be equal to some value, and you're enforcing arc consistency on all the neighbors of X_j with respect to X_j. Arc consistency, what it does is it repeatedly- well, there are different algorithms that try to do arc consistency. The particular algorithm we're talking about in this class is called AC-3. It's just the most common way of doing arc consistency. And what it does is it repeatedly enforces arc consistency on all the variables. So it goes over everything pretty much. So what it does is, you're gonna add X_j to your set. Then while the set is not empty, you're gonna remove an X_k from that set. And for all neighbors, let's call them X_l, of this X_k that you have picked.
For all the neighbors, what you're gonna do is you're gonna call enforce arc consistency on xl with respect to x here- xk. Okay. And then if your domain is changed, if don- you change the domain of domain of l, then you're gonna add that back in. And that's kinda what we're doing in this previous example, like, we, we kept adding the nodes back in. So, um, yeah, so in terms of complexity, um, of this algorithm or worst-case scenario, it's going to be order of e times d cubed, where e is the number of edges and d let's say is the number- maximum number of values that you can have. So, so, so the reason it is that is, when you're enforcing arc consistency, this line takes order of d squared. Let's say you have d values. For each of them you have d values, you need to co- consider all that combi- all those combinations. That's d squared. You are, are doing- going over all the edges, right, so, so you have all the edges. So that's ed squared. And another thing to notice is, you're sometimes adding these things back in the set. Well, why are we adding them? Because their domains can be changed. Their domains can be changed at most d times. So that's that extra d. So, so that's order of ed cubed. If you're interested in it, you can look at the notes for it. That's worst-case scenario. In general, it doesn't take that long. In general, like, I'm not gonna keep, like, adding the same value, like, a million times, like, back in. Or the same xl back in my set. Um, in general it's much faster. In practice, it's much, much less. Okay. All right. So, so again, it's a heuristic. It's not the best thing in the world. Like, if- I, like, ideally, you would have wanted AC-3 to not return a solution if there doesn't exist a solution. But, but here, for example AC-3 is not being very effective. Here's an example. Right. So you have these three nodes, and let's say you are left with these domains. So blue and red. If you're enforcing arc consistency, the domains are not gonna change. 
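A sketch of the AC-3 loop just described, using the single-pair enforcement as a subroutine; the function names and the little chain of variables are made up for illustration:

```python
from collections import deque

def enforce_arc_consistency(domains, Xi, Xj, ok):
    # Keep a in Domain(Xi) only if some b in Domain(Xj) is compatible.
    kept = [a for a in domains[Xi] if any(ok(a, b) for b in domains[Xj])]
    changed = len(kept) < len(domains[Xi])
    domains[Xi] = kept
    return changed

def ac3(domains, neighbors, ok, start):
    """AC-3: start from one variable and repeatedly enforce arc
    consistency, re-queueing any variable whose domain changed.
    Worst case O(e * d^3), as discussed above."""
    queue = deque([start])
    while queue:
        Xk = queue.popleft()
        for Xl in neighbors[Xk]:
            if enforce_arc_consistency(domains, Xl, Xk, ok):
                queue.append(Xl)   # Xl's domain changed: revisit its arcs

# Hypothetical chain WA - NT - SA with "neighbors must differ",
# after committing WA = red.
neighbors = {"WA": ["NT"], "NT": ["WA", "SA"], "SA": ["NT"]}
domains = {"WA": ["red"], "NT": ["red", "green"], "SA": ["red", "green"]}
ac3(domains, neighbors, lambda a, b: a != b, "WA")
```

The propagation here forces NT to green, which in turn forces SA to red, so unlike plain forward checking the pruning reaches beyond WA's immediate neighbors.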
These domains are all consistent with each other, but there is no solution you can actually find here, right? Because if you choose blue and red here, you don't have an option left for the third one. So arc consistency is not going to be able to figure out that this doesn't work. There are more complicated versions of arc consistency that go beyond these binary relationships, but they take exponential time. So arc consistency is simple: you run it, it's usually useful, but it's not going to find everything for you. Okay. So the whole intuition of arc consistency is that we're looking at this graph in a local manner, and locally we're trying to update our domains to be more efficient. But it's not going to give us a global answer - of course not, because to get a global answer we'd have to consider the relationships of all the constraints with respect to each other. It's basically making sure that locally, at least, everything looks good. How do you figure out when you should or shouldn't use AC-3? In general I would say use AC-3, because it's usually going to prune things. If you have a lot of circular-type dependencies between your variables, it's not going to figure everything out, but it's usually still useful. Running it in practice doesn't take that long, and it usually prunes part of your domain. So I do recommend using it, but it won't figure out everything when everything is connected to everything and you have dependencies between all your variables. All right, okay.
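This triangle example can be checked standalone in a few lines (node and color names are just illustrative): every arc is consistent, yet brute force shows no satisfying assignment exists.

```python
from itertools import product

# Triangle CSP: three mutually adjacent nodes, each with domain {blue, red},
# and the constraint that adjacent nodes must get different colors.
nodes = ['x1', 'x2', 'x3']
domain = ['blue', 'red']
edges = [('x1', 'x2'), ('x2', 'x3'), ('x1', 'x3')]

# Every arc is already consistent: for each value on one end of an edge,
# the other end still has a supporting (different) value, so AC-3 prunes nothing.
arc_consistent = all(any(v != w for w in domain)
                     for _ in edges for v in domain)

# Yet no global assignment works: three mutually different nodes, two colors.
index = {n: i for i, n in enumerate(nodes)}
solutions = [assign for assign in product(domain, repeat=3)
             if all(assign[index[a]] != assign[index[b]] for a, b in edges)]

print(arc_consistent, len(solutions))  # True 0
```

The pigeonhole argument is exactly what a local check cannot see: each edge looks fine on its own, but the three edges together are unsatisfiable with two colors.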
So the summary so far: we've been talking about backtracking search on partial assignments, and we talked about dynamic ordering - how to order our variables and how to order our values. We decided to order our variables by the most constrained variable, because if you're going to fail, you want to fail early. And we decided to order our values - if I'm picking red, blue, or green - by the least constrained value, because once you've decided to pick a variable, you should try to succeed. That's the intuition behind it. And lookahead is useful: forward checking is one way of doing it, and it enforces arc consistency only on neighbors, while AC-3 enforces arc consistency on neighbors and their neighbors, going over all the arcs we have in the graph, okay? All right. So that was the set of algorithms I wanted to talk about; next time we're going to talk about more inference- and learning-type algorithms for CSPs. Now what I want to do is spend a little bit of time talking about modeling. We've talked about two examples so far, right? The map coloring, and this one, which is also picking colors. So let's look at another example. Let's say we have three sculptures, A, B, and C, and they're going to be exhibited in a museum or an art gallery, and I have rooms 1 and 2, so each sculpture can be in either room 1 or room 2. And I'm going to have a set of constraints: sculpture A and B cannot be in the same room, sculpture B and C must be in the same room, and room 2 can only hold one sculpture, okay? So these are my constraints. How would I go about this? Well, I need to write a bunch of factors - I need to actually model this. So let's try to do that.
[NOISE] So here's my setup. I have three sculptures, so I'm going to define variable A - that's sculpture A - and it can be in room 1 or 2, okay? Likewise I'm going to have variables B and C, each of which can be 1 or 2. So now I've got to define factors, right? I had all these constraints. One of the constraints was that A and B cannot be in the same room, so that's a factor - let's call it f1. It depends on A and B. What is that factor? It's a function over A and B, and it's going to return something. What should it return? It should return whether A is not equal to B, okay? So that's one factor. What else do I need? Let's make sure everything is okay so far: I've defined A, B, and C, which can take values 1 and 2, and I've defined one factor that connects A and B. Next I'm going to define another factor f2 that connects B and C, because I want sculptures B and C to be in the same room. So what I want is B and C to be equal to each other - to be in the same room. That's factor f2. And what was the last constraint? Was it every room gets one, or- actually I don't remember. [OVERLAPPING] Second room. Okay, room 2 only gets one. So that's a factor that depends on all three of them, right? It depends on A, B, and C. One way to enforce it is to write indicator functions: whether A is in 2, whether B is in 2, whether C is in 2. If I add those up, what should that sum be? It should be less than or equal to 1, right? Because I don't want more than one of them in that room, okay?
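The three sculpture factors can be written out and brute-forced over all 2^3 assignments; this is just a sketch, with the factors encoded as plain Python functions:

```python
from itertools import product

rooms = [1, 2]   # each sculpture's domain: room 1 or room 2

def f1(a, b):    # sculpture A and B cannot be in the same room
    return a != b

def f2(b, c):    # sculpture B and C must be in the same room
    return b == c

def f3(a, b, c): # room 2 holds at most one sculpture (ternary factor)
    return (a == 2) + (b == 2) + (c == 2) <= 1

# Brute force over all 2^3 assignments, keeping those where every factor holds.
solutions = [(a, b, c) for a, b, c in product(rooms, repeat=3)
             if f1(a, b) and f2(b, c) and f3(a, b, c)]

print(solutions)  # [(2, 1, 1)] -> A in room 2, B and C in room 1
```

Brute force is fine at this size; the backtracking and lookahead machinery from earlier in the lecture is what makes larger instances tractable.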
And this sum constraint enforces exactly that. So okay, I have this third factor, and it's not a binary factor anymore - it depends on all three variables. Then if I run inference - we're going to talk about those algorithms next time - here is the assignment you're going to find: sculpture A goes in room 2, B goes in room 1, C goes in room 1, and that satisfies all the factors we just wrote, okay? So if you're interested in writing up more models, use this environment, it would be cool. So that was another example of a CSP. Now I want to talk about, I think, two more examples. Okay. This one is an event scheduling example. I have E events - let's say these are classes, different courses that you're taking - and T time slots. And you want to schedule a time slot for each event; that's the plan, it's a scheduling problem. And you have a few constraints. The first constraint is that each event must be put in exactly one time slot. Another constraint is that maybe we want each time slot t to hold at most one event, because we don't want them to overlap. And then maybe event e is allowed in time slot t only if the pair (e, t) is in some predefined fixed set A that someone gave me. Okay, so these are some of the constraints that I have.
And what I want to do is formulate this problem as a CSP. So how would I go about it? What should my variables be? [NOISE] Events? Events, okay, we're going to go with events. We can actually have multiple formulations for this. One formulation - maybe the most natural one here - is to say that my variables are the events; those are going to be my X's. And every event takes a time slot, so the value it gets is 1 through T, where T is the number of time slots, okay? If I start with this setting, where every event is a variable, then I get the first constraint for free: each event e is in exactly one time slot, because my variables are not going to get multiple values assigned to them - they get exactly one assignment. The second constraint makes sure that each time slot holds at most one event. To ensure that, I need X_e to not equal X_e' for any other event e', right? Because X_e is my event variable and it gets a time-slot value, and the time-slot values for two different events should not be equal to each other. So the constraint I have is X_e is not equal to X_e'. Okay. And how many of these do I have? I have order of e squared of them, right? Because if I have e events, I have on the order of e times e pairs to make sure they're not equal to each other. So I have order of e squared binary factors, okay? And then I have another constraint ensuring that each event with its time slot - the pair (e, X_e) - is in the set A.
You can treat that as a unary factor, so you have e unary factors here. Okay. And the number of variables you have is e, but their domain has size T. It's good to think about this, because if you have multiple choices for modeling - and I'm going to talk about the second choice in a second - it's a good idea to think about what types of factors you have and how many of them there are. So here, in the worst case, I have order of e squared binary factors. Okay. Now here's the other option. In the first one I took the events as my variables. The second way to formulate this is to say: maybe my variables are just the time slots. So I'll take a different approach and define variables Y_t for the time slots. Each one of them can either take an event, or take an empty value meaning no event is assigned. If I model the problem using this second approach, then I get the second constraint for free, because my variables are the time slots, so each slot automatically holds at most one event. But then for the first constraint - each event is in exactly one time slot - I actually need to write something. I'm going to write a constraint that says Y_t gets event e for exactly one t. So how many variables does this particular constraint involve? If I want it to be exactly one time slot - remember the sculpture example? I wanted it to be in exactly one room, and the factor needed to depend on everything. So this is going to be a T-ary constraint, right? It involves all T variables.
So in the previous formulation everything was binary or unary; here I have a T-ary constraint - fewer factors, but a T-ary one. And then I'll have another constraint to ensure that last allowed-set condition. Okay. So one way to think about these two approaches is: how many of these constraints do I have? We just saw that we have a T-ary constraint here, and one thing you can actually do - I have a slide about this afterwards - is that if you have an n-ary constraint, a constraint that depends on n variables, you can change it into order of n binary constraints. So I can reduce down to binary constraints, and make both of these models - not algorithms, sorry, models - have all binary or unary constraints. So that part is fine. But one of them is going to have on the order of t factors per event, and the other on the order of e squared factors. And the question is, which one should we use? It really depends on whether your e is greater than t or t is greater than e. If you have more time slots than events - a lot of time slots and, say, five events to place - then you should use the first formulation, the one where we had order of e squared constraints. If it's the other way around - which is less natural, but maybe you're okay with not assigning every event a time slot - then you can use the second formulation. The point is, you might have different ways of formulating a problem, and you should use the one that is most beneficial depending on how many constraints you end up with, and so forth.
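The first formulation can be sketched directly (the allowed set A below is made up for illustration; variables are events, values are time slots, and the O(e^2) pairwise not-equal factors collapse to an all-distinct check when brute-forcing):

```python
from itertools import product

# Formulation 1: one variable X_e per event, values are time slots 1..T.
# "Each slot holds at most one event" becomes pairwise X_e != X_e'
# (order e^2 binary factors); the allowed set A gives e unary factors.
E, T = 3, 4
allowed = {(0, 1), (0, 2), (1, 1), (1, 3), (2, 2), (2, 4)}  # (event, slot)

def satisfies(assignment):
    # e unary factors: every (event, slot) pair must lie in the allowed set
    if any((e, s) not in allowed for e, s in enumerate(assignment)):
        return False
    # pairwise binary factors: no two events share a slot (all distinct)
    return len(set(assignment)) == len(assignment)

# Brute force over the T^E possible assignments.
solutions = [a for a in product(range(1, T + 1), repeat=E) if satisfies(a)]
print(len(solutions), solutions[0])  # 4 feasible schedules; first is (1, 3, 2)
```

The second formulation would instead enumerate assignments of events (or "empty") to the T slot variables, trading the e^2 pairwise factors for the per-event exactly-one constraints.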
And then one last thing before we head out. I just said that if you have an n-ary constraint, we can write down binary constraints that are equivalent to it, and the reason this matters is that our algorithms usually require binary or unary constraints. Here I have a setting with an OR between X_1, X_2, X_3, and X_4. The way to make this binary is to define auxiliary variables. I'm going to define new variables, call them A_i, where each A_i is just the result of A_{i-1} OR X_i. Okay. Let me draw this real quick. I have X_1, X_2, X_3, X_4, and I could have one n-ary constraint that connects all of them together in the factor graph. Instead, I define new variables A_2, A_3, A_4, and new factors where A_2 is the OR of A_1 and X_2, A_3 is the OR of A_2 and X_3, and so on. But this is not binary yet, right? Each of those factors involves three variables. So we need one more step: after defining these auxiliary variables, we define new variables B_i, where each B_i represents the pair (A_{i-1}, A_i). So I replace those two with a single variable - I'll call the first one B_1 and connect it to X_1. I won't draw all of it, but you get the idea: the B_i's represent (A_{i-1}, A_i), and that lets me have binary factors here. Okay. And by doing this I need one more thing: a consistency constraint that makes sure the second component of B_{i-1} equals the first component of B_i.
That just ensures the pre and post values stay the same as we move through the graph. So that's one more constraint we enforce. All right, let's chat more about this next time.
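The auxiliary-variable reduction can be sanity-checked in a few lines. This sketch verifies that the chain encoding A_i = A_{i-1} OR X_i, with a unary factor forcing the last A to be true, is satisfiable exactly when the original n-ary OR holds. Note the chain factors here are still ternary; the extra B-variable step described above is what reduces them to binary.

```python
from itertools import product

def nary_or(xs):
    """The original n-ary constraint: OR over all the X_i."""
    return any(xs)

def chain_satisfiable(xs):
    """Chain encoding with auxiliary variables: A_1 = X_1 and
    A_i = A_{i-1} OR X_i, plus a unary factor forcing the last A_n
    to be true. Returns whether some assignment of the A's works."""
    n = len(xs)
    for a in product([False, True], repeat=n):
        ok = (a[0] == xs[0])                              # base case: A_1 = X_1
        ok = ok and all(a[i] == (a[i - 1] or xs[i])       # chain factors
                        for i in range(1, n))
        ok = ok and a[-1]                                 # unary: A_n must be true
        if ok:
            return True
    return False

# Equivalence check: for every assignment of the X's, the chain encoding
# is satisfiable exactly when the original OR holds.
equivalent = all(chain_satisfiable(list(xs)) == nary_or(xs)
                 for xs in product([False, True], repeat=4))
print(equivalent)  # True
```

The A's are in fact uniquely determined by the X's, so the inner search finds at most one consistent assignment; the reduction adds variables but keeps every factor small.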
[Stanford CS221: Artificial Intelligence Principles and Techniques, Autumn 2019. Conclusion lecture.]

Welcome to the last lecture. It's a smaller group of people today. A quick announcement: the project reports are due next Friday, so make sure you turn those in. And the poster session was awesome. I showed up for a bit of it, and the posters I saw were really amazing; I really enjoyed talking to the groups I talked to. Good job on all the projects, it was great. We're also going to have a Best Poster Award, and we'll announce that on Piazza later. Okay, all right, cool. So let's conclude the lecture, CS221. The plan for today: we're going to do a quick summary of what we've talked about - a general summary of the class, all the things we've learned. Then I want to talk a little bit about some of the next courses you might want to take after 221. If you were in 229 this morning, they went over a bunch of next courses from the perspective of 229, all the AI courses one would want to take; we're doing a similar thing here from the view of 221 - what some good next courses would be. After that, I want to talk a little bit about the history of AI. We did this in the first lecture, so it will be a bit of review, but then I also want to talk about some of the next directions that might be interesting to think about, some of the research currently being done in various subtopics of AI, and some of the problems people are struggling with. It will be fun to think about that, and if you're interested in any of it, you can go do research in that area or take classes in those particular areas. So that's the plan for today's lecture.
It's going to be shorter than usual - probably an hour-ish. Okay, all right. So let's talk about the summary of the class. We started the class with this paradigm of modeling, inference, and learning, right? We started thinking about how there's a real-world problem; you pick up that real-world problem and do modeling, where the model is an abstraction of the real-world problem. In general we're interested in reasoning about that problem - finding the shortest path, or solving some sort of optimization over it - and we call that inference: we have a model of the world, and we do inference, reasoning, on that model. And the idea of the learning part of the lectures was that our models are not going to be perfect - you're not going to perfectly model everything in the world. Instead, you might have a partial model, and in addition you might have some data about the world and the things happening in it. We'd like to use that data to learn the model and kind of complete it. So this was the common paradigm throughout the class. And these were the topics we covered: we started with machine learning, and we treated machine learning as a tool that allows us to better learn these models - the parameters of these models. Then we talked about various levels of intelligence in the course: we started with reflex-based models, then increased the level of intelligence a little bit and talked about state-based models, variable-based models, and finally logic. So let me briefly remind you of some of these topics. In machine learning, the common thing we kept coming back to was this idea of loss minimization.
So we have some training data - inputs and outputs, x's and y's. Then we define some sort of loss; we looked at different types of loss functions and their properties. And the idea was that we want to minimize this training loss in the hope of generalizing to new scenarios - the hope that, given a new input, we'd be able to give the best possible output. In general I'd like to minimize my test error, but one way to go about that is to minimize the training loss with respect to the parameters we have in the model. The approach we followed was to use techniques like stochastic gradient descent. That's the most common thing one would want to do: we have this loss function; how do I optimize it? I take the gradient and move in the negative direction of the gradient. So stochastic gradient descent was commonly used when doing loss minimization. These two things - writing out the loss and minimizing it with something like stochastic gradient descent - were common across a lot of the machine learning algorithms we used throughout the course, and we applied that framework to a wide range of models, in reflex-based models and state-based models alike. And in the first set of lectures, we started talking about reflex-based models, the simplest form of which was linear models. If you remember, we had linear models for regression and classification - linear classifiers - and we just wanted to learn their parameters.
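This recipe can be sketched in a few lines; this is a made-up least-squares example, not from the lecture: define a loss over training data and take stochastic gradient steps on one example at a time.

```python
import random

# Toy data generated by y = 2x + 1, so SGD should recover w close to 2, b close to 1.
random.seed(0)
data = [(x, 2 * x + 1) for x in [0.0, 0.5, 1.0, 1.5, 2.0]]

w, b = 0.0, 0.0   # model parameters, initialized at zero
eta = 0.1         # step size
for step in range(500):
    x, y = random.choice(data)        # "stochastic": one example at a time
    residual = (w * x + b) - y        # prediction error on that example
    # gradient of the squared loss 0.5 * residual**2 with respect to w and b
    w -= eta * residual * x
    b -= eta * residual

print(round(w, 2), round(b, 2))  # 2.0 1.0
```

Because the data is noiseless and consistent, the parameters settle at the exact generating values; with real data, the loss surface and step-size schedule matter much more.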
More complex versions of that are things like neural networks - deep neural networks - and even nearest neighbors would be an example of these reflex-based models. And what was inference? Inference was pretty easy for these, right? Inference was just a feed-forward run of your model, and that would give you the output, so we weren't doing that much hard work when it came to inference. In terms of learning, we looked at stochastic gradient descent, and we looked at things like alternating minimization as ways of learning these models. Okay, so that was reflex-based models. Then, increasing the level of complexity, we started talking about state-based models. The key idea I want you to remember from state-based models is the idea of a state: a state is a summary of all past actions sufficient to choose future actions optimally. We spent a good amount of time thinking about how to define a state - how to pick a good state and how to do the modeling when it comes to state-based models. We looked at search problems, where we have deterministic systems; we looked at MDPs, where we're playing against nature and have a bit of stochasticity; and we looked at games, where some other intelligent agent comes in and plays against us. In terms of inference, we looked at a couple of really cool algorithms: uniform cost search and A-star, dynamic programming, value iteration, minimax. So we covered a number of different ways of intelligently looking at these models and doing inference.
When it came to learning, we looked at structured perceptron, Q-learning, TD learning - reinforcement learning in general. Those were some of the learning algorithms we applied to state-based models, okay? Then we moved the level of complexity and intelligence a little higher and looked at variable-based models. The idea of variable-based models is that the ordering of the variables doesn't really matter; it's just the relationships between them that matter. We defined factor graphs - if you remember the map coloring example, we defined a factor graph around it - and the idea was that the graph structure captures the conditional independence between the different variables we have. The different variables in that case were the provinces, which we wanted to color differently, and the relationships between them were represented by factors. The two types of models we discussed in this setting were constraint satisfaction problems and Bayesian networks - Bayesian networks for the case where we have probabilistic relationships. We talked about inference - specifically backtracking, forward-backward, beam search, Gibbs sampling - as various ways of doing inference in these models. And when it came to learning, we looked at things like maximum likelihood and EM for learning these types of systems, okay? Finally, in the last few lectures we've been talking about logic - pushing the level of complexity a little higher and thinking about formulas, actual logical formulas, that represent intelligent things about your system. The key idea of logic is that we have these powerful formulas that represent meaningful things about the system.
We talked about propositional logic and first-order logic. We talked about model checking, which is commonly used in satisfiability, and about modus ponens and resolution as various types of inference algorithms that can be used with logic. We didn't really talk about learning when it comes to logic, and I would say this is still kind of an open question: how do you combine ideas from learning and ideas from logic and get the best of both worlds - the data-driven way of looking at things and the model-based way of looking at things? There are ways of combining them, but ensuring you get the best of both is open. So what did CS221 really give us? CS221 is a class that gives us a set of tools to look at the world, think about difficult problems in it, and pick the right models - the right way of formulating a problem and the right inference algorithm for solving it. We covered a lot of material, a pretty broad set of topics, and the idea is for you to know that you have all these tools, and that you can pull any of them out and go deep into it. The goal of CS221 was to give a broad view of what artificial intelligence is and what some of these tools are. But if you're interested in going deeper into any of the topics we've discussed, there are a good number of classes you can take, and I want to briefly mention some of them - an overview of some of these courses. I would categorize the next classes you can take into two main categories.
You can take foundational classes that go deeper into some of the foundations we've been talking about, or you can take application classes that go deeper into specific applications - natural language processing, vision, robotics - the applications we briefly covered in this class but didn't go deep into. If you're interested in foundational classes, some of the AI-based ones are things like Probabilistic Graphical Models, CS228. If you're interested in machine learning, there are 229 and 229T, and there's the deep learning class. If you're more interested in the optimization side of things, there are Convex Optimization and Decision Making Under Uncertainty. If you're interested in the logic side, there's the Logic and AI course. And there's also the big data class, if you're interested in that. I'm going to go a little deeper into some of these courses, but this is just an overview of some of the foundational next courses you might be interested in taking - they're all posted on the AI website, ai.stanford.edu/courses. So that's foundations. If you're more interested in the application side, there are a good number of courses around natural language processing - I'll go a little deeper into some of those - a good number of courses around vision, and a good number around robotics. There's also a course on General Game Playing, which would be fun to take if you're interested in that side of things. All right. So let me briefly spend a slide on some of the courses I think would be good to take after this class. One of these is CS228, the probabilistic graphical models course.
If you remember the variable-based models part of the lectures, this is the course that goes deeper into that. We talked about algorithms like forward-backward and variable elimination, but if you take 228 you'll see more general algorithms like belief propagation, variational inference, Markov chain Monte Carlo, and so on. Another thing 228 covers, in variable-based models, is structure learning. The way we treated things, the model and its structure were given to us - we'd say, well, this is an HMM, and given that it's an HMM, I'm going to do these things with it. In 228 you'd actually think about learning the right structure to put in - how to think about the different variables and the relationships between them. So if you want to go deeper into that, that's the course to take. Another interesting course - some of you might have already taken it - is the machine learning course. In this class we treated machine learning just as a tool: we had a few lectures on it and learned just enough to do the things we wanted to do in this AI course, but it's definitely broader than what we discussed. Some of the ways it's broader and more general: first, in this class we talked about discrete-time, discrete-action-and-state systems; 229 covers a broader set of models where you actually think about continuity a little bit. We talked about linear models; 229 will talk about kernel methods, decision trees, boosting, bagging, feature selection - all sorts of different algorithms and models that go beyond what we discussed in this class.
We talked about k-means; 229 covers a broader set of clustering algorithms - mixtures of Gaussians, PCA, ICA, those sorts of things. So it's a really useful class if you want to learn more about machine learning from a practical perspective. If you're more interested in the theoretical side of machine learning, there's another course called 229T, Statistical Learning Theory, which actually looks at the mathematical principles behind learning. It doesn't necessarily cover particular algorithms; it covers mathematical properties around the algorithms - things like uniform convergence. Say you have a predictor and you want to make sure that, with high probability, it has bounded error: how are you going to bound that error? How are you going to bound the test error in terms of the training error and some properties of your predictor? Or how would you formalize the regret of your learning algorithm? Thinking about complexity, proving bounds, convergence, regret - these are the topics discussed in Statistical Learning Theory, CS229T. If you're more theoretically minded, I think that would be a good course to take. Now, thinking more about the application side: a couple of good applications of AI after this course are vision, NLP, and robotics. I'd say those are the three main applications you might want to consider going deeper into if you're interested in those areas. For vision, there are a good number of classes.
There are a lot of interesting tasks around vision; some of them are more solved and some of them are more researchy, things like object recognition, detection, segmentation, but also things like activity recognition. Like if you have different frames of a video of, let's say, a soccer game, how would you predict where the ball goes? How would you predict where the person goes in the next few frames? That's actually a pretty difficult problem, doing activity recognition from just frames of images. So if you're interested in some of these problems from the vision perspective, I would recommend taking 231-type classes, okay? So robotics would be another interesting application to look at. In robotics in general, we are interested in problems around manipulation and navigation and grasping. The main applications that you might think about are things around self-driving cars, medical robotics, assistive robotics. And the interesting thing that robotics brings is that there is a physical system sitting there, so you're putting your AI algorithm, the things you have developed in this class and some stuff beyond it, on this physical system, this physical robot. And you need to deal with things like continuity. You need to deal with things like uncertainty, and you need to deal with physical models that could come from kinematics and control. So there are a lot of interesting robotics classes. I think Intro to Robotics is offered next quarter; Oussama Khatib is teaching it. But there's also a new robotics course series that just came out: Robot Autonomy 1 and 2. So, advertisement for myself, I'm teaching Robot Autonomy 2 next quarter with Marco Pavone and Jeannette Bohg. Robot Autonomy 1 was already offered in the fall. And the idea of Robot Autonomy 1 is to cover the different layers of the robotic stack.
And at the end of the day, they actually have this really cool project where you have a robot platform with a lidar on top of it, and you want this robot to just move around in a fake city. I think the project presentations are not done yet, so if you're interested you can show up, take a round, and see how these robots are moving around. They basically have this fake city where the robot just navigates around and does autonomous driving. You can see a picture of a bicycle in the back that the robot has to detect, and the [LAUGHTER] bicycle is actually moving, so [LAUGHTER] the robot needs to detect it and do obstacle avoidance and coordination with the other agents around it in this particular environment. So that's Robot Autonomy 1. In Robot Autonomy 2, which is offered in winter, what we wanna do is put a manipulator on top of the robot. So we are looking at mobile manipulation, where we actually have an arm, and we have this arm pick up objects and blocks and put them on top of each other and do interactions. Two of the four big chunks of the class are focused on interaction with the physical world and interaction with other agents. So there are interesting multi-agent, game-theoretic questions that could come up when you have multiple robots trying to interact with each other. So ideas from games could come up there naturally. All right. So that was robotics. And then another interesting application is natural language processing. And natural language processing is particularly interesting because, if you think about it, the world is continuous, but the words that we're using are discrete, and these discrete words have continuous meaning.
So there is a lot of mismatch between the fact that we have discrete words in a continuous world, and we need to use these words to describe this continuous world, and there are very interesting questions and challenges that arise in NLP around things like compositionality and grounding. And if you're interested in these types of tasks, there are again a lot of interesting tasks that are more solved versus less solved and more researchy around NLP. So if you're interested in any of them, I would recommend taking classes like 224. Some of these tasks, like machine translation, are more solved; some of them are much harder, like text summarization and dialogue, those sorts of things. And we had a couple of homeworks around this too. So if you're interested in going deeper in that, I do recommend taking NLP classes. So those are some of the foundational and more application-y courses that I would recommend taking. I want to briefly mention two other courses too that are not necessarily directly in AI, but they're in neighboring fields that would still be interesting to look at. One is cognitive science. So cognitive science in general looks at how human minds work. And it's one of those fields that kind of grew together with AI, right? Cognitive science and AI started together and went their separate ways, but they still tend to inform each other. And there's a group of cognitive scientists who are looking at computational cognitive science. They use ideas like Bayesian modeling and probabilistic programs when they look at cognitive science. So there's this course, PSYCH204, which is cross-listed as CS428. I think Noah Goodman teaches it usually.
And I think this would be a great course to take if you're interested in the cognitive science side of things, and you would see ideas and topics from other cognitive scientists like Josh Tenenbaum, and Tom Griffiths, and Noah, who are working in this particular area of computational cognitive science. So that's one. But if you think of cognitive science as kind of the software here, neuroscience, on the other hand, is the hardware of the problem. So another neighboring field that you might be interested in looking into a little bit deeper is neuroscience. And if you think about neural networks, back in the day when people first started looking at neural networks, they were kind of thinking of them as computational models of the brain. But today's modern neural networks aren't really biologically plausible in any way, right? So they're not really models of the brain, but still, there are interesting connections and insights that could flow from AI to neuroscience and back. So I do recommend taking courses at the cross-section of neuroscience and AI; I think Dan Yamins would be someone who works in this area if you're interested in looking deeper into courses around neuroscience. All right. So that was a quick summary of what we have discussed so far in the class and what are some next courses you might be interested in taking. Think deeper about them if you're interested in learning more, and just come chat with me, I'm around, if you want to talk about them. But for the rest of the class, what I wanna do is a quick history of AI again, and then after that let's talk about what are some of the problems, what is left. We've talked about all these cool algorithms and toolsets, but what is difficult, what is not solved?
So I want to spend a little bit of time on that. So let's talk about the history of AI. All right. So, the birth of AI. We talked about this during the overview lecture. This was the first lecture; we came in and asked, okay, how did AI happen? So the birth of AI, people really refer to it as this workshop that happened in 1956. This was a summer school at Dartmouth, and basically everyone famous in the field showed up to this workshop, including people like John McCarthy, Marvin Minsky, Claude Shannon. All of these people showed up to the summer school, and the reason for this summer school was to kind of understand the general principles of intelligence. What they really wanted to do was figure out all the features of intelligence, and if they could formalize that, they could go ahead and simulate it and have simulated intelligence. That is what they really wanted to do. And the workshop was really useful because after that, these people went their separate ways and started doing really cool stuff, and this was the first rise of AI. We started seeing problem-solving type systems, things like Samuel's checkers program, which was able to beat a strong amateur-level player. We started seeing other types of problem-solving programs like theorem provers, and then people started really using logic as a way of thinking about AI and thinking about problem-solving. So that was really exciting, because people at the time were thinking they had just solved everything, right? It was exciting, the logic was there, people had all these cool programs, they were super excited about the potential that these systems could bring. Here are some of the quotes from people around that time. Things like, "machines will be capable, within 20 years, of doing any work a man can do." This is what Herbert Simon said.
Marvin Minsky said, "within 10 years, the problems of artificial intelligence will be substantially solved." Claude Shannon said, "I visualize a time when we will be to robots what dogs are to humans, and I'm rooting for the machines." [LAUGHTER] So none of these really happened, but one thing to notice is these are not random people on the street. These are the fathers of the field; these are people who were in it, who had a lot of insight in terms of what we can do and what we can't do. And it's kind of interesting that even they had all this overwhelming optimism, and it did not pan out, right? So there was a lot of optimism, and then we started getting really underwhelming results. So, lots of optimism; the government came in, the government was like, here's my money, take my money, go do stuff. And basically the problem the government was really excited about was machine translation, right? They wanted to take Russian texts and translate them. And the outcome of that was something like this: the translation is "the vodka is good, but the meat is rotten," which is not really a good translation of that text. And people started feeling like these types of problem-solving algorithms were not going to do it. So at that point the government cut funding, and this was the first winter of AI. So we had the rise of AI with problem-solving with that summer school, lots of excitement, and then it didn't really work when it came to machine translation, and then we had the first winter of AI. One thing that I wanna point out is, we are at a good place for AI right now, but I would say AI right now is also pretty overhyped, right? And so I wanted to put this quote here.
This is from Roy Amara, who says, "We tend to overestimate the effect of a technology in the short run, but underestimate the effect of it in the long run." If you think about any technology that we have developed, it's always, "oh, within two years it's going to solve everything," and it's not going to solve everything within two years, but if you look at what it has achieved in 20 years, it has actually achieved a lot of things, and we usually underestimate that. And I think it's the same thing with AI. We're going to think, well, we're gonna have autonomous cars tomorrow, or by 2020. Actually, autonomous car companies were saying we're going to have autonomous cars by 2020 when I first started working on that. That's next month. [LAUGHTER] I don't think we're going to have autonomous cars by 2020, but we are going to see a lot of advances; we are seeing a lot of improvements in terms of the algorithms and the systems that we are developing. So I think in general, we should be aware of that, and we should be smart about it. Surely AI is overhyped, but what can we do to actually address some of these problems? And going back to this first era of AI, this problem-solving era: why didn't it work? Well, the reason it didn't work was we had limited computation and we had limited information. This is the thing that we actually started this class off with. We said, well, a lot of AI algorithms haven't changed that much, right? But the thing that has changed over the years is we have lots and lots of computation, we have lots and lots of data, and that's the thing that has really made the bigger difference here. And that's one of the reasons it didn't work back then. But even though we had these problems, and we had this first winter of AI, there were lots of interesting contributions from that era.
The LISP programming language came out around that time, garbage collection came out around that time, time-sharing; really interesting foundational ideas of computer science emerged during this period. And also the key paradigm that you are using in this class, thinking about separating modeling and inference, actually came out around the same era too. The fact that we should have this declarative model on one side and this procedural inference algorithm on the other, separated out from each other, and think of them as separate things, is a huge contribution that came out around that time. All right. So that was the first rise and fall of AI. The second rise of AI was around the '70s and '80s. This was when the knowledge-based systems came out, the expert systems, and I would argue that the reason we had the second rise was that people started thinking about AI differently. Originally, people wanted to build artificial intelligence because they were interested in intelligence. They were interested in understanding intelligence; that's kinda what the summer school was about. But at this point, people were not interested in intelligence. What they were interested in was just building really useful systems that can do things; they didn't care about intelligence at this point. And that's why we had this rise of expert systems. So we think about logic, and we think about using domain knowledge to have things like if-then-else type statements: if we have a premise, then we have some sort of conclusion. And building these expert systems allowed us to do a lot of cool tasks in the real world. So we actually had impact around this time on things like inferring molecular structures, diagnosis of blood infections, things like converting customer orders into parts and specifications.
So, actual applications in the world. People started taking each of them and thinking about the expert knowledge that you have in that particular application, formulating that in these expert-based systems, and then putting an AI on top of it that does actual work. So that was really cool. So the contributions of this era were that, first off, we had real applications that actually impacted industry, and this idea of putting in domain knowledge, knowledge that actually helps curb the exponential growth, was the thing that was really powerful at this time. But why didn't it work? It didn't work, right? This was the second rise, and we had another winter, the second winter of AI. The second winter of AI came because there were a bunch of problems. One of the problems was that knowledge in general is not deterministic, right? We have a lot of uncertainty when we think about knowledge, and these systems were not able to encode uncertainty the way you'd want. And in addition to that, there was a lot of information, right? If you think about any of these expert-based systems, it requires a lot of manual effort to write down these rules and all of these relationships between all the different subparts of the system. An example of that is SHRDLU. SHRDLU is one of the first natural language understanding computer programs. This was written by Terry Winograd, who is at Stanford now; I think this was when he was at MIT.
And he created this computer program where you have this blocks-world environment, and you can actually have a person interact with it. Maybe the person says, "pick up a big red block," and the computer says okay, because the computer understands the relationships between big and small, and red and the other colors, and where the blocks are placed, and what can be picked up and what cannot be picked up, right? So this had a whole bunch of relationships and rules around it, and you could actually converse with this computer program, and that was really powerful. But even Terry himself, a couple of years after, had a statement talking about his worries about how programs like SHRDLU are not going to solve the problem; they're not going to go all the way. He was saying this is kind of a dead end in AI: thinking about these complex interactions, there are just a lot of them, and it just doesn't seem feasible to write down all the rules you would need between each one of your subparts and the other subparts, and there are no easy footholds. So at this point people were thinking this complexity barrier is not really going to allow these AI systems to do cool, big things. So then we had the second winter of AI. And then finally, there is this third rise of AI that we are still on, and God knows where it's going to [LAUGHTER] come down again. This is the modern view of AI that started around the 1990s, and I would argue that the reason we had this new rise is due to two main things. One is the idea of using probability in AI. This was not a thing that existed from early on; this is actually due to the efforts of Judea Pearl, who was very adamant about using Bayesian networks in AI to model uncertainty.
So finally people were able to bring ideas from probability in to model uncertainty. Because if you remember, expert-based systems were super deterministic; we didn't really have a way of talking about uncertainty. But Pearl's idea was, let's bring probability into this. Let's actually talk about uncertainty, and let's have our models make predictions. And then the second reason is machine learning, right? People started inventing support vector machines and tuning parameters, and from that point on, we started seeing the rise of neural networks and the fact that you have lots and lots of data now, and that can actually help us build better models. So given that we have these two big new viewpoints in AI, we have started seeing all these new advances. And then one point that I just want to make at the end of this is that AI is really a melting pot of a bunch of ideas from different fields, right? Not all of these are from pure AI. If you think about it, Bayes' rule comes from probability, least squares comes from astronomy, first-order logic from logic, maximum likelihood from statistics; we have ideas from neuroscience, econ, optimization, algorithms, control theory. We can think about value iteration, which came from Bellman, from the field of control theory. So if you think about artificial intelligence, it just brings all these different ideas from these different fields together to solve this AI problem, and in general, I think it's a good idea to be mindful of that and to be open to that, because new insights and ideas really come from having this broader view of things, and the boundaries you put between different fields are really superficial and don't really need to be there. All right. So that was the history of AI: rise, downfall, rise, downfall.
And then we're on this last rise now. So let's just think a little bit about what we have achieved, what are the cool things we've had in the past couple of years, and then what can go wrong and what should we worry about in the meantime. Okay. All right. So I think I've argued enough that AI is everywhere, right? AI is being used in consumer services, in advertising, in healthcare, transportation, manufacturing. And AI is going to make decisions now, right? Because it has shown all these advances, we are using AI these days to make decisions for us. To make decisions for our education, to make decisions for credit, employment, advertising, healthcare, all of these different applications. So if AI is making decisions, then we should actually be really careful about how AI is making those decisions. We should think about all the possible things that could go wrong and understand the system fully before letting it make decisions for us. So what are some of the advances? One of the huge advances we have seen in recent years is Google's neural machine translation. This was kind of a huge advancement when it comes to machine translation. The idea is you could have a bunch of different languages, and you can have a way of translating, let's say, from Korean to English and English to Korean, and you can have a lot of data on that, and that would be great. And then maybe you can have Japanese to English and English to Japanese and train on that. And that's a lot of data, and that's all great.
But then if you put all of that data in the same, like, melting pot, then what you can do is actually go from Korean to Japanese without having any data that goes directly from Korean to Japanese, and that's kind of exciting, right? Because you had no data for that, and by putting all of these together you can actually make a lot of advances in terms of language translation. So this came out around 2016, lots of excitement; language translation just became so much better after that. There are still problems, though. Here is one of them: the problem of bias. So let's say that you're starting from a language like Hungarian that doesn't have gender. And then you start from this language and you go to English, a language that has gender. And this is the translation that you're gonna get. You're gonna get "she is a nurse" and "he is a scientist," and "she is a baker," and then of course "he is a CEO," right? So you're gonna get all these gendered pronouns here, and just by looking at it, if the algorithm were neutral, there would be no reason for it to pick these particular genders. But the reason it's picking these genders is that this algorithm is trained on data, and our data is biased. If our data is biased, the algorithm, which has learned to pick up patterns, is going to pick up this bias and sometimes even reinforce it. So we're going to see all sorts of these weird behaviors. I wouldn't say weird, it's biased behavior, but we should actually be aware of this if we are building these types of systems. And in addition to bias (bias, at least, I can explain), you might get weird behaviors that are even harder to explain. So you might have a text that looks like this: dog, dog, dog, dog, dog.
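As a cartoon of how biased data turns into biased output (the counts below are entirely invented, and this is not how neural translation actually works): if a model resolves an ungendered source pronoun by a maximum-likelihood choice over pronoun-occupation co-occurrences in its training corpus, skewed counts produce skewed translations.

```python
# Hypothetical (occupation, pronoun) co-occurrence counts from a "corpus".
# The skew here is invented purely to illustrate the mechanism.
corpus_counts = {
    ("nurse", "she"): 900, ("nurse", "he"): 100,
    ("scientist", "she"): 200, ("scientist", "he"): 800,
}

def translate_pronoun(occupation):
    # Maximum-likelihood choice: pick the pronoun with the higher count.
    return max(["she", "he"], key=lambda p: corpus_counts[(occupation, p)])

for job in ["nurse", "scientist"]:
    print(job, "->", translate_pronoun(job))
# A genuinely gender-neutral source sentence comes out gendered,
# purely because the training data was skewed.
```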
And that could just be translated to something else that is kind of crazy. So understanding what goes on with these kinds of black-box systems is a lot of times a little challenging. And I think there's a lot of research around trying to better understand and give transparency to some of these systems and understand what goes on. So that was language, that was translation. Another example is image classification. Image classification has just become so much better over the years. Around 2015, it hit human performance. So we have image classifiers that are just much better than humans. And that is amazing, right? That is really exciting, because perception is a difficult problem. If I can do image classification, then I can use these systems on real-world systems, like my phone or my autonomous car, and that's really exciting. But there are again a lot of issues around this. One of the issues, and we actually discussed this in the first lecture, is the idea of adversarial examples. So I can have AlexNet, a system that does image classification. And AlexNet is going to classify these images on the left perfectly fine. That's a school bus, that's a temple; it's going to classify them correctly. But then what I can do is add some sort of noise to them. And when you add this noise to this picture, you're gonna get this third picture. And that kind of looks like a school bus to me; I can't tell the difference between the first and third pictures. But what's going to happen is that AlexNet is going to predict ostrich for all of the pictures on that side on the right.
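The mechanism behind this can be sketched with a toy linear model (a hedged illustration in the spirit of gradient-sign attacks, not the actual method used against AlexNet): in high dimensions, nudging every pixel by a tiny amount in the direction of the corresponding weight's sign shifts the score by eps times the sum of the absolute weights, which is enormous even when each individual pixel change is imperceptible.

```python
import random

# Toy linear "image" classifier: predict class 1 if w . x > 0, else class 0.
random.seed(0)
d = 10000                                   # number of "pixels"
w = [random.gauss(0, 1) for _ in range(d)]  # frozen classifier weights
x = [random.gauss(0, 1) for _ in range(d)]  # a "clean image"

score = sum(wi * xi for wi, xi in zip(w, x))
weight_mass = sum(abs(wi) for wi in w)

# Smallest per-pixel budget guaranteed to flip the sign of the score:
# moving each pixel by eps against sign(w) shifts the score by eps * weight_mass.
eps = (abs(score) + 1e-3) / weight_mass
direction = -1 if score > 0 else 1
x_adv = [xi + eps * direction * (1 if wi >= 0 else -1) for wi, xi in zip(w, x)]
score_adv = sum(wi * xi for wi, xi in zip(w, x_adv))

print(f"per-pixel budget eps = {eps:.5f}")      # tiny per-pixel change
print(f"score {score:.1f} -> {score_adv:.1f}")  # the prediction flips
```

The per-pixel budget shrinks as the dimension grows, which is one intuition for why high-dimensional image classifiers are so easy to perturb.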
So that's not great, right? Having these adversarial examples is not that great, because the system is not really robust; your AI-based image classifier is not very robust when it comes to just adding these sorts of adversaries. And after this work came out, people started writing all sorts of papers about how to create adversarial examples, how to be robust to a particular adversarial example, then breaking that again and creating more robustness, and a lot of back and forth. One of my favorite papers in this area actually came out this year; this is from Shamir and others. And what they have shown is, for a specific type of neural network, when you have ReLUs, you can always make the system classify the number, in this case, as something else. So let me give a concrete example. This is the MNIST dataset. I have the digits 0 through 9 in it, so 10 classes, right? And what Shamir and others have shown is you can pick this seven, and since you have 10 classes, you need at most 11 pixels. So pick 11 pixels (they pick the 11 pixels carefully, so it's not just any 11 pixels) and change them as much as their algorithm tells you, and the seven is going to be classified as a zero. So you can make this seven be classified as any of the digits, 0 through 9, by just picking the right 11 pixels to modify, and they tell you how much to modify them. Which is crazy, because: give me anything, and I'll create this adversarial example for you to misclassify it as something else, and you only need 11 pixels because there were 10 classes here. There are a bunch of assumptions that I haven't really discussed; one of the assumptions is that the way you are modifying these pixels is unbounded in this picture.
So the greens and reds are just very high and very low values; they're not actually between 0 and 255. They're numbers greater than 255 and less than 0. But they've also shown that if you're allowed to use more than 11 pixels, let's say 20 pixels, you can actually fit it between 0 and 255 to make it a realistic figure. Anyway, lots of work around this, lots of exciting theory work and practical work thinking about adversarial examples when it comes to images. But what are the implications of this? Why are we so scared of this? Because these systems are going to run on our phones doing image recognition, on our cars doing recognition of other vehicles, and they can easily be attacked. A group at Berkeley, Dawn Song's group, what they did was take stop signs and put stickers on them. Again, the stickers are at the right place, the place they wanted them to be. But then the stop signs are now classified as, like, a speed limit sign, which is not what you want your autonomous car to detect. Or here's another example, another work, where you have these pictures and you put these funny glasses on them. And when you put the glasses onto the pictures, they are classified as a celebrity's pictures. So you can attack these systems, not easily, but systematically. [LAUGHTER] And that can actually affect the security of your vehicle or your image recognition system. All right. Another example that's pretty challenging is around reading comprehension. So what is reading comprehension? If you remember your SAT- or GRE-type exams, you have a lot of text, and you have to read that text and answer questions. So you'd have a question like this: the number of new Huguenot colonists declined after what year?
So this is the thing you've got to answer. So Google put out this system, BERT, which is actually really amazing; it can do this reading comprehension. And BERT can answer this question perfectly. It's gonna say 1700, and that's great. But what people have shown is you can actually just add an extra sentence at the end of this text that has nothing to do with the rest of the text. It has the word "year" in it, but it doesn't have anything to do with this particular question that's asked. And now BERT is going to respond 1675. So you can again easily trick these systems, and the way that they are tricked is just not the same way that humans are tricked. And that is, I guess, weird to people. It's kind of expected, but it is something that we are dealing with these days. All right. So, another example; I'm basically gonna talk about a bunch of examples throughout the rest of the lecture. Another example I wanted to briefly talk about is this idea of optimizing for clicks. Is that a good thing? Is that a thing we should be doing? Sometimes we know what the objective, the reward function we are writing for our system, should be. We wanna do machine translation; we know exactly what we want to do, and it's very clear. But sometimes it's actually not clear what we should be optimizing. Facebook, let's say, wants to make money. Should they optimize for clicks? Is that an ethical reward function to put in, and what could be some of the effects of optimizing for clicks? Let's say that I have a reinforcement learning algorithm. I'm making this up. Let's say I have a reinforcement learning algorithm that wants to optimize for clicks, and I have my own Facebook account, and it's optimizing clicks from Dorsa, right?
So this reinforcement learning algorithm what it can do is, it can learn that, well, maybe if I show outrageous articles to Dorsa, Dorsa is more likely to click on these outrageous articles and I'm gonna get more rewards because I'm optimizing for clicks. So that's all good, right. That's expected. But another thing that the reinforcement learning algorithm by itself can figure out, is that if I show outrageous articles to Dorsa, Dorsa is going to become more and more outrageous, and then I'm gonna get more clicks because then I'm going to show more articles, and it'll be great. And then, that's kind of amazing because these systems are not interacting in a closed loop world. They're interacting with other systems like humans, we're also changing, we're also adapting, and this system through this RL algorithm by itself could figure out how to change me to like more outrageous things. And then we would end up in a situation where we are right now with very polarized views, right? Because- because you're optimizing for clicks. So- so it's quite interesting to think about, what are the objectives we should be optimizing and what world are we dealing with? We're not always in a Pacman world where we can control everything, right? Usually these systems are running in a real society where there are people being affected by them and their responses are going to change. And the changes in the responses are going to affect things even more. So- so it's interesting to think about these feedback loops. And speaking of humans, I think, another thing- another question that comes up usually when it comes to robotics, or when it comes to AI, is, well, what is it that humans want? Like in general, if I- even- even in the case of robotics it's a big problem. I have a robot arm and I want my robot arm to pick up- pick up this object. That's all I want, right. This is the thing that me as a human wants, right?
I want a robot arm to pick up- the robot arm to pick up this mobile phone. So back in the day, this was called good engineering, right? Good engineering was good engineers would write down the correct reward function, the correct objective, and the robot arm would go and pick up the object and everything would be great. The problem is that doesn't always work, right? It's really hard to write the correct reward function and get the robot to do that. And because of that, people these days are more interested in trying to do things that are around imitation learning or things around preference based learning, where you just try to learn from how humans do it. Like how a human would do this as opposed to just a human sitting down and saying, well, this is the object that I want you to pick up-pick up the robot arm because- because the robot might end up doing very weird things. Like an example of that that commonly comes up is, this vacuum cleaner example. Let's say- let's say you have a vacuum cleaner. You have a robot vacuum cleaner that wants to clean your house. And your objective for the vacuum cleaner is to suck up dirt. That- that's all it needs to do, okay. So you write your objective. Everything is great. And one way that the vacuum cleaner could suck up dirt is it could just go to a place, suck up dirt, put it out, suck up dirt, put it out, suck up dirt, put it out, and just keep doing that, right. Obviously, you didn't want your vacuum cleaner. You don't- you don't want that vacuum cleaner because you didn't want your vacuum cleaner to do that, right? That wasn't the thing you were thinking. But the objective of go suck up dirt, could end up in that behavior. Another behavior it could end up with is you could have your vacuum cleaner and your vacuum cleaner by itself could just break its own like sensors, so now it doesn't sense dirt. Now you're good because there are no dirt around us because we can't see them. 
I'm gonna close my eyes so I can't see the dirt. So I'm not going to suck up anything. So- so all of these are things around reward hacking. Like if you- if you just write the reward function that you think the robot should optimize, it's not necessarily going to work. And thinking about what are some good objectives that you should optimize is actually a really difficult problem. And this is something that I'm very interested in in my group, we focus on that a lot. Actually, another work that has recently come out on this is this work by, this- this new book by Stuart J. Russell on- on Human Compatible. And- and basically, what Stuart is kind of arguing is, is the fact that there is a mismatch between what humans actually want, what is the reward function that's in their head, and- and what is it that the AI system or the robot thinks the human wants. And- and those two are not always the same thing and that could cause a lot of issues around it. So interesting book, take a look. All right, what else can go wrong? So, um, generating fake content. That was the thing that came out a couple years back. So- so you could create like videos that just- or images, uh, that- that look exactly like, uh, Obama in this case. And- and you can just put fake content on that. And- and that, that again raises an ethical question, right. Just because you can build it, should you build it or not. Like- like we can build that. We have the- we have the system to create ca- fake content. It sounds fun, but- but should we do it just because we can do it? Another place that this question comes up, and- and I do encourage you guys in general to think about that in your future like when you can build something, but should you build it? And- and yeah. Another place this comes up is in autonomous weapons systems. So, um, having, like thinking about military and thinking about having autonomous weapons, right.
Like we would have- we could pote- we can have autonomous weapons these days, right? We can have systems that automatically detect an enemy and- and automatically just- just do- like just do the job, right. Yeah, you- do the task. So, um, should we do it? Should we have autonomous weapons systems or does there need to be a person in the loop? And if- so- so just like thinking about it, like, let's say that, yeah, we do not- we never want to have autonomous weapons systems and we always want to have a person in the loop. Well, why? Like- like what is it about the person that we want to be in the loop? Like- like that kind of tells us that there is something about the person. Maybe it's empathy, maybe it is something about what- what people know, or what people have, that the-the autonomous system doesn't have yet. And just like understanding that, I think, by itself is a very interesting problem. And- and there's a whole debate around us like of- of aut- autonomous weapon systems, should we have them? If we don't have them, what if other countries have them? Like how do we go about it? Uh, should we put a moratorium on it, and- and lots of debates around these types of systems. So- so in general I do encourage you to think about some of these ethical aspects of building AI systems. All right, next up, fairness. So, um, so fairness is a big problem. [NOISE] I think a lot of you know this already, right. So- so we might have a classifier that prob- like on your majority of dataset, perfectly separates your majority of datasets, um, such as the- the picture in the left, and then you might have some data points from minority group. And- and the classifier just does exactly the opposite thing for the minority group. So- so if you- if you put all these datas together, then you're probably going to get data- a classifier that looks like the first one, and it's just not gonna work on the minority dataset. 
And- and that is kind of, uh, that's a big problem, especially when it comes to applications like let's say healthcare. Like you might have different populations and a drug might just act very differently in different populations. And the question is, how should we address these fairness questions? And one way to go about it is- is to think about our errors. So- so, uh, you might have two classifiers and both of them might give you 5% error. Uh, but one of them could give you 5% random error and the other one could give you 5% systematic error. And- and I think it's pretty important to think about if you're getting systematic error or random error and what type of error on what population are you getting an- and- and that could address some of these questions around fairness. There's a lot of work actually around fairness these days. There's a- there's a conference around it, uh, around fairness, accountability, and transparency. This is work by Moritz Hardt. So if you're interested in this, uh, take a look at some of the- some of the work from Moritz's group. Um, another example of fairness, I think, we did talk about this in the overview lecture, uh, is around this criminal risk assessment. So, um, so Northpointe is a company that put out the system called COMPAS. And what COMPAS does is- is it predicts if- if a criminal is going to re-offend or not. The risk of a criminal re-offending or not. And it's going to give a score of 1 to 10. So- so that's what the system does. And- and they put out this system, this system was actually being used. And what happened was ProPublica which is a non-profit, came out and did a study and ProPublica showed that given that an individual did not re-offend, African Americans were twice as likely to be wrongly classified five or above, okay. So- so that just seemed not fair. So- so ProPublica put- puts out this article being like, well, the system is not fair. Why are we using this?
Like- like it doesn't satisfy this fairness criteria. And then Northpointe actually did further studies. Northpointe did further studies and they showed that, well, they said, no, our system is fair because we are looking at this definition of fairness. Our definition of fairness is that given a risk score of seven, 60% of whites reoffended and 60% of blacks reoffended, so we wanna make sure that we get the same percentage to be fair and- and that's our fairness- fairness property. We do satisfy that. And this kind of, uh, thes- these two fairness definitions, um, kinda made a group of, uh, researchers, um, from, actually, Stanford, Cornell, a bunch of different places to start thinking about definitions of fairness. And what they've actually shown is that these two definitions of fairness, um, they are not going to be satisfied at the same time. They're always going to go against each other. You can't have both of them at the same time. So- so then if that is the case, then what is the right definition of fairness that- that we should use? Right. If we can't have both of those at the same time, then- then how- how do we make sure that we can use this system, or should we ever use these systems again? So, um, lots of interesting questions about formalizing fairness. Omer Reingold, uh, here in the CS department, works a lot around ide- ideas of fairness from the algorithmic side of things. So if you're interested in that, you can take Omer's classes, learn- learn from- about that. And- and kind of going back to this idea of are algorithms neutral. Like when you talk to people who haven't taken necessarily algorithm classes or AI classes, they usually think, well, yeah right? Algorithm's gotta be neutral, like they're doing math, they gotta be neutral. But as you have seen already, they're not really neutral because by design we really want our algorithms to pick up patterns. That's what they're good at. 
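The tension between those two definitions can be made concrete with Chouldechova's identity, which says that if a score is equally calibrated across groups (equal positive predictive value) and has equal false negative rates, then the false positive rate is completely determined by each group's base rate of re-offending. The numbers below are made up for illustration:

```python
def forced_fpr(base_rate, ppv, fnr):
    """Chouldechova's identity: FPR = p/(1-p) * (1-PPV)/PPV * (1-FNR).
    Holding PPV (calibration) and FNR equal across groups forces groups
    with different base rates p to have different false positive rates."""
    return base_rate / (1 - base_rate) * (1 - ppv) / ppv * (1 - fnr)

# Two hypothetical groups with different base rates but identical PPV and FNR:
fpr_a = forced_fpr(base_rate=0.3, ppv=0.6, fnr=0.3)  # lower base rate
fpr_b = forced_fpr(base_rate=0.5, ppv=0.6, fnr=0.3)  # higher base rate
```

So an error-rate notion of fairness (like ProPublica's) and a calibration notion (like Northpointe's) can only hold simultaneously when the groups' base rates match.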
They're good at picking up patterns and- and biases and all sorts of weird things that we see, uh, in our data. They- they're just in our data. There are patterns in our data, and these algorithms just will pick them up and- and even reinforce them at times. And- and that's why we see bias in our algorithms and all of these issues around fairness and- and security and all these other things, uh, in our data. And another problem that comes up is this feedback loop that I was talking about earlier, right. So- so if- if algorithms are picking up patterns, well, they're putting out, you know, those patterns, if they have biases, they're putting out those biases in a world where there are humans, and those humans are observing those biases and can get even more biased and give more biased data. And- and this could be like this negative feedback loop that could go forever. So again, we gotta be really careful about what we were putting out and- and what it is- like how it is affecting the bigger society. Next stop is privacy. I guess I have like a couple of more things around these and- and after that I'll- I'll wrap up. Um, another- another thing- another issue in general is- is privacy, right? So we're using a lot of data and in- in a lot of our algorithms, and- and in general, uh, some of them could be- could be sensitive data and we don't want to- we don't want to actually reveal that sensitive data. So- so be- so- so to address that- one way to address that is, instead of putting out the actual data, putting out the right statistics that gives us the right information. So for example, you might want to com- uh, compute the average statistics. And like if- if you're asking if someone has cancer or not, instead of getting the yes-no answer, you could just- you might just need the average statistics and that would just be enough for you. 
So- so- so in general when you're collecting data, you shou- you should- you could randomize your data or you could change your data so- so you can get the average statistics as one way of protecting privacy. Another way of protecting privacy is in general randomized responses. So- so you might have a question of, do you have a sibling? So- so that is a question you can ask a user. And the user might not wanna reveal exactly if they have a sibling or not. So- so one way of responding to that, is the user could flip two coins. And then if both of them come- come up heads, then they answer yes-no randomly. Otherwise, they answer yes-no truthfully. So- so based on the answer that you get from a particular user, you wouldn't be able to tell if that particular user has- has a sibling or not. But you could actually compute the- the true probability of that. Because now you have this observed probability: three-fourths of the time, they're telling you the truth, one-fourth of the time they- they're answering randomly. So- so then you have this observed probability and then from that you can recover the true probability, and that is probably enough for like the type of data that you- you- you need to deal with. So- so randomized responses in general could be one way of going about some of these privacy issues. Um, another issue that comes up is- is causality. So, um, this relates a little bit to- to variable-based models, right, so- so you might wanna look at the effects of something. Let's say you're- you want to look at the effects of a treatment on survival. And this is your data. So you have, for untreated patients, 80% of them survived, and for treated patients, 30% survived. This is your data. So the question is, does treatment help or not? How many of you think treatment helps? Treatment helps. Think carefully. [LAUGHTER] So- so the answer is actually- who knows?
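The two-coin randomized-response scheme described above can be sketched directly. The key algebra: P(yes observed) = 3/4 * P(yes true) + 1/4 * 1/2, so the true rate is recoverable from the observed rate even though no single answer can be trusted:

```python
import random

def randomized_response(truth):
    """Two-coin scheme: with probability 1/4 (both coins heads), answer
    uniformly at random; otherwise answer truthfully."""
    if random.random() < 0.25:
        return random.random() < 0.5
    return truth

def recover_true_rate(observed_yes_rate):
    # observed = 3/4 * true + 1/4 * 1/2  =>  true = (observed - 1/8) / (3/4)
    return (observed_yes_rate - 0.125) / 0.75
```

For example, if 87.5% of the noisy answers are "yes," the recovered true rate is (0.875 - 0.125)/0.75 = 1: everyone has a sibling, even though every individual answer was deniable.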
[LAUGHTER] Because, well, if you think about it, the- there, sick people are probably more likely to take treatment, right? Like i- i- if sick- sick people are more likely to undergo treatment, then- then you can't really, like, take this data at- at its face and- and- and say, well, treatment helped or didn't help. Beca- because in your- your data there's actually this- this extra causal factor that you didn't really consider: the fact that, well, those people who took treatment were sick. So you have to actually consider how the sickness is going to affect the- the rate of survival or not. And then, finally the last- I think this was the last thing I want to briefly talk about is- is this idea of interpretability versus accuracy, right? So- so you've seen kind of this rise of neural networks in a lot of applications and most of them are not safety critical applications. We haven't really seen like things like neural networks in safety critical applications. I guess you've seen it in cars and we- and you've started seeing it in autonomous cars. But let's say airplanes or- or like other types of safety critical systems, health care systems. And- and one question that always comes up is, should we use these systems in safety critical settings? Because as we're using them, we're gonna lose interpretability. So- so there's this work by Michael Cook in the first group, where, uh, they're basically looking at air traffic control and- and- and they're looking at the- the system that runs on aircraft. And then previously, it was basically a bunch of rules that the system need- needed to follow, but it was interpretable. Like- like they could actually interpret it and understand what it does. Uh, and- and the systems- aircraft systems would use that.
But Michael has been working with this new system called ACAS X and ACAS Xu where, uh, they are basically trying to replace that with just, let's say, a POMDP, a partially observable Markov decision process that- that does the same job but it's not necessarily do- it doesn't have necessarily the same level of interpretability. But it's pretty accurate and you can prove that it's even accurate. It's not even a neural network, right? I- it is a thing that you can actually like enumerate. And- and the question is, what are we willing to put in on our safety critical systems? If you lose transparency, if you lose interpretability, are we still willing to like put in these systems that we think they're statistically more accurate? Um, and in general, how can we increase interpretability and transparency of- of some of these systems that we are building? Because that is useful when we come- we think about these systems. So- so AI is important, [NOISE] I think. I think I've convinced you guys that AI is important. And then, um, all these different governments also think that AI is important. In 2016, uh, the White House put out, uh, an article about some of the directions that we should invest money in, and a lot of them were just around AI. So making long-term investments in AI research, thinking about human AI collaboration, thinking about ethical, and legal, and societal implications of AI, safety and security of AI systems. So all of these things that we have been discussing so far are really challenging problems and then everyone's excited about them and everyone wants to put in- put in a lot of energy and time and money in it. And- and in this document, uh, well, this document said, big data analytics have the potential to eclipse long-standing civil rights protections in how personal information is used in all sorts of applications like, housing, credit, employment, and so on. And Americans' relationships with data should expand, not diminish their opportunities.
And- and some of the things that we have discussed so far, right, like biases, fairness, safety, all of these issues are not necessarily satisfying this last sentence, right, like if- if you're building these- these systems, we should actually be really careful about some of these implications. And- and as I was saying earlier, like around this there is a new conference. Uh, there's this FAT ML conference around fairness, accountability, and transparency. And kind of the guidelines of- of- of this- this new community that- that's being built around AI, is that we gotta think about the fact that there's always a human that's behind these algorithms. So there's always a human ultimately responsible behind what is going to happen. And then you can't just say, well, the algorithm did it, right? Like in- in general, that's just like the wrong way of going about it, because there was a human designer, one of you guys, one of us, right, that's going to write these algorithms. And- and I do really want you guys to think about some of these principles as you go further in your- in your career and- and you think about building these sort of AI algorithms. And just to end on a more positive note. Um, there's enormous potential for actually positive impact fo- for AI systems and- and please just use it responsibly. With that, I wanna thank you all guys for this exciting quarter and please fill out the surveys, uh, on Axess. Thanks. [APPLAUSE]
Stanford CS221: Artificial Intelligence: Principles and Techniques, Autumn 2019. Bayesian Networks 3: Maximum Likelihood.

Okay. Let's get started again. Um, so before I, uh, proceed to Bayesian networks, I'm gonna talk about a few announcements. So as you probably know, car is due tomorrow, um, p-progress is, uh, due Thursday, um, and then finally the, the kind of the big one, uh, is the exam which is next Tuesday. So everyone, if you're in the room or if you're watching from home, please remember to go to the exam. Um, um, the exam will cover all the material from the beginning of the class up until and including today's lecture. So all Bayesian networks, no logic. Um, it doesn't mean you shouldn't use logic on the exam. Uh, reviews are going to be in section, uh, this Thursday and Saturday. So you've got not one, but two review sessions. The first one we'll talk about, uh, reflex and state-based models and the second we'll talk about variable-based models. Um, at this point, all the alternative exams have been, uh, scheduled. Um, if you can't make the exam for some reason, then, uh, really please come talk to us right now. Um, and just a final note is that the exam is, uh, the problems are a little bit different than the homework problems. They require more kind of problem-solving, and the best way to prepare for the exam is to really go through the old exams and get a sense for what kind of, uh, uh, problems and questions you're gonna be asked. Question? Where is the Saturday session? Where is the Saturday session, does anyone know? We don't know yet. We don't know yet. It will be posted on Piazza. Any other questions about anything? I know this is, uh, a lot of stuff, um, but this is probably going to be the busiest week, and then you get Thanksgiving break. So that will be great. Okay. So let's jump in. So two lectures ago, we started introducing Bayesian networks.
So Bayesian networks is a modeling paradigm where you define a set of variables, which capture the state of the world, and you specify dependencies depicted as directed edges between these variables, and given the graph structure, then you proceed to define distribution. So first, you define a co- local conditional distribution for every variable given the parents. So for example, H given C and A and I given A, and then you slam all these local conditional distributions, aka factors together, and it defines a glorious joint distribution over all the random variables, okay? And you think about the joint distribution as the source of all truth. It is like a probabilistic database, a guru or oracle that can be used to answer queries. So this- in the last lecture, we talked about algorithms for doing probabilistic inference where you're given a Bayesian network, which defines this glorious joint distribution, and you're asked a number of questions, and questions looked like this: Condition on some evidence, which is a subset of the variables that you have observed, what is the distribution over some other subset of the variables which you didn't observe? Then we look at a number of different algorithms in the last lecture. There is the forward-backward algorithm, which was useful for doing filtering and smoothing queries in HMMs, this was an exact algorithm, then we looked at particle filtering which happens, uh, to be useful in a case where your state space, the number of varia- uh, the values that a variable can take on, it can be very large. Um, and this is an approximate, but in practice it tends to be a good approximation. Um, and then finally, particle filtering which- sorry, and Gibbs sampling, which is this much more general, uh, framework for doing probabilistic inference in, um, arbitrary, uh, factor graphs, and this was again approximate, okay? So so far, we've, uh, bit off two pieces of this, uh, modeling inference learning triangle. 
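The "slam the factors together" step above is literally a product: the joint probability of a full assignment is the product of each variable's local conditional probability given its parents. A minimal sketch, using made-up tables for the C, A, H, I network mentioned (H depends on C and A; I depends on A; all numbers are hypothetical):

```python
def joint_prob(assignment, cpds):
    """Joint probability of a full assignment = product over variables of
    the local conditional probability given the parents' assigned values."""
    p = 1.0
    for var, (parents, table) in cpds.items():
        parent_vals = tuple(assignment[pa] for pa in parents)
        p *= table[parent_vals][assignment[var]]
    return p

# Hypothetical local conditional distributions for the C, A, H, I network:
cpds = {
    'C': ((), {(): {0: 0.5, 1: 0.5}}),
    'A': ((), {(): {0: 0.8, 1: 0.2}}),
    'H': (('C', 'A'), {(0, 0): {0: 0.9, 1: 0.1}, (0, 1): {0: 0.5, 1: 0.5},
                       (1, 0): {0: 0.4, 1: 0.6}, (1, 1): {0: 0.1, 1: 0.9}}),
    'I': (('A',), {(0,): {0: 0.7, 1: 0.3}, (1,): {0: 0.2, 1: 0.8}}),
}
```

For instance, `joint_prob({'C': 1, 'A': 0, 'H': 1, 'I': 0}, cpds)` multiplies P(C=1) P(A=0) P(H=1 | C=1, A=0) P(I=0 | A=0) = 0.5 * 0.8 * 0.6 * 0.7.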
We've talked about models, we've talked about inference algorithms, and finally today we're gonna talk about learning. And so the question in learning, as we've seen repeatedly throughout this, uh, course, is: Where do all the parameters come from? So so far, we have assumed that someone just hands you this, these local conditional distributions which, uh, have numbers filled out. But in the real world, where do you get these? And we're gonna consider a setting where all of these parameters are unknown and we have to figure out what they are from data, okay? So the roadmap, we're gonna start- first start with supervised learning, which is going to be the kind of the easiest, easiest case, and then we're gonna move to unsupervised learning where some of the, uh, variables are gonna be unobserved, okay? Any questions before we dive in? [NOISE] Okay. Let's do it. So the problem of learning is as follows. We're given training data and we want to turn this into parameters, okay? So for the specific instance of, uh, Bayesian networks, training data looks like this: an example is an assignment to all the variables, okay? This will become clear in an example. And the parameters are all those local conditional probabilities that we now assume we don't know, okay? So here's a question: Which is computationally more expensive for Bayesian networks? Um, is it probabilistic inference given, uh, the parameters or learning the parameters given data? Um, so how many of you- just use your intuition, how many of you think it's the former, probabilistic inference is more expensive? Okay. One maybe. How about how many of you think learning is more expensive? Yeah. That's probably what you would think, right? Because learning seems like there's just more unknowns, um, and it can't- how can it possibly be, um, easier. It turns out that it's actually the opposite. Yeah. So good job.
[LAUGHTER] Um, and, this will hopefully be a relief to many of you, um, because, uh, I know probabilistic inference gets a little bit, uh, quite intense sometimes and learning will actually be, um, maybe not quite a walk in the park, but, uh, it will be, um, uh, a brisk stroll in the park. Um, and then when we come back to unsupervised learning, it's gonna get hard again. So, um, at least in the fully supervised setting, it should be easy and intuitive. So what I'm gonna do now is going to build up to the general algorithm by a series of examples of increasingly complex Bayesian networks. So here's the world's simplest Bayesian network. It has one variable in it. [NOISE] Let's assume that variable represents the rating of a movie. So it takes on values 1 through 5. And to specify the local conditional distribution, you just have to specify the probability of 1, the probability of 2, the probability of 3, probability of 4, and probability of 5, okay? So these five numbers represents the parameters of the Bayesian network, okay? By fiddling these, I can get different distributions. Okay. So suppose someone hands you this training data. So it's fully observed, which means I observed- one time I observed the value of R to be 1, um, the second day I observed the value to be 3, third day I observed it to be 4, and so on, okay? So this is your training data, okay? So now the question is: How do you go from the training data here to the parameters, okay? Any ideas? Just use your gut, what does your gut say? Count the number of [NOISE] and then divide them. Yeah, count and then divide or normalize, okay? So this seems a very natural intuition. Um, later, I'll justify why this is a sensible thing to do. But for now, let's just use your gut and see how far we can get. Okay. So the intuition is that, um, the probability of some value of r should be proportional to the number of times it occurs in the training data. 
So in this particular example, I look at all the possible values of r 1 through 5 and I just see 1 shows up once, 2 shows up 0 times, 4 shows up 5 times, and so on. So these are the counts of the particular values of r, and then I simply normalize, okay? So normalization means adding the counts together, which gives you 10, and dividing by 10, which gives you actual probabilities, okay? It's pretty easy, huh? Yeah? Good. Okay. So let's go to two variables. So now you improve your model of, uh, ratings. So now you take into account the genre. So genre is a variable G which can take on two values, drama or comedy, and the rating is the same variable as before. And now let's draw the simple Bayesian network which has two variables, um, and the local conditional distributions are P of G and P of R given G, okay? So again, I'm gonna give you some training data which specif- each training example specifies a full assignment to all the variables. So in this case, it's d, 4, d- and this one it's d, 4 again, uh, this one is c, 1 and so on. And the parameters here are, uh, the local conditional distributions, which is P of G and P of R given G, okay? Okay. So let's proceed to do this, um, and here following our nose again. Uh, we're going to estimate each local conditional distribution separately, okay? So this is may not be obvious why separate- doing it separately is the right thing to do. But trust me, it is the right thing to do and it certainly is the easy thing to do. So let's, uh, just do that for now. Okay. So again, loo- if we look at P of G, so we have to look at only this data restricted to G and we see that d shows up three times, c shows up twice so I get these counts and I normalize, and that's my estimate of the probability of G, okay? And now let's look at the conditional distribution R given G. Again, I'm going to count up the number of times that these variables, uh, G and R show up. So d, 4 shows up twice; d, 5 shows up once and so on; count and normalize. 
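Both estimates above are the same count-and-normalize recipe. A sketch, where the one-variable ratings list is hypothetical but consistent with the counts quoted (1 appears once, 2 never, 4 five times, ten points total), and the two-variable data is the five (genre, rating) pairs from the example:

```python
from collections import Counter

def estimate_marginal(data, values):
    """Maximum-likelihood estimate of P(r): count occurrences, normalize."""
    counts = Counter(data)
    return {v: counts[v] / len(data) for v in values}

def estimate_conditional(pairs):
    """Estimate P(r | g): count (g, r) pairs, then normalize within each g."""
    joint = Counter(pairs)
    marginal = Counter(g for g, _ in pairs)
    return {(g, r): n / marginal[g] for (g, r), n in joint.items()}

ratings = [1, 3, 4, 4, 4, 4, 4, 5, 5, 5]  # hypothetical, matching the quoted counts
movie_data = [('d', 4), ('d', 4), ('d', 5), ('c', 1), ('c', 5)]  # from the example
p_r = estimate_marginal(ratings, values=[1, 2, 3, 4, 5])
p_r_given_g = estimate_conditional(movie_data)
```

Here `p_r[4]` is 5/10 and `p_r_given_g[('d', 4)]` is 2/3, matching the normalized counts walked through above.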
Question? So if, if instead of R given G we had- the probability P of R, would that cause any differences in the results, like, if the order [NOISE] were changed? Yeah. That's a good question. So the question is, what happens if we define our model in the opposite direction where we have P of R and P of, uh, uh, G given R? So then you would be estimating different probabilities. You would be estimating P of R, which takes on values 1 through 5, and then P of, uh, G given R, which would be, you know, a different table. Do you get the same results? Uh, so the question is, if you do inference, do you get the same results? Um, in this case, um, you will get the same results. In general, you won't. Uh, you're- depending on the, the model you define, you're actually gonna get a different, uh, distribution, um, which will lead to different inference results. Yeah. This happens when you have two variables and you couple them together and you do things this way. You're effectively estimating from the space of all joint distributions over G and R. But if you have, uh, conditional independence and you have, for example, an HMM, the order, how you write the model definitely affects what inference can return. Okay. All right. So now let's add another variable, okay? So, uh, so now we have a variable A, which represents whether a movie won an award or not. Um, we're going to draw this Bayesian network, um, and the joint distribution is P_G, P_A, P_R given A and G, okay? So this type of structure is called a V structure, uh, for obvious reasons, and this remember was the thing that was tricky. This is the thing that leads to explaining away and, um, all sorts of tricky things in Bayesian networks. It turns out that when you're doing learning in them, um, it's actually the- all other trickery goes away. Okay. So, um, let's just do, uh, the same thing as before. So we get this data. Each data point, full assignment to all the variables.
So this is d, 0, 3 corresponds to G equals d and, um, A equals 0, and R equals 3, and the parameters are all again all the local conditional distributions. Um, and now I'm gonna count and normalize. So this part was the same as before, counts in, uh, the genres, d shows up three times, c shows up twice, normalize. Um, A is treated very much the same way. So, um, three movies have won no awards, two movies have won one award. So the probability of a movie winning award is 2 out of 5. And then finally, um, this is the local conditional distribution of R given G and A, and here we have to specify for all combinations of all the variables mentioned in that local conditional distribution. I'm gonna count the number of times that configuration shows up. Um, so d, 0, 1 shows up once right here; d, 0, 3 shows up once, right here and so on, okay? And now when you normalize, you just have to be careful that you're normalizing only over the variable, uh, that you're, uh, the local distribution is on; in this case, R. So for every possible unique setting of G and A, I have a different distribution, okay? So d, 0, that's a distribution over 1 and 3. And if I normalize, I get half and a half and each of these other ones are completely separate because they have different values of G and A. Okay. Any questions? All good? All right. So that wasn't too bad. Um, so now let's invert the V and look at this structure, where now we have genre, there's no award variable, but instead we have two people rating a movie and the Bayesian network looks like this. The genre and we have R_1 given G and R_2 given G. And for now, I'm going to assume that these are different, uh, local conditional distributions. So we'll have PR_1 and P of R_2. So notice that in this lecture, I'm being very explicit about the local conditional distribution. 
Here, instead of just writing P, which can be ambiguous, I'm writing P_G to refer to the fact that this is the local conditional distribution for variable G. Um, this one's the one for R_1, this one for R_2, and those are different. And you'll see later why this, uh, this matters. Okay. So for now, we're gonna go through the same motions. Hopefully, this should be, um, fairly, um, intuitive by now. You simply count, normalize, um, for the P of G and then for R_1, I'm just going to look at the first, uh, or the second element here which corresponds to R_1 and ignore R_2, right? So G, R_1. So d, 4 shows up twice. So I have d, 4, uh, and d, 4, that shows up twice; d, 5 shows up once, that's the d, 5; c, 1 shows up once; and c, 5 shows up once, okay? And then normalize, get my distribution, and then same for R_2. Now ignoring R_1, I'm gonna look at how many times did G equal d and R_2 equal 3, okay? So you can think about each of these as a kind of a pattern that you sweep over the training set. So, um, you have d, 5 so that's a 1 here; and d, 4, that's a 1 here; and d, 3, that's a 1 here and so on and so forth, okay? And then you normalize, okay? How many of you are following this? Okay. Cool. All right. So now things- um, I'm gonna make things a little bit more interesting. [NOISE] So here I've defined different local conditional distributions for each rating variable R_1 and R_2, right? But in general, maybe I have R_3, and R_4, and R_5, and I have maybe a thousand people who are rating this movie. I don't really want to have a separate var- distribution for each person. So in this model, I'm going to define a single distribution over rating conditioned on genre called P_R, and this is where subscripting with the actual identity of the local conditional distribution becomes useful, and this allows us to distinguish P_R from this case, which is P_R1 and P_R2, okay? Notice that the, the structure of the graph remains the same.
So you can't tell from just looking at the Bayesian network. Um, you have to look carefully at the parameterization. Okay. So if I just have one P_R, what I'm going to do is, um, what do you think the right thing to do with this is if you're just following your nose? [inaudible]. I'm sorry? [inaudible]. Yeah, so count both of them, I think is what you're saying. So you combine them, right? So, um, so P of G is the same, and now I'm going- I only have one distribution I need to estimate, uh, here r given g. Um, and I'm gonna count the number of times in the data where I'm using- uh, I have a particular value of g and a particular value of r. Okay? And I'm going to look at both R_1 and R_2 now. Okay? So d, 3 shows up once here. So that's, uh, R_2; d, 4 shows up three times. So once here with R_1, once here with R_1, and once here with R_2, and so on. I'm not gonna go through all the details, and then you count and normalize. Um, another way you can see this is that if I take the counts from, um, um, these two tables, I'm just kind of merging them, adding them together, um, and then, um, normalizing. Question? [BACKGROUND]. If the Rs are IID, are they drawn from the same distribution, or? Yeah, so is this assuming something about independence here? Um, so here when I am doing- I am assuming the data points are independent first of all, and moreover, I'm also assuming the conditional independence structure of the Bayesian network is true. So conditioned on g, R_1 and R_2 are independent. [BACKGROUND] Yes, so here I'm also assuming that R_1 given g has the same distribution as R_2 given g, and this is part of the- when I define the model that way, I'm, you know, making that assumption. [NOISE] Okay, so this is a general idea which I want to call out, which is called, uh, parameter sharing. Um, and parameter sharing is when the local conditional distributions of different variables use the same parameters.
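The merged-count estimate just described can be sketched in a few lines: with one shared table, both R_1 and R_2 feed the same counter. The drama rows below reproduce the combined counts from the lecture (d, 3 once; d, 4 three times; d, 5 twice); the comedy rows are made up for illustration.

```python
from collections import Counter, defaultdict

# (genre, r1, r2) training examples. Drama rows match the lecture's
# combined counts; comedy rows are assumed for illustration.
data = [("d", 4, 5), ("d", 4, 4), ("d", 5, 3), ("c", 1, 2), ("c", 5, 5)]

# One shared table powers both R_1 and R_2, so every (g, r1) pair AND
# every (g, r2) pair increments the same counter.
counts = Counter()
for g, r1, r2 in data:
    counts[(g, r1)] += 1
    counts[(g, r2)] += 1

# Normalize over r separately for each genre g.
g_totals = Counter()
for (g, r), c in counts.items():
    g_totals[g] += c
p_rating = defaultdict(dict)
for (g, r), c in counts.items():
    p_rating[g][r] = c / g_totals[g]
```

For drama there are six rating observations in total, so p_rating["d"] puts 3/6 on rating 4, 2/6 on rating 5, and 1/6 on rating 3, exactly the merged-then-normalized counts.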
So the way to think about, uh, parameter sharing is in terms of, uh, powering. So you have this Bayesian network, it's hanging out here. And behind the scenes, the parameters are kind of driving it, right? And you can think about the, the parameters as these little tables sitting out there, and you connect a table up to a variable if you say this table is going to power, uh, this node. Okay? So, uh, what parameter sharing in this particular example is saying is: this distribution of G powers this node, and these two variables, R_1 and R_2, are going to be powered by the same local conditional distribution. Okay? So, um, okay, I'm gonna go to two [NOISE] more examples. Maybe I'll draw them on the board to make this a little bit more clear. So remember the naive Bayes model, uh, from, uh, two lectures ago. So this is a model that has a variable that represents- adapted to the, uh, movie genre setting. We have a genre variable which takes on values comedy and drama, and then I have a movie review which represents a document with L words in it, and each word is, um, drawn independently given, uh, y. So I have said the joint probability is therefore going to be the probability of genre y times the probability of, uh, w_j given y for all, uh, words j from 1 to L. Okay? So, so the way to think about this, um, graph, the Bayesian network, is, um, you have a variable Y here, and so I'm just gonna draw W_1, W_2, W_3, and, um, I'm going to have a local conditional distribution here which is P_genre of y. So that's some table that's powering this node, and then I have one single other, um, local conditional distribution over, uh, y and w, um, the probability of, what I call it, P_word of w given y, and this distribution powers, um, these variables. Okay? So notice that, um, here there's two, uh, local conditional distributions, we have genre and we have word, even though there are L plus 1, uh, variables in the Bayesian network. Yeah.
[BACKGROUND] Yeah, so the input, what are- so the question is what is the input to, uh, P_word of w given y, uh, so when you apply this to a particular variable, um, in some sense you bind y to Y and you bind w to the particular W_i at hand. [BACKGROUND] [inaudible]. Exactly. And you can see this kind of mathematically, where here we have, um, you pass into P_word, uh, w_j given y, and j ranges from 1 to L. [BACKGROUND]. Okay. Um, just to kind of solidify understanding, uh, let me ask the following question. If, um, y can take on two values as in here and each word can take on, um, d values, how many parameters are there? In other words, how many numbers are in these, uh, two tables on the board? So shout out the answer [BACKGROUND] to the [inaudible] right. Okay, okay. So 2d plus 2, so there's two here, right, so there's c, d. So there's two numbers, and then, for c, there's d possible values, and for d, there're d possible values. So there's 2 plus, uh, so there's two here, and then there's, uh, d plus d here. Okay? Now if you really want to be fancy and count the number of parameters you really need, it should be a little less, because if I specify the probability of c, then you can, uh, 1 minus that probability is the probability of d, but let's not worry about that. [BACKGROUND] Okay, let's do the same thing for HMMs, um, just to make sure we're on the same page here. Since we all love HMMs, um, okay. So in an HMM, remember there is a sequence of hidden variables H_1, H_2, H_3. Um, and for each hidden variable, we have E_1, um, E_2, and E_3, which are the observed variables. And, um, there are three types of, uh, distributions for HMMs. There is, um, the probability of, um, the- I'm gonna call it P_start of h, which is gonna specify the initial distribution over H_1. I'm gonna have the transition probabilities, which I'm gonna denote, um, h-, um, it's called h prime. Probability of h prime- oh, I should write down P_trans here- h prime given h.
So this is going to be another distribution which powers each, uh, transition. Uh, each non-initial hidden variable, okay. Remember, the- these are pointing to variables, not edges or anything else. And finally, I'm going to have, um, a distribution, the emission distribution of e given h. Um, that table is going to power, um, each of the, uh, observed variables e, okay. And here, [NOISE] again, there are three types of distributions: start, transition, and emission, even though there could be any number of, uh, variables in the actual Bayesian network. Okay. And just to be very clear about this, when I apply this table to this node, h binds to, um, H_1 and h prime binds to H_2. And then when I apply it to this node, H_2 binds to h and h prime binds to H_3. And again, you can see this from, you know, formulas where I'm passing in to P_trans, h_i given h_i minus 1, as i sweeps from 2 to, um, n. Okay, so you can think about this as like a little function with local variables; the arguments to that, um, that function are h and h prime. But when I actually call this function, so to speak, when defining the joint distribution, I'm passing in the actual variables of the Bayesian network. Okay, any questions about this? Okay, maybe just to summarize some concepts first. Okay. Um, so we talked about learning as, generically, the problem of how we go from data to parameters. And data in this case is full assignments to all the random variables. In the HMM case, a data point is an assignment to all the hidden variables and the observed variables. And the parameters, usually denoted Theta, are all the local conditional distributions, which are these, uh, three tables in the case of the HMM. Okay. Um, the key, um, intuition is count and normalize, um, which is intuitive, and later I'll justify why this is an appropriate way to do parameter estimation.
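As a concrete sketch of the three shared HMM tables just described: the same P_start, P_trans, and P_emit tables power every variable, and the joint probability just multiplies them in, once per binding. The numerical tables below are made up for illustration; only the structure comes from the lecture.

```python
# Hypothetical HMM tables (the numbers are assumptions, not from the
# lecture): two hidden states A/B, binary observations 0/1.
p_start = {"A": 0.6, "B": 0.4}
p_trans = {("A", "A"): 0.7, ("A", "B"): 0.3,
           ("B", "A"): 0.4, ("B", "B"): 0.6}
p_emit = {("A", 0): 0.9, ("A", 1): 0.1,
          ("B", 0): 0.2, ("B", 1): 0.8}

def joint(hs, es):
    """P(H, E) = P_start(h1) * prod P_trans(h_i | h_{i-1}) * prod P_emit(e_i | h_i)."""
    p = p_start[hs[0]]
    for prev, cur in zip(hs, hs[1:]):
        p *= p_trans[(prev, cur)]   # same shared table at every time step
    for h, e in zip(hs, es):
        p *= p_emit[(h, e)]         # same shared table at every time step
    return p
```

Calling joint(["A", "A"], [0, 1]) multiplies p_start("A") by one transition and two emissions, which is exactly the "binding" the lecture describes: h and h prime are local names that get bound to H_1, H_2, and so on.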
Um, and then finally parameter sharing is this idea which allows you to define huge, uh, Bayesian networks but, um, not have to blow up the number of parameters because you can share the parameters between the different variables, okay. So now, let's talk about the general case. Um, so hopefully, you kind of understand the basic intuition. So this is just going to be, um, some abstract notation that's gonna kind of sum it up, um. So in a general Bayesian network, we have variables x_1 through xn. And we're gonna say that parameters are a collection of distributions. So Theta equals P_d, okay. And so big D is going to be, um, the set of types of distributions. So in the HMM case, it's going to be three types: start, trans, and emit. So basically, that's the number of the- these little boxes that I have on the board there. Um, and the joint distribution when you define a Bayesian network is going to be the product of P of X_i given x parents of i. So this is the same as we had before. But now notice the crucial difference which I've out- outlined in red here is that I'm subscripting this p with, um, a d_i, okay. So what d_i for the i'th variable says is which of these distributions is powering that variable, okay? So d of, uh, this variable is emit, d of this variable is, uh, transition, and d of this variable is start, okay. So this looks maybe a little bit abstract, um, um, notationally, but the idea is just to multiply in the probability is, uh, that, uh, was used to generate that variable or power that variable. Okay. And parameter sharing just means that the d_i could be the same for multiple i's. Yep? In a Markov model case where the emission probabilities are all the same for all variables. Like why do we need multiple emission distributions, like, wouldn't be the same as just drawing a emission distribution different? Yes, so the question is if we only have one emission, uh, distribution, why do we need so many of these replica copies? 
Um, and the reason is that, these variables represent, um, the objects' locations at a particular time. So the value of this is gonna be different based on what time step you're at. But the mechanism for generating that variable is the same. Just like if I flipped, a coin, you know, 10 times, I only have one distribution that represents the probability of heads, but I have 10 realizations of that random variable. Um, another analogy that might be helpful is think of probability of these as, like, a parameter- of, like, functions in your program that you can call, right. This is like a sorting function. And sorting is just used in a whole bunch of different places, but it's kind of the same- kind of local function that, uh, powers a bunch of, uh, different, um, use cases which are specific to the context. Yeah. Okay, so in this general notation, what does learning look like? So the training data is, um, a set of full assignments and I want to output the parameters. So here's the, the basic form, it's count and normalize. So in counting, um, there's just a bunch of for loops. So for each, um, training example, which is x is a full assignment, I'm gonna look at every, um, variable. I'm going to just increment a counter which, uh, of the di'th, um, uh, distribution of this particular configuration, uh, x parents i and xi, okay. And then in the normalization step, I'm going to consider all the different types of, um, distributions. And then I'm gonna consider all the possible local assignments to the parents, and I'm going to normalize, um, that distribution. Yeah? [inaudible] of d_i the right table we're looking, correct? Yeah, so the d_i, uh, refers for the i'th variable, which red table I'm looking at. [NOISE]. Okay. So, um, I've given you already a bunch of examples. So hopefully this, um, the notation might be a little bit abstruse, but hopefully you guys already have the, you know, intuition. Um, any questions? We're moving on. 
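The generic counting and normalization loops just described can be sketched as follows. The tiny two-parent-free, two-rating model and its data are assumptions for illustration; the algorithm itself (increment count[d_i][(parent values, own value)], then normalize per parent assignment) is the one on the slide.

```python
from collections import Counter, defaultdict

# Each variable i has a parent list and a distribution type d_i;
# variables with the same d_i share one table (parameter sharing).
parents = {"G": [], "R1": ["G"], "R2": ["G"]}
dist_of = {"G": "genre", "R1": "rating", "R2": "rating"}  # the d_i map

# Hypothetical training data: full assignments to all variables.
data = [{"G": "d", "R1": 4, "R2": 5}, {"G": "c", "R1": 1, "R2": 2}]

# Counting: for each example x and each variable i, increment
# count[d_i][(x_Parents(i), x_i)].
counts = defaultdict(Counter)
for x in data:
    for i, pa in parents.items():
        counts[dist_of[i]][(tuple(x[p] for p in pa), x[i])] += 1

# Normalizing: within each distribution type, normalize over the child
# value separately for each parent assignment.
theta = defaultdict(dict)
for d, table in counts.items():
    parent_totals = Counter()
    for (pa, v), c in table.items():
        parent_totals[pa] += c
    for (pa, v), c in table.items():
        theta[d][(pa, v)] = c / parent_totals[pa]
```

Because R1 and R2 both map to the "rating" type, their observations land in one shared table, and G's table is keyed by the empty parent tuple.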
The main point of this slide is just to say that this is actually very general; I just did it for hidden Markov models and naive Bayes, but it works for anything else. Okay. So now let me come back to the question of, you know, why does count and normalize make sense, right? So count and normalize is just like, you know, some made up procedure. So why is it a reasonable thing to do? And it turns out this is actually based on very firm kind of foundational principles, this idea of maximum likelihood, which is an idea from statistics, um, that says if I see some data and I have, uh, a model over that data, I want to tweak the parameters to make the probability of that data as high as possible, okay? Uh, so in symbols, what this looks like is I want to find the parameters Theta. So remember parameters are probabilities in the, uh, red tables on the board, um, and then I'm going to look at- make the product, over all the training examples, of the probability of that assignment. So every possible setting of the parameters assigns particular probabilities to, um, my training examples, and I want to make that number as high as possible, okay? And the point is that the algorithm on the previous slide exactly computes these maximum likelihood, uh, parameters in closed form, which is really nice. If you think about when we talk about machine learning, we define a loss function and you cannot compute anything in closed form except for maybe linear regression, and you have to use stochastic gradient descent to optimize it. Here, it's actually much simpler because of the way that the model is set up. You just count and normalize, and that is actually, uh, the optimal answer. So just because you write down a max, it doesn't always mean you have to use gradient descent, is the, is the lesson here. Um, okay.
So let me- I'm not gonna prove this in generality, but I want to give you some intuition why, uh, this is true and hopefully connect the, the kind of the, the abstract principle with, uh, on the ground, um, algorithm that you- of count and normalize. So suppose we have, um, uh, two variable Bayesian network with genre and rating. So I have three data points here and I have this maximum likelihood principle, which I'm, you know, going to follow and let's do some algebra here. So I'm gonna expand the joint distribution, and remember joint distribution is just the product of all the local conditional distributions, um, and I've also expanded this, you know, the product over these three instances. So I have the probability of d, probably of 4 given d, um, probability of d here, probably of 5 given d, um, probably of c, and probably of 5 given c, okay? And what I'm maximizing over here right now is a distribution over genre and I have a distribution of rating conditioned on genres c and the distribution of rating conditions genre equals d. Okay. So, um, I've color-coded these in a certain way to emphasize, um, that the- all the red touches is, is the local conditional distribution correspond to genre, the blue is corresponds to, um, probability of rating given genre equals d and green is probability of rating given genre equals c, okay? So now I can shuffle things around, okay? And I notice, um, that these factors don't actually depend on probability of g at all. So they can just hang out over here. And I've essent- and the- and this likewise, um, if I'm thinking about the maximum over, uh, you know, argument c, these other factors are just constants. So they don't really matter either. So I've basically reduced this as a problem of three independent maximization problems, okay? And this is why I could take each local conditional distribution in turn and do count and normalize on each one separately. Um, okay. 
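The claim that count-and-normalize maximizes each factor can also be checked numerically (this check is my addition, not from the lecture). With observed genres d, d, c, the part of the likelihood touching P(G) is q squared times (1 minus q), where q is P(G = d); count-and-normalize says the maximizer is q = 2/3, and a simple grid search over [0, 1] agrees.

```python
# Grid-search the G-part of the likelihood for the data d, d, c:
# L(q) = q^2 * (1 - q), where q = P(G = d).
best_q = max((i / 1000 for i in range(1001)),
             key=lambda q: q * q * (1 - q))
```

The grid's best point sits next to 2/3, which is exactly the count-and-normalize answer (2 drama examples out of 3).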
And then the rest is actually, um, you know, to do- to do do it actually properly, you have to use Lagrange multipliers, um, to, to solve it. But, um, intuitively, uh, hopefully you can either do that or just believe that if I ask you, um, what is the probability, best way to set probabilities so that this quantity is maximized, I'm going to set, um, probability of d equals two-thirds and probability of c equals one-third. [NOISE] This is similar to one of the questions on, uh, the foundations homework if you, uh, remember, but only for probability of, um, uh, a coin flip essentially. Okay. So hopefully some of you can now, uh, rest, uh, you know, sleep at night thinking that if you do count and normalize, you're actually obeying some, um, high-minded principle of maximum likelihood. Okay. Um, so let's talk about Laplace smoothing. So here's a scenario. If I have given you a coin and I said I don't know what's the probability of heads, but you flip it 100 times and 23 times it's heads and 77 times it's tails. What is the maximum likelihood, uh, estimate going to be? Yeah. So it's going to be probability of heads is, you know, count and normalize, uh, it's 23 over 100, which is 0.23, probability of tails is 0.77, okay? So it seems reasonable, right? Um, so what about this? So you flip a coin once and you get heads, what's the probability of heads? So the maximum likelihood says 1 and 0. So, you know, some of you are probably thinking, you know, smiling and you're probably thinking, "This is a very, uh, closed-minded thing to do, right?" Just because you saw one heads, it's like, "Oh, okay. The probability of heads must be 1. Tails is impossible because I never saw it." So it seems pretty, um, you know, foolish, um, and intuitively you feel like, "Well, tails might happen, you know, sometimes. Um, so it shouldn't be as stark as 1, 0." Okay. 
And this is an example of overfitting, which we talked about in the machine learning lecture, where maximum likelihood will tend to just maximize the probability, and in here it does maximize the probability because the probability of data is now 1 and you can't do better than that. But this is definitely overfitting. So we want, uh, a more reasonable estimate. So this is where Laplace smoothing comes in, and again I'm gonna introduce Laplace smoothing, uh, from kind of a follow your nose kind of framework and then, um, I'll talk about why it might be a good idea. Okay. So here's maximum likelihood, um, just the number of times heads occurs over the total number of trials you have. So Laplace smoothing is, um, just adding some numbers. Um, so, uh, La- this is Laplace named after the famous French mathematician who, um, did a lot more than add numbers, um, like the Laplace transform and Laplace distribution. But we're only going to use or talk about his, um, adding numbers, um, invention I guess. Um, so, so here in red, I'm shown that for this probab- estimate, no matter what the data is, I'm just gonna add a 1 here. I don't care what the data looks. I'm just gonna add a 1 and we're gonna divide by the total number of values, uh, which are possible, which is 2, heads or tails. And for tails, I'm also gonna add a 1, I don't care what the data says, um, and I'm gonna divide by 2, okay? So now I get two-thirds and one-third, which should be a more, you know, sensib- intuitively sensible estimate if you're gonna come up with any sort of estimate. It says, "Well, I saw heads, so probably more than 50% is gonna be heads. But, um, it's probably not, you know, 100%." Um, okay. So let's look at it in a slightly, uh, more complicated setting. So here I have two variables and, um, Laplace smoothing is driven by a parameter Lambda, um, which by default is going to be 1, but it can be any number, um, uh, non-negative number. 
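A minimal sketch of the add-Lambda estimate on the coin example (the helper name laplace_estimate is mine, not from the lecture):

```python
from collections import Counter

def laplace_estimate(counts, values, lam=1.0):
    """Add lam to every value's count, then normalize."""
    total = sum(counts.get(v, 0) for v in values) + lam * len(values)
    return {v: (counts.get(v, 0) + lam) / total for v in values}

# One flip, one heads: maximum likelihood would say P(H) = 1, but with
# lambda = 1 we get the softer estimate 2/3, 1/3.
p = laplace_estimate(Counter({"H": 1}), ["H", "T"], lam=1.0)
```

On the earlier 100-flip example, the same function gives (23 + 1) / 102 for heads, barely different from the maximum likelihood 0.23, which is the "data wins out" behavior.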
Um, and what Laplace smoothing amounts to is saying: start out with your tables, and instead of filling them up with 0, fill them in with Lambda, okay? So here's my table. Before I even look at the data, I'm just gonna put 1s down. And then when I look at my data, I'm just going to add to that counter. So d shows up twice, c shows up once, and then count and normalize, okay? Same over here. Before I look at any data, I'm just gonna populate with ones, which is Lambda, and then I get my three data points and I add the three counts, uh, which are shown here, and I count and normalize. So by construction, no, uh, event should have probability 0 unless Lambda is 0. Because I already start with 1, and 1 divided by some positive number is not 0, okay? So the general idea is, uh, for every distribution and partial assignment, um, I'm going to add Lambda to that count, um, you know, and then I'm just going to normalize, uh, to get the probability estimates, okay? So an interpretation of Laplace smoothing is you're essentially, you know, um, pretending that you saw Lambda occurrences of every local assignment even though you didn't see them in your data. You're hallucinating this, this data; sometimes these are called pseudocounts or virtual counts. Um, and the, uh, higher Lambda is, the more hallucination or more smoothing you're doing, and that will draw the probabilities closer to the uniform, you know, distribution, and the other extreme, Lambda equals 0, is simply maximum likelihood. But in the end, you know, data, uh, wins out. So if you had, um, Laplace smoothing and you saw one head, and, uh, now your estimated probability is two-thirds. But suppose you keep on flipping this coin and it keeps on coming up heads 998 times, then by doing the same Laplace smoothing, you're going to eventually update your probability to 0.999. You're never gonna reach 1 exactly because no probabilities are ever going to be 0 or 1.
But increasingly, you're gonna be much more confident that this is a very, very rigged coin. Okay. Any questions about Laplace smoothing? [NOISE] So Laplace smoothing is [NOISE] just, um, add Lambda to everything, essentially. And oh, so, so I forgot- the principle behind Laplace smoothing is, if you think in terms of, um, this is, kind of, beyond the scope of this course, but you can think about a prior distribution over your parameters, um, which is, um, some uniform distribution. And, um, instead of doing maximum likelihood, you're doing, um, MAP, or, uh, maximum a posteriori estimation. So there is another principle, but you don't have to worry about it for this class. Yeah? [inaudible] make Lambda? The question is how big do you make Lambda? Um, it's a good question, so one is- Lambda should be small. It probably shouldn't be like 100. Um, it probably should be more like 1 or even- you can make it less than 1, maybe it should be like 0.01 or something. It depends on, um, you know, how, I guess, uncertain, you know, you are. So in general, these priors are meant to capture what you know about the, um, the, kind of, the distribution at hand. Okay. All right. So now we get to the fun part; this is going to, in some sense, combine learning, which we've just done, with probabilistic inference, uh, which we did before. And the motivation here is, what happens if you don't observe some of the variables? So far, learning has assumed that we observe complete assignments to all the random variables, but in practice, this is just not true, right? The whole reason people call them hidden Markov models is that you don't actually observe the, the hidden states. So what happens if we only get the observations? Or the simpler example, what happens if the data looks like this, where we observe the ratings but we don't observe the genres? So what can you do? So obviously, the count-and-normalize thing doesn't work anymore because you can't count with question marks.
So, um, there are, again, two ways to think about what to do here. The high-minded principle is to appeal to maximum likelihood and make it work for, uh, unobserved variables. Um, and the other way to think about it, which I'll come to later, is simply guess what the latent variables are and then do count and normalize. Okay? And those- these two are gonna be equivalent. So let's be high-minded for now and think about, um, maximum marginal likelihood. Okay? So in general, we're going to think about H as the hidden variables and E as the observed variables. In this case, G is hidden, and R_1 and R_2 are observed. And suppose I see some evidence E, so I see R_1 equals 1 and R_2 equals 2, but I don't see the value of G. And again, the parameters are all the same, the same as before. I just have less data or information to estimate the parameters. Okay. So if you're following maximum likelihood, what does maximum likelihood say? It says tweak the parameters so that the likelihood of your data is as large as possible. So what is the likelihood of the data here? It's simply, instead of H and E, I have the probability of E. Okay? So this seems like a, kind of, a very natural-sounding extension of maximum likelihood, um, and it's called, um, maximum marginal likelihood. [NOISE] Um, because, um, this [NOISE] quantity is a marginal likelihood, right? It's not a joint likelihood or a joint distribution, it's a marginal distribution over only a subset of the variables. Okay. So now, to unpack this a bit, um, [NOISE] what is this marginal distribution? It's actually, uh, by the axioms of probability, the summation over all possible values of the hidden variables of the joint distribution. So in other words, you can think about maximum marginal likelihood as saying I wanna change the parameters in such a way that, um, the probability of, um, what I see is as high as possible, but what that really requires me to do is think about all the possible values that, um, H could take on.
I don't see it, so I have to consider what ifs: what if it were C? What if it were D, and so on? Okay? So in other words, fundamentally, if you don't observe a variable, you have to consider possible values of it. All right. So now, let's, uh, skip to the other side, the, kind of, scrappy way, and think about what is a reasonable algorithm that makes sense. And I'm not gonna have time, uh, in this course to, kind of, show the formal connection, but if you take, you know, CS 228 or a graphical models class, um, you'll go into this in much more detail. Um, so the intuition here is, um, the same as what we had for k-means. So remember in k-means, you tried to do clustering, you don't know the, the centroids, and you also don't know the assignments. So it's a chicken and egg problem. So you go back and forth between, uh, figuring out what the centroids are and figuring out what the, the assignments are. And the centroids are going to be an analog of parameters here, and the assignments are going to be the analog of the hidden variables. Okay? So here's, here's the algorithm. Um, it's called Expectation-Maximization. It was, you know, formalized, um, in its generality in the- in the 70's, and you start out with some parameters, um, maybe initializing to uniform or uniform plus a little bit of, uh, noise. And then it's gonna alter- we're gonna alternate between two steps: the E-step and the M-step. Um, and if you- it's useful to think about k-means in your head while you're going through this algorithm. So, um, in the E-step, we're going to guess what the hidden variables are or what values the hidden variables take on. So we're going to define this q of h which is going to be, uh, or represent our guess. And since we're in probability land, we're going to consider the distribution over, uh, possible values of h. And this guess is going to be given by our current setting of parameters and the, the evidence that we've observed and are fixing.
So we're asking the question: what is the probability of the hidden variables taking on particular values of h, given my data and given my parameters? So this should- this should look, kind of, familiar to you, right? What is this? This is just probabilistic inference, right? Which we've been doing for, um, the last lecture. Which means that you can use any probabilistic inference algorithm here; you can do forward-backward, you can do Gibbs sampling, um, and that's, kind of, some module that you need to do, um, EM. Okay? Okay, so now what happens if we have our, uh, setting of- or guess over the possible values of H? Now, we, um, make up data. Um, so we create weighted points, um, of- with particular values of H and E, and, um, each of them gets some weight, uh, q of h. Um, and then finally, once we're given these weighted points, now we're just going to, uh, do count and normalize and do maximum likelihood. [NOISE] Okay? So I'm gonna walk through an example, um, to make this a little bit more grounded. Um, so I'm gonna do this on the board because it's [NOISE] gonna get a little bit, um, maybe a little bit hairy. Um, okay. Let's, let's do this. So here, we have a three-variable, uh, Bayesian network. So we have- um, let's draw it over here. Actually, I might need space. So we have G, we have R_1 and [NOISE] R_2. Okay? And our data that we see is, ah, we get [NOISE] data which is, um, 2, 2 and 1, 2. Okay? So I observed, um, 2, 2, that's one data point, and, uh, 1, 2, that's another data point. Okay? Um, so initially, what I'm going to do is, um, start with some setting of parameters. Okay? So I'm going to start with my parameters Theta, [NOISE] which specify a distribution over, uh, g. And for, [NOISE] um, lack of information, [NOISE] I'm going to consider [NOISE] just half and half. Let me write 0.5 to be consistent. [NOISE] So I'm initializing it with a uniform distribution, and my other table here is going to be, um, g and r.
So this is the probability of r given g. And here I have the possible values of g, c and d, and the possible values of r, which for simplicity I'm going to assume are just 1 and 2. And for these numbers I have 0.4, 0.6, and 0.6, 0.4. So I'm not setting them to uniform; I'm adding a little bit of noise. Hopefully people can check that the numbers on the board are the same as the numbers on the slides. Okay, so that's my initial parameter setting. So now, in the E-step, I'm going to use these parameters to guess what g is. Okay? So what does this look like? For each data point — so I have 2, 2, that's one data point — I'm going to try to guess what the value of g is. It could be c, or it could be d. And for the other data point, 1, 2, it also could be c or it could be d. Okay? And now I'm going to try to compute a weight for each of these data points based on the model. So look at this — this is cool, right? Because I have a complete assignment, and you know what to do with complete assignments: I can just evaluate the probability of that assignment. Okay? So just to make things concrete, I have the probability of g times the probability of r_1 given g times the probability of r_2 given g. This is, by definition, the joint distribution of this Bayesian network. Okay? So how do I compute the probability of this configuration? Well, it's the probability of g equals c — I look at this table, that's 0.5 — times the probability of r_1 given g, and that's a 2 and a c, so if we look over here, that's 0.6.
And then another r_2 given g — that's also a 2 and a c, so that's another 0.6 because of parameter sharing. And that's 0.18, right? Let me give you guys the slide so you can check my work. Okay. So now I look at this table. The probability of g equals d is 0.5, and then the probability of r_1 equals 2 given g equals d — if we look in this table, that's 0.4. And then I have another 0.4 from the other reading, and that is 0.08. And then I can normalize this. Question? [inaudible] versus 0.6 for c, 2? So why is this 0.4 versus 0.6 here? This is because I initialized the parameters so that the probability of r equals 2 given g equals d is 0.4. So I guess I'm wondering why you used that initialization? Why did I use this initialization? Just for fun — I just made it up. If I had put 0.5's everywhere, you couldn't tell what was going on; and also, as a side remark, it wouldn't work. So initialization is always random? Initialization is generally going to be random, but as close to uniform as possible. Yeah. Thank you. Okay. So this column is going to be the probability of, basically, G equals g, R_1 equals r_1, R_2 equals r_2. Okay? And then q is just going to be the result of normalizing these two weights, which means you add them up and divide, and then you get 0.69 and 0.31. Sorry, the numbers are a little awkward, but that's what you get there. I'm not going to do the second row, but it's basically the same type of calculation. Question? [inaudible] kind of like particle filtering, in that in each step you do the proposal and then the weighting, and then the M-step is like you're resampling, finding the maximum?
So the question is: is Expectation-Maximization kind of like particle filtering, because you have this proposal and you're reweighting? Structurally, it's quite different. There is a sense in which you're proposing different options, but the two algorithms are meant to solve very different tasks: one is for learning and one is for probabilistic inference. Yeah. Okay. So let's look at the M-step now. I'll let you guys fill this in — actually, I want to keep this up, so let me do the M-step over here. We're now just going to take these examples — so for the second row, these weights are 0.5 — and it's like someone handed you fully labeled data: complete observations, four points, but each observation has a weight. So now the only difference when we count and normalize is that instead of just adding 1 whenever you see a particular configuration, you're going to add the weight. Okay. So for each of the parameters — this is going to be the probability of g, which can be either c or d — let me first do the count. Well, okay, let me just add it here. So I look at my data — let me mark this. This is now my "data"; I'm going to put data in quotes because I didn't actually observe it, I just hallucinated it. And these are the weights associated with the data points. So now I'm going to look at g equals c. I see a c here and a c here, and I have the weights 0.69 plus 0.5 — I just count. And then for d: I see a d here and a d here, that's 0.31 plus 0.5. And then I normalize this and I get my estimate. Okay. So on the slide, this is exactly what I did here: I count and normalize.
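The E-step and M-step just worked on the board fit in a few lines of Python. This is my own sketch, not the lecture's code — the names `p_g`, `p_r`, and `q` are made up — but the numbers match the board: weights 0.18 and 0.08 for the first data point, normalizing to roughly 0.69 and 0.31, and 0.5 each for the second.

```python
# One EM iteration for the board example: G -> R1, G -> R2 with a shared
# emission table, hidden G in {'c', 'd'}, observed data (2, 2) and (1, 2).
p_g = {'c': 0.5, 'd': 0.5}                    # initial p(g), uniform
p_r = {('c', 1): 0.4, ('c', 2): 0.6,          # initial p(r | g), shared
       ('d', 1): 0.6, ('d', 2): 0.4}          # across R1 and R2
data = [(2, 2), (1, 2)]

# E-step: weight each value of g by the joint p(g) p(r1|g) p(r2|g),
# then normalize per example to get the guess q(g).
q = []
for r1, r2 in data:
    w = {g: p_g[g] * p_r[(g, r1)] * p_r[(g, r2)] for g in p_g}
    z = sum(w.values())
    q.append({g: w[g] / z for g in w})

# M-step: count with weight q instead of 1, then normalize.
counts_g = {g: sum(qi[g] for qi in q) for g in p_g}
z = sum(counts_g.values())
p_g_new = {g: counts_g[g] / z for g in counts_g}
```

Running this reproduces the board: q for the first example is 0.18 and 0.08 normalized, and the new p(g = c) is (0.69 + 0.5) over 2, about 0.60.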
And I'm not going to do this table, but hopefully you get the idea. Yeah. [inaudible] estimate for all possible assignments of h? Yeah, so in the E-step you have to estimate for all possible assignments to h. In this case, h is only one variable. In the next example, h is going to be all the hidden variables in a hidden Markov model, and we're going to see how we can deal with that. Yeah? Can you explain briefly how we get the values in the second table? The second table — this one? Yes. Sure, just briefly: you look at these data points and you see c, 1. So where does c, 1 show up? g equals c here and r_1 equals 1 here, so that's a 0.5 here. And what about c, 2? c, 2 shows up twice here, with r_1 and r_2 — that's why there are two 0.69's here — and c, 2 shows up once here with a weight of 0.5. So everything in the [inaudible]. Yeah, everything here — these counts are based on the table from the E-step there. Okay? All right. So let's do something a little bit fun now. The Copiale cipher is this 105-page encrypted volume that was discovered and dated back to the 1730s. It looks like this. It's unreadable because it's actually in a cipher; it's not meant to be just read. And for decades, people were trying to figure out what the actual message inside this text was — it's a lot of text. And finally, in 2011, some researchers actually cracked this code, and they used EM to help do it. So I'm going to give you a kind of toy version of using EM to do decipherment. It turns out this text is basically some book from a secret society, which you can go read about on Wikipedia if you want. Okay, so substitution ciphers.
A substitution cipher consists of a substitution table which specifies, for every letter — assume we have only 26 right now — a cipher letter. Okay? And the way you apply a substitution table is you take a plaintext, which is generally unknown if you're trying to decipher — let's say "hello world" — and you look up each letter in the substitution table and write down the corresponding thing. So h maps to n, so you write n; e maps to m, so you write down m; and so on. So someone did this, and obviously they didn't give you the plaintext; they gave you the ciphertext. And they didn't give you the substitution table either. So all you have is the ciphertext, and you're trying to figure out both the substitution table and the plaintext. Okay? So hopefully this pattern-matches something: let's try to model this as a hidden Markov model. Here, the hidden variables are going to be the characters of the plaintext — the actual sequence, the "hello world" — and the observations are the characters of the ciphertext. So, familiar equation: the joint distribution of the hidden Markov model is given by this equation, and we want to estimate the parameters, which include p_start, p_trans, and p_emit. Okay, so how do we go about doing this? We're going to approach this in a slightly different way than simply running the algorithm, because we have additional structure here. The probability of start — I have no idea, so we just set it to uniform. The probability of transition — this is interesting. Normally, if you're doing EM, you don't know the probability of transition. But because we know, let's say, that the underlying text was English, we can actually estimate the probability of a character given the previous character from just English text lying around.
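As a quick illustration of applying a substitution table, here is a tiny sketch of my own. Only the h-to-n and e-to-m mappings come from the lecture's example; the rest of the table is invented for illustration.

```python
# A substitution table mapping each plaintext character to a cipher
# character. Only 'h' -> 'n' and 'e' -> 'm' are from the lecture;
# the remaining entries are made up.
table = {'h': 'n', 'e': 'm', 'l': 'x', 'o': 'q',
         'w': 'z', 'r': 'a', 'd': 'b', ' ': ' '}

def encipher(plaintext, table):
    # Look up each character and write down the corresponding cipher letter.
    return ''.join(table[ch] for ch in plaintext)

ciphertext = encipher('hello world', table)   # -> 'nmxxq zqaxb'
```

Decipherment is the inverse problem: given only `ciphertext`, recover both `table` and the plaintext, which is what the HMM formulation below is for.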
So this is cool, because it allows us to hopefully simplify the problem. I should maybe comment that, in general, unsupervised learning is really, really hard. Just because you can write down a bunch of equations doesn't mean it will work — it's very hard to get it to work — so you want to get as much supervision or information as you can. Okay. So finally, p_emit is the substitution table, which we're going to derive from EM. Okay? Now, this might not type-check in your head, because a substitution table specifies one cipher letter for every letter; but in order to fit into this framework, we're going to think about a distribution over possible cipher letters given a plaintext letter — with the intention that, if it's doing its job, it can always put probability 1 on the actual cipher letter. Okay? So, stepping back from the formulas: why might you think this could work? In general, it's not obvious that it should. But what you have here is a language model, p_trans, which tells you what plaintext looks like, right? If you guessed a cipher, applied it, and got some garbage, then it's probably not right. So we have some information about what we're looking for — like when you solve a puzzle, you know when you've solved it. And then, finally, we also have this emission distribution that tells us that each letter has to be substituted in the same way: you can't have E going to completely different things at different points in time. Okay. So, for doing estimation in an HMM, we're going to use the EM algorithm. And remember, in the E-step, we're just doing probabilistic inference — and we saw that forward-backward gives us probabilistic inference.
In particular, for every position it gives me my guess, which is a distribution over the possible hidden values. So remember, at each position I observe a cipher letter, and I'm going to guess a distribution over the plaintext letter. Okay? And then in the M-step, I'm just going to count and normalize, as we did before. So once I've guessed the plaintext letters, I can just compute the probability of some cipher letter given a plaintext letter. Okay? So I'm actually going to code this up so you can see it in action. Okay, I only have five minutes here, so let's make this quick. So here we have the ciphertext — okay, looks pretty good — and we're going to decrypt it, or decipher it. And then we also have our text, which is just some English text that we found. And then I'm going to — whoops, I want to decipher, not encipher. Okay. So there are some utilities which are going to be useful: things for reading text, converting it into integers, normalization of weights, and, most importantly, an implementation of the forward-backward algorithm, which we're going to use — because I'm not going to try to write that in five minutes. Okay. So let's initialize the HMM, and then later we're going to run EM on it. So for the HMM parameters, there's going to be the start probability — p_start in our notation — and this is going to be one over the number of possible letters here. So this is just a uniform distribution, 1 over K. And I should say K is 26 plus 1 — lowercase letters plus space. Okay. So now we have our transition probabilities, and this is going to define a distribution over the next hidden variable given the previous hidden variable.
And notice that the order is reversed here, because I want to first condition on h_1, and then the inner thing is going to be a distribution. So to do this — remember my strategy — I'm going to use the language model data to count. So I'm going to set things to 0, for h_2 in range K, for h_1 in range K; this basically gives me a K-by-K matrix of all zeros. And now I'm going to get my raw text, and I'm going to read it from lm.train, which, remember, looks like this, and convert it into a sequence of integers from 0 to K minus 1. And then how do I estimate the transitions? Again, count and normalize. So I'm going to go through this sequence and count every successive transition from one character to another. At position i, I have a character, which is rawText of i; at position i plus 1, I have another character, h_2; and I'm just going to increment this count. Okay? And finally, I'm going to normalize this distribution. So transitionProbs equals — for every h_1, I have the transition counts of h_1, and I can call the helpful normalize function, which takes this distribution and normalizes it. Okay? So this is just doing fully observed maximum likelihood — count and normalize — on the plain English text. Okay? All right. So now emissionProbs — this is going to be the probability of e given h, and I'm going to initialize these to uniform because I don't know any better. So 1 over K, for e in range K, for h in range K. It just so happens that both the hidden variables and the observed variables have the same domain; that's not generally the case. Okay? So now I'm ready to run EM. We're going to do 200 iterations — I just put in a number — and we have the E-step and the M-step.
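The count-and-normalize estimate of the transition probabilities can be sketched like this. This is my own toy version operating on characters directly rather than integer codes, and the training string is made up; it is not the lecture's lm.train code.

```python
from collections import Counter

# Estimate p(h2 | h1) from plain English text by counting successive
# character pairs (bigrams) and normalizing each row -- fully observed
# maximum likelihood, as with transitionProbs in the lecture.
raw_text = 'the cat ate the hat'        # toy stand-in for lm.train
counts = Counter(zip(raw_text, raw_text[1:]))   # bigram counts

transition_probs = {}
for (h1, h2), c in counts.items():
    transition_probs.setdefault(h1, {})[h2] = c
for h1, row in transition_probs.items():
    z = sum(row.values())               # normalize each conditional row
    transition_probs[h1] = {h2: c / z for h2, c in row.items()}
```

On this toy text, 't' is followed by 'h' in two of its four occurrences with a successor, so p('h' | 't') comes out to 0.5.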
So in the E-step, remember, I'm going to do probabilistic inference: I'm going to call forwardBackward. And this is going to give me, basically in our notation, at position i, my distribution over hidden states. So this is forwardBackward — oh, I need to read some observations: I read my ciphertext and convert it again into an integer array. Then I have the observations, I pass in the parameters of my hidden Markov model, and I have my guess. So let me print out what that guess is. What I'm going to do here is, for every position, get the most likely value of h: util.argmax of q_i, for i in range of the length of the observations. Okay? That gives me an array of guesses, one per position, and then I'm going to convert that into a string so it's easier to look at, and put it on one line. So this is printing the best guess. Okay. So finally, for the M-step, I have my q, which gives me weights. So now I pretend I have a lot of data weighted by q. I'm going to have emissionCounts equals — same as before, I'm just going to get a matrix of zeros, for e in range K, for h in range K. Then I'm going to go through the sequence, for i in range of the length of the observation sequence, and for each position in the sequence, I'm going to consider all the possible plaintext hidden values h, and I'm going to increment the count for h and observations of i — this is the observation that I actually saw — and the increment should be q_i of h, which is the weight of h at position i. Okay? And then, finally, I'm just going to normalize: the emission probabilities are util.normalize of the counts, for h in range K. Okay? And I think that's pretty much it. Let me see if this — okay.
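A minimal sketch of that M-step for the emissions, with made-up values for K, the observations, and the E-step's q (in the real code, q would come from forward-backward):

```python
# M-step for HMM emissions: given q[i][h], the E-step's posterior over the
# hidden value at each position i, accumulate weighted counts and
# normalize each row of counts[h][e]. All numbers here are toy values.
K = 3
observations = [0, 2, 2]
q = [[0.7, 0.2, 0.1],     # q[i][h]: posterior of hidden value h at position i
     [0.1, 0.8, 0.1],
     [0.2, 0.2, 0.6]]

emission_counts = [[0.0] * K for _ in range(K)]   # counts[h][e]
for i, e in enumerate(observations):
    for h in range(K):
        emission_counts[h][e] += q[i][h]          # add the weight, not 1

emission_probs = [[c / sum(row) for c in row] for row in emission_counts]
```

Each row h of `emission_probs` is the updated distribution p(e | h), obtained by count-and-normalize exactly as in the fully observed case, except that each "observation" contributes its posterior weight instead of a count of one.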
This should be a "for e", but — right? Okay. Okay, good. So let's run this, and then — let me just go over the code one more time. We initialize the start probabilities to uniform. For the transition probabilities, we use the plain English text: we just count and normalize. The emissions we initialize to uniform, and then, while running EM, we read the ciphertext; in the E-step, we use the current parameters to guess what the plaintext is; and then we update our estimate of the parameters given that guess. Okay. So here's the final moment of truth. Remember, each iteration is going to print out the best guess, so it'll look like gibberish for a little bit. It's not going to be perfect, but this is starting to look somewhat like English. There's an "and", and a "my" in there. Can anyone read this? "Alone without" — okay, that looks like English. Anyone want to guess what this text is? So here's the plaintext: "I've lived my life alone without anyone that I could really talk to until I had an accident with my plane," and then something I can't quite make out. So again, unsupervised learning is not magic — it doesn't always work — but here you at least see some signal. In the actual application, they got partway there and then iterated on it in a kind of manual way. Okay. Oops. All right. So that's it for Bayesian networks. On Wednesday we'll do logic.
Stanford_CS221_Artificial_Intelligence_Principles_and_Techniques_Autumn_2019 | Bayesian_Networks_2_ForwardBackward_Stanford_CS221_AI_Autumn_2019.txt

All right, let's get started. So we're going to continue talking about Bayesian networks, which we started on Monday. Just as a quick recap: we've been talking about Bayesian networks, which are a new paradigm for defining models. So what is a Bayesian network? You have a set of variables, which are nodes in a graph — for example, whether you have a cold, whether you have allergies, whether you're coughing, whether you have itchy eyes. These nodes are related by a set of directed edges, which capture various dependencies — for example, itchy eyes is caused by allergies, but not by cold or cough. And then, formally, for every variable in the Bayesian network, you have a local conditional distribution, which specifies the distribution over that variable given its parents. The parents of cough are cold and allergies, so you would have a local conditional distribution p of h given c and a. You do that for all the variables, and finally you take all the factors, or local conditional distributions, multiply them together, and get one glorious joint distribution over all the variables in your model. Okay? So in other words, to sum it up, you can think about Bayesian networks as factor graphs plus probability. They allow you to define ginormous joint distributions over lots of random variables, using factor graphs, which let you specify things very compactly. And moreover, we saw glimpses of how we can use the structure of factor graphs to permit efficient inference. So probabilistic inference in Bayesian networks is the task of — given a Bayesian network, which is this oracle about what you know about the world — looking at some evidence that you've found.
So it's raining or not raining, or you have itchy eyes, and so on. You condition on that evidence, and you also have a set of query variables that you're interested in asking about. The goal is to compute the probability of the query variables conditioned on the evidence that you see, big E equals little e — remember, uppercase is random variables, lowercase is actual values. So, for example, in the coughing case, it's the probability of a cold given the fact that you're coughing but don't have itchy eyes, okay? And this probability is defined by just the laws of probability, which we went over in the first slide of last lecture. The challenge is how to do this efficiently, okay? And that's going to be the topic of this class. So, any questions about the basic setup of what Bayesian networks are and what it means to do probabilistic inference? Okay. One high-level note about Bayesian networks is that I think they're really powerful as a way to describe knowledge. A lot of AI today is focused on particular tasks where you define some inputs, you define some outputs, and you train a classifier. And the classifier that you train can only do that one thing: input, output. But the paradigm behind Bayesian networks — and databases in general — is that you develop a knowledge source, which can be probabilistic and is captured by this joint distribution. And once you have it, you use the tools of probability to answer arbitrary questions about it. You can give me any pieces of evidence and any query, and it's clear what I'm supposed to do: compute these values. So it's a more flexible and powerful paradigm than just converting inputs to outputs, and that's why I think it's so interesting. Okay.
So today we're going to focus on how to compute these arbitrary inference queries efficiently. I'm going to start with forward-backward and particle filtering; these specialize to a specific class of Bayesian networks called HMMs, or hidden Markov models. And then we're going to look at Gibbs sampling, which is a much more general way of doing things. Okay. So a hidden Markov model, which I talked about last time and which we're going to go into in more detail this time, is a Bayesian network where there is a sequence of hidden variables and a corresponding sequence of observed variables. As a motivating example, imagine you're tracking some sort of object — in your homework, you'll be tracking cars. So H_i is going to be the location of the object, or car, at a particular time step i, and E_i is going to be some sort of sensor reading that you get at that time step. It could be the location plus some noise, or some sort of distance to the true object, and so on. Okay, so those are the variables — and it goes without saying that the hidden variables are hidden and the observed variables are observed. The distribution over all the variables is specified by three types of local conditional distributions. The first is the starting distribution: what is the probability of H_1? This could be uniform over all possible locations, just as an example. Then we have the transition distributions, which specify the distribution over a particular hidden variable — the true location of the object, H_i — given H_i minus 1. This captures the dynamics of how the object or car might move over time. For example, it could just be uniform over adjacent locations: cars can't teleport, they can only move to adjacent locations over one time step.
And finally, we have the emission distributions, which govern how the sensor reading is generated as a function of the location. Okay? This, again, could be something as simple as uniform over adjacent locations, if you expect to see some noise in your sensor: the sensor doesn't tell you exactly where the car is, but it tells you approximately where it is. And the joint distribution over all these random variables is given by simply the product of everything you see on the board. Let me just write this up on the board for reference. So we have the probability of H equals h, E equals e — when I write this, that means H_1 through H_n and E_1 through E_n, all the hidden random variables and all the observed random variables — and this is by definition equal to the start distribution over h_1, times the transitions: for i equals 2 to n, the probability of h_i given h_i minus 1. And then finally, for every time step i from 1 through n, I have the probability of the observation e_i given h_i. Okay? So multiply all these factors together, and that gives me a single number that is the probability of all the observed and all the hidden variables. Okay? Any questions about the definition of a hidden Markov model? Okay. So given one of these models — remember, with a Bayesian network I can answer any sort of query. I can ask what is the probability of H_3 given H_2 and E_5; I can do all sorts of crazy things. All of these are possible and efficient to compute, but we're going to focus on two main types of questions, motivated by the object-tracking example. The first question is filtering.
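The joint distribution on the board translates directly into code: p(h, e) is p_start(h_1), times the product of p_trans(h_i | h_{i-1}) for i from 2 to n, times the product of p_emit(e_i | h_i). This is a sketch of my own with toy stand-in distributions, not anything from the homework.

```python
# Joint probability of one complete assignment (h, e) under an HMM,
# term by term from the formula on the board.
def hmm_joint(h, e, p_start, p_trans, p_emit):
    p = p_start[h[0]]                       # start term
    for i in range(1, len(h)):
        p *= p_trans[h[i - 1]][h[i]]        # transition terms
    for hi, ei in zip(h, e):
        p *= p_emit[hi][ei]                 # emission terms
    return p

# Toy model: two hidden values {0, 1}, uniform start, "sticky"
# transitions, and a noisy sensor that is right 90% of the time.
p_start = {0: 0.5, 1: 0.5}
p_trans = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.2, 1: 0.8}}
p_emit = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.1, 1: 0.9}}

prob = hmm_joint([0, 0, 1], [0, 0, 1], p_start, p_trans, p_emit)
# 0.5 * 0.8 * 0.2 * 0.9 * 0.9 * 0.9 = 0.05832
```

Every query discussed below — filtering, smoothing — is defined in terms of sums of exactly these joint values over the hidden assignments.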
Filtering says: you're at a particular time step, let's say time step 3. What do I know about the true object location H_3, given all the evidence I've seen up until now? So this is real-time object tracking: at each point in time, you look at all the evidence so far, and you want to know where the object is. A similar question is smoothing. You're still looking at a particular time step — 3, let's say — but you're conditioning on all the evidence. You're looking at all the observations and thinking about this more retrospectively: where was the object at time step 3? Think about trying to reconstruct the trajectory, for instance. Okay? So those are called filtering and smoothing. So let's now try to develop an algorithm for answering these types of queries, and, without loss of generality, I'm going to focus on answering smoothing queries. So why is it the case that if I tell you I can solve all smoothing questions, I can also solve all filtering questions? [inaudible] So it is true that in filtering, the evidence is a subset. But the answers are going to be different depending on what evidence you condition on, so you can't literally just use one as the answer for the other. Yeah. You marginalize over the things that you, like, E_4, E_5, to get to — Yeah, you marginalize — that's the key idea. Suppose I had a smoother and I wanted to answer this filtering query: H_3 given E_1, E_2, E_3. Remember, last time we talked about how you can take leaves of Bayesian networks which are not observed and essentially wipe them away. So if you don't observe E_4, E_5, H_4, H_5, you can just pretend those things don't exist, and now you're back to a smoothing query where you're conditioning on all the evidence.
Okay, so we're going to focus on smoothing, and to make progress on this problem, I'm going to introduce a representation that's going to help us think about the possible assignments. And just to be clear, the reason this is not completely trivial is that if you have n hidden variables, there is an exponential number — 2 to the n, if they're binary — of possible assignments, and you can't just enumerate all of them. So you're going to have to come up with an algorithm that can compute things more efficiently. What we're going to do is introduce this lattice representation, which gives us a compact way of representing those assignments, and then we can see how to operate on that representation, okay? This is going to smell a lot like a state-based model — so we're kind of going backwards, but hopefully it'll make sense. The idea behind the lattice representation is that I'm going to have a set of rows and columns. Each column corresponds to a particular variable, so the first column corresponds to H_1. And each row corresponds to some setting of that variable: there are two possible things I can do, I can either set H_1 equal to 1 or H_1 equal to 2. The version I'm drawing on the board is a simplification of what I have on the slides, in the interest of space. And the second column is going to be either H_2 equals 1 or H_2 equals 2. So by going through these lattice nodes, which are drawn as boxes, I'm assigning random variables to particular values. Okay, so I'm going to connect these up. From this state I can either set H_2 equal to 1 or 2; from here I can also go to 1 or 2; and finally, let's just do H_3 equals 1 and H_3 equals 2, and similarly I can choose either one of them no matter where I am. And finally, I have an end state, okay? So first, notice that the size of the lattice is reasonably well controlled.
It's simply the number of time steps times the number of values that a variable can take on. So suppose there are n time steps and K possible values — locations, say — for each H_i. How many nodes are here? K times n. K times n, right? Okay, so that means this doesn't blow up exponentially. So now let's interpret a path from start to end. What does a path from start to end tell us? Let's take this path — what does this tell us? Yeah. It's like a particular assignment of the random variables. Yeah, it's a particular assignment of the variables. This one says: set H_1 to 1, H_2 to 1, H_3 to 1. This path says: set H_1 to 2, H_2 to 1, and H_3 to 2, and so on. Okay, so every path from start to end is an assignment to all the unobserved, or hidden, variables. So now, remember, each assignment comes with some probability, and we're going to try to represent those probabilities juxtaposed on this graph. Okay. So I'm going to go through each of these edges. Remember, the probability of each assignment is a product of the factors, and I'm going to basically take those factors and sprinkle them on the edges at the points where I can compute them — I'll explain what I mean by this. Maybe one preliminary thing I should mention: suppose for this example we are conditioning on E_1 equals 1, E_2 equals 2, and — sorry — E_3 equals 1, okay? So I'm conditioning on these things. Notice I'm not drawing them in here, because these are observed variables; I don't have to reason about what values they take on. I'm only going to consider the hidden variables, which I don't know. This is just going to be a reference. Okay. So let's start with H_1 equals 1. If you remember backtracking in CSPs, right?
We took factors, and whenever we could evaluate a factor, we put it down on that edge in the backtracking tree. So here, what can we do? We have the probability of H_1 equals 1, and we also have the first emission probability, which I can compute: the probability of E_1 equals the evidence I saw, which is 1, given H_1 equals 1, the value I've committed to here. So this is a number, essentially the weight, or cost, or score, or whatever you want to call it, that I incur when I traverse that edge. What about this one? I have the transition probability of H_2 equals 1 given H_1 equals 1, times the probability of E_2 equals whatever I observed, which is 2, given H_2 equals 1. Similarly, over here I have the probability of H_3 equals 1 given H_2 equals 1, times the probability of E_3 equals 1, which is what I observed, given H_3 equals 1. And over here there are no more factors left, so I just put a 1 there. You can check that when I traverse this path and multiply all these probabilities together, I get exactly the expression for H_1 equals 1, H_2 equals 1, H_3 equals 1, E_1 equals 1, E_2 equals 2, E_3 equals 1. And for each of these edges, I have an analogous quantity depending on the values I'm dealing with. Any questions about this basic idea? In the slides, this is basically what I just said. OK, so now, say I'm interested in smoothing. Let's do the example on the board, because that's the one I'll actually do. Suppose I'm interested in the probability of H_2 equals 2 given the evidence, E_2 equals 2, E_3 equals 1. So this is the query I'm interested in computing.
So how can I interpret this quantity in terms of the lattice? There's this H_2 equals 2 node here that's somehow privileged, and I'm asking, what is the probability of this given the evidence? Sum over all the probabilities of the paths. Yes, sum over the probabilities of the paths. Remember, every path from start to end is an assignment. Some of those paths go through this node, which means H_2 equals 2, and some don't. So if you take all the paths through this node, sum up their weights, and divide by the sum over all paths, you get the probability of H_2 equals 2 given the evidence. Let me write this: colloquially, the sum over paths through H_2 equals 2, divided by the sum over all paths. So now the problem is to compute the sum over all paths going through H_2 equals 2, or not going through it. Again, we don't want to sum over all paths literally, because that would take exponential time. So how can we do this? When you say sum, do you mean sum of the weights, or counts of how many? By sum I mean sum of the weights: every path has a weight, which is the product of the weights on its edges, and you sum those weights. OK, so what's an idea we can use to compute the sum efficiently? Sum, key word. Dynamic programming, yes. We're going to do this recursively. Let me show this slide. It's going to be a little different from the dynamic programming we saw before; it's more general, because we're not computing just one particular query, but a bunch of quantities that will allow us to compute all queries.
OK, so there are going to be three quantities I'm going to look at. There are forward messages F, which I'll explain in a bit, and backward messages B, and let's call the third one S. For every node, I'm going to compute two numbers. One, the forward message, is the sum of the weights of all partial paths going into that node. The other, the backward message, is the sum of the weights of all partial paths from that node to the end. Now, these are all meant to be probabilities, so the numbers should be less than 1, but just to keep things simple I'm going to put made-up integer weights on the edges, so we don't have to carry around decimal points. Remember, every edge has a weight associated with it. So now let me compute the forward messages. What is the sum over all paths going into this node? It's just 1. So, forward is green: 1 here, and 2 there, just copying the start edge weights. Now, recursively, what is the sum of the weights of all paths going into this node? I could have come from here, or I could have come from here. If I come from here, it's 1 times whatever the sum was there, so 1 times 1 is 1, plus 2 times 2 is 4, so I get 5. And here I have 1 times 1, which is 1, plus 1 times 2, which is 2, so that's 3. Hopefully you're checking my math here.
So what about this node? Recursively, the paths going in here could have come from this node or that node. So that's 2 times 5, which is 10, plus 1 times 3, which is 3, so 13. And this one is 1 times 5 plus 3 times 2, so 11. So 13 represents the sum over all paths going into H_3 equals 1. Now I can do the backward direction. In orange, the backward messages, which sum the paths going to the end. This is 1. Here I have 2 times 1 plus 1 times 1, so 3. This is 1 times 1 plus 2 times 1, so 3 as well. This is 1 times 3 plus 1 times 3, so 6. This is 2 times 3 plus 1 times 3, so 9, and then I'm done. Does that all make sense? These are compact representations of, essentially, the flow in and out of these lattice nodes. OK, so the magic happens when I combine these. For every node, I'm also going to multiply the two numbers together: that gives 6, 18, 18, 9, 13, 11, and that's it. So what happens when I multiply them together? Take another look at this node. What does 9 represent? 9 represents the sum over all paths going through here, because I can take any path coming in, combine it with any path going out, and any combination is a valid end-to-end path. So the total weight there is 9. Why do we multiply instead of sum here? Because the weight of a path is a product. Mathematically, what's going on is exactly factoring. Suppose I had numbers a, b, c, and d, and I could choose a or b, and then c or d.
So what are the possible paths? I can do a plus b, times c plus d, which is the sum over all possible paths: ac, ad, bc, and bd. I'm computing it in a factorized way rather than expanding everything out. That's mathematically what's going on when I multiply the forward and backward messages. And why are these called messages? The idea comes from the fact that you can think of the forward messages as being sent across the graph: the message here depends only on the neighbors here, and once I have these messages, I can compute my messages at the next timestep based on them. It's a summary of what's going on, and I can send the messages forward, and the same in the backward direction. OK, so once I have these values, how do I go back and compute my query, the sum over all paths through H_2 equals 2? It's 9. And the sum over all paths? Sorry, this value should be 15; I was wondering, did I screw anything else up? I think the rest is right. I was checking, because if you sum these two numbers you get 24, which is the sum over all paths going through this column, and that had better be the same number here, and also the same number there. Someone should have caught that. So these are all the paths going through this node, and if you look at all paths, that's 15 plus 9, which is 24. So the final answer is: the probability of H_2 equals 2, given these made-up weights, is 9 over 24. Any questions about that? OK, let me quickly go over the slides, which give a more mathematical treatment of what I did on the board. Hopefully one of the two ways will resonate with you.
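As a sanity check on the board arithmetic, we can brute-force the sum over all 8 paths. The edge weights below are my reconstruction of the made-up integer weights from the board (treat them as illustrative, not as the exact slide values):

```python
from itertools import product

# Reconstructed edge weights for the 3-step, 2-value toy lattice.
w_start = {1: 1, 2: 2}                               # start -> H1
w12 = {(1, 1): 1, (1, 2): 1, (2, 1): 2, (2, 2): 1}   # H1 -> H2
w23 = {(1, 1): 2, (1, 2): 1, (2, 1): 1, (2, 2): 2}   # H2 -> H3

total = 0    # sum of weights over all paths
through = 0  # sum over paths with H2 = 2
for h1, h2, h3 in product([1, 2], repeat=3):
    w = w_start[h1] * w12[(h1, h2)] * w23[(h2, h3)]  # end edge has weight 1
    total += w
    if h2 == 2:
        through += w

print(through, total)  # 9 24, so P(H2=2 | evidence) = 9/24
```

This matches the board: the paths through H_2 equals 2 sum to 9, all paths sum to 24.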
Define the forward message at every node as the sum, over all values at the previous timestep, of the forward message at the previous timestep times the weight on the edge from the previous value to the current value. The backward message is defined similarly: for every node, sum over all values at the next timestep, that is, over all outgoing edges, of the backward message at the next timestep times the weight of the edge into that next value. And define S as simply the product of F and B. That's what I did on the board. Finally, if you normalize S at each point in time, you get the distribution over the hidden variable at that time given all the evidence. To summarize: the forward-backward algorithm. This is actually a very old algorithm, developed for speech recognition a while back, probably in the '60s or so. You sweep forward and compute all the forward messages, then you sweep backward and compute all the backward messages, and then for every position you compute S_i for each i and normalize. So the output of this algorithm is not the answer to just one query, but all the smoothing queries you could want, because at every position you have the distribution over the hidden variable H_i. The running time is n times K squared: there are n timesteps, and at every timestep, for each of K possible values here, you look at K possible values there, so that's K squared per timestep. Interestingly, if you ask what the cost of computing a single query is, it would also be n times K squared. So it's kind of cool that you compute all the queries in the same time it takes to compute a single query. Question? So the question is: does this only work for hidden Markov models, or is it more general?
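The recurrences above fit in a few lines. This is a sketch with names of my choosing; evidence is assumed to be already folded into the per-step edge weights, as on the board:

```python
def forward_backward(w_start, w_trans):
    """Forward-backward on a lattice. w_start[v] is the start-edge weight
    of value v; w_trans[i][(u, v)] is the weight of the edge from value u
    at step i to value v at step i+1; end edges have weight 1.
    Returns, for each step, the normalized distribution over values."""
    n = len(w_trans) + 1
    vals = list(w_start)
    # Forward sweep: F[i][v] = sum of weights of partial paths into (i, v).
    F = [dict(w_start)]
    for i in range(n - 1):
        F.append({v: sum(F[-1][u] * w_trans[i][(u, v)] for u in vals)
                  for v in vals})
    # Backward sweep: B[i][v] = sum of weights of partial paths to the end.
    B = [{v: 1 for v in vals}]
    for i in reversed(range(n - 1)):
        B.insert(0, {u: sum(w_trans[i][(u, v)] * B[0][v] for v in vals)
                     for u in vals})
    # S[i][v] = F * B = sum over all full paths through (i, v); normalize.
    S = [{v: F[i][v] * B[i][v] for v in vals} for i in range(n)]
    return [{v: s[v] / sum(s.values()) for v in vals} for s in S]
```

On the toy board weights this reproduces the answer 9/24 for H_2 equals 2, and 6/24, 18/24 at the first step.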
There are certainly adaptations of this that work very naturally for other types of networks. One immediate generalization: if you have not just a chain structure but a tree structure, then the idea of passing messages along that tree, called belief propagation, works pretty much out of the box. For arbitrary Bayesian networks this won't work, because once you have cycles, you can't represent the problem as a lattice anymore. Any other questions? OK. So to summarize: the lattice representation allows us to think of paths as assignments, which is a familiar idea from state-based models. We use dynamic programming to compute the sums efficiently, but we're doing this extra thing where we compute all of the sums, for all the queries, at once. The forward-backward algorithm lets you share intermediate computation across the different queries. [inaudible] So the output of this algorithm is basically the probability of H_i given all the evidence, for every i. [inaudible] Oh, how would you actually use this, do you sample from it? It depends on what you want to do with it. You can think of the output as a distribution at each timestep, so it's like an n by K matrix of probabilities. From that you can sample if you want, or you might only be interested in particular points in time. OK. So let's move on to the second algorithm, which is called particle filtering. We're still interested in hidden Markov models, though particle filtering is again something much more general than that. And we're going to focus only on filtering questions: we're at a particular timestep, and we're only interested in the probability of the hidden variable at that timestep conditioned on the past.
And why might we not be satisfied with the forward-backward algorithm? Here's the motivating picture. Imagine you're doing the car tracking assignment, so you're tracking cars. Cars live on a huge grid, and at each timestep the value of H_i is some point on this grid. You don't know where the car is; you want to track it. If the grid is 100 by 100, that's 10,000 values, and if positions were continuous it would be even worse. So the K squared factor, where K is the number of values, could be like 10,000 squared, and that's a large number. Even though forward-backward on a hidden Markov model is not exponential time, even quadratic can be pretty expensive. The further motivation is that you really shouldn't have to pay that much. Say your sensor tells you the car is up here somewhere, and you know cars can't move all the way across the grid in one step. The algorithm is still going to consider every one of these possibilities, and most of them have essentially zero probability, so it's really wasteful to consider all of them. Can we somehow focus our energies on the regions that have actual high probability? Question? The question is whether each of these variables has to come from the same domain. They don't have to be from the same domain; in this presentation they are, just for simplicity.
But I think what you're also asking is: maybe a car only moves, say, forward, so there's some restriction on the domain. That's not going to be that significant, because you still don't know where the car is, so it doesn't cut out that many possibilities, maybe a factor of 2 or so. OK, so how do we go about making this more efficient? Let's look at beam search. Our final algorithm is not going to be beam search, it's going to be particle filtering, but beam search gives us some inspiration. Remember, in beam search we keep a set of K candidate partial assignments, and the algorithm is as follows. You start with a single empty assignment, and then for every timestep i, you consider all the candidates, which are assignments to the first i minus 1 variables, and you extend each one: the possible extensions set H_i to v for each v in the domain of H_i. Now you've amassed a set of extended assignments; there are a domain-size factor more of them, because each previous assignment got expanded once per value. So you prune: take all of them, sort them by weight, and keep the top K. Visually, remember from last time it looks like this. Here is object tracking with five variables: you start by assigning X_1 to 0 or 1, then you extend the assignments, prune down, extend, prune down, and so on. At the end of the day you have K candidates. Each candidate is a full assignment to all the variables with its actual weight, and at each intermediate time it's a partial assignment to a prefix of i random variables.
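The extend-and-prune loop can be sketched as follows; the `weight_edge` interface is my invention for illustration, not the course's code:

```python
def beam_search(domain, weight_edge, n, K):
    """Beam search over assignments (extend, then prune to top K).
    weight_edge(i, prev, v) is the weight of extending a partial
    assignment ending in prev with H_i = v; prev is None at i = 0."""
    beam = [((), 1.0)]  # list of (partial assignment, weight)
    for i in range(n):
        # Extend: every candidate branches once per domain value.
        extended = [(assn + (v,),
                     w * weight_edge(i, assn[-1] if assn else None, v))
                    for assn, w in beam for v in domain]
        # Prune: keep the K highest-weight candidates.
        beam = sorted(extended, key=lambda x: -x[1])[:K]
    return beam
```

On the toy lattice from the board, with a beam of size 2, the top candidate comes out as the maximum-weight path.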
And remember that beam search doesn't have any guarantees; it's just a heuristic, but it often works well in practice. The picture you should have in your head is the exponentially sized tree of all possible assignments, and beam search is a pruned breadth-first search along this tree that only pursues promising directions, so you don't have to keep track of everything. At the end, you get a set of candidates that are full assignments to all the random variables, and you can compute any quantities you want from them. The problem is that it's slow, for the same reasons I described before: it requires considering every possible value of H_i. It's a little better than forward-backward: forward-backward costs domain size times domain size, while beam search costs beam size times domain size, which is better, but I think we can still do better than that. And there's a more subtle point: as we'll see later, taking the best K might not be the best thing to do, because you want to maintain some diversity. A quick visual: suppose your beam consists only of cars over here. That's a bit redundant; you might want a broader representation. So the idea of particle filtering is to tweak beam search a little, and this expands into three steps, which I'll talk about. Does anyone need this on the board? Can I erase it? OK, we're good. You can look at the video if you don't remember. All right, so there are three steps, and we're going to do this pictorially over here.
The idea behind particle filtering is that I'm going to maintain a set of particles that represents where I think the object is. Imagine the object starts over here somewhere, so you have a set of particles, and I'm going to iteratively go through three steps: propose, weight, and resample. This is meant to be a replacement for the extend-and-prune strategy of beam search. OK, so the first step is propose. At any point in time, particle filtering maintains a set of partial assignments, known as particles, that tries to mimic a particular distribution. Jumping to the second timestep, we can think of this set of particles as representing the probability of H_1 and H_2 given the evidence so far. On the board I'm only going to draw each particle's value for H_2, because it's hard to draw trajectories, but you can think of particle filtering as maintaining this lineage as well. The key idea of the proposal step is that we want to advance time. We're interested in H_3, but we only have H_2, so how do we figure out where H_3 is? We propose possible values of H_3 based on H_2: we simply draw H_3 from the transition distribution. Remember, this distribution comes from the HMM; you're given the HMM, so you can do this. This gives us a set of new particles, each now extended by one, representing the distribution over H_1, H_2, H_3 given the same evidence. Pictorially, you should think of propose as taking each of these particles and sampling according to the transition, so the particles move in some direction.
It's almost like simulating where the cars are going, and it's done stochastically, randomly. Step two is to weight. So far, the new locations don't really reflect reality, because we also see E_3: at timestep three we get a new observation that hasn't been incorporated yet; we've just been simulating what might happen. The idea behind weighting is that for each particle, we assign a weight equal to the emission probability of E_3 given H_3. Again, the emission distribution is given by the HMM, so we can evaluate it whenever we like. The set of new, weighted particles can be thought of as representing the distribution where we've now conditioned on E_3 equals 1. So now each particle has some weight. On this picture it looks like the following: say the emission distribution is a Gaussian around the observation, and suppose the observation tells you the object is over here somewhere. Then these particles get higher weight and these get lower weight, and if they're far enough away, maybe almost zero weight. So I'll upweight these and start downweighting those. Nothing actually gets zeroed out, but you can think of these as downweighted and these as upweighted, given evidence E_3 telling you you're going in that direction. OK, so the final step is really about a resource allocation question.
Now we have weighted particles, and we need to somehow get back to unweighted particles. So which ones do we choose? Imagine you have particles whose weights are fairly uniform. You could just take the particles with the highest weight. This is very similar to what beam search does: keep the high-weight particles and nothing else. That's not a crazy thing to do, but it can give you the impression that you're more confident than you actually are. If the weights are fairly uniform, maybe this one is 0.5 and this one is 0.48, then you're breaking ties in a very biased way. The idea is that if you instead sample from this distribution, you get something much more representative than just taking the best. So how do you sample from this distribution? The general idea, and this is a useful module to have: if I give you a distribution over n possible values, I'm going to draw K samples from it. So if I have a distribution over four possible locations with these probabilities, or weights, I might draw four samples: I might pick a1, then a2, then a1 and a1 again. Some values won't be chosen at all if they have sufficiently low probability. Going back to the particle filtering setting, we have these old particles, which are weighted. First, I normalize the weights to get a distribution: add the numbers up and divide by the total. Then I sample according to that distribution, given the weights on the previous slide.
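This normalize-and-sample module is a one-liner with Python's standard library; `random.choices` accepts unnormalized weights and samples with replacement, which is exactly what the resampling step needs:

```python
import random

def resample(particles, weights, k):
    """Draw k particles i.i.d. with probability proportional to weight.
    random.choices normalizes the weights internally; sampling is with
    replacement, so a high-weight particle can be drawn many times and
    a low-weight particle may be dropped entirely."""
    return random.choices(particles, weights=weights, k=k)
```

For example, resampling 10,000 times from two particles with weights 3 and 1 returns the first particle roughly 75% of the time.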
So I might draw the particle 0, 1, 1 once, and I might draw it again, and I might not even keep this other particle. The idea is that if a particle has really, really low weight, like 0.00001, I shouldn't keep it around, because it continues to occupy memory and I have to keep track of it; it's basically gone anyway. So resampling regenerates the pool by focusing our efforts on the higher-weight particles. It might resample a high-weight particle multiple times, and sample the low-weight particles zero times. In this picture, resampling might, let's see: maybe I sample this one twice, this one twice, these don't get sampled, this one once, this one once. So the blue dots represent the particles after one round of particle filtering, where the particles have moved over here and shifted weight from there. That's why this is used for particle tracking: you can think of the swarm of particles as representing where the object might be, and over time, as I follow the transition dynamics and hit the particles with observations, the swarm moves. That's the picture you should have in your head. OK, so let's go through the formal algorithm. It's going to be very similar to beam search. You start with the empty assignment. Then you propose: take your partial assignments to the first i minus 1 variables, and for each one, sample once from the transition distribution and augment the assignment. So unlike beam search, where C prime was larger than C by a domain-size factor, here the size of C prime equals the size of C.
Second, I reweight: looking at the evidence, I apply the probability of the evidence given the particle's H_i, which gives a weight for every particle. Then I normalize that distribution, sample K elements independently from it, and that redistributes the particles toward where I think they're more promising. OK, let's go through a quick demo. Same problem as before; I set the number of particles to 100. I start with all the particles assigning X_1 to 0, and there are 100 copies of them. I extend, and notice that some of the particles go to 0 and some go to 1, with probability approximately proportional to the transitions. Then I redistribute, which changes the balance a little, and then extend, prune, extend; by prune I really mean reweight and resample. Notice that the particles stay fairly diverse, more diverse than beam search, partly because I'm using K equals 100 rather than 3. But you can see that some particles, like this one, have weight 0, so when I resample they just go away. Do you have a question? Why don't we aggregate all the ones into a single category and all the zeros into a single category? Is the branching pattern actually relevant? That's a good question. Notice that all of these zeros, for the purposes of X_4, are effectively the same, so if you only care about the marginals, you can collapse them; you're absolutely right. In this demo I'm maintaining the entire history so you can see the branching. OK, so this is that point. [LAUGHTER]
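Putting the three steps together, a filtering pass might look like this. The HMM interface names (`init_sample`, `trans_sample`, `emit_prob`) are my assumptions for the sketch, not the course's code:

```python
import random

def particle_filter(init_sample, trans_sample, emit_prob, evidence, K):
    """Propose / weight / resample over a sequence of observations.
    init_sample() samples H_1 from the prior, trans_sample(h) samples
    the next hidden state, emit_prob(e, h) scores an observation.
    Returns K approximate samples from P(H_n | E_1, ..., E_n)."""
    particles = [init_sample() for _ in range(K)]  # propose H_1
    for t, e in enumerate(evidence):
        weights = [emit_prob(e, h) for h in particles]               # weight
        particles = random.choices(particles, weights=weights, k=K)  # resample
        if t + 1 < len(evidence):
            particles = [trans_sample(h) for h in particles]         # propose
    return particles
```

For a two-state HMM that mostly stays put and mostly emits its own state, observing 1 three times in a row concentrates the particles on state 1, as the exact filtering computation would.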
If you only care about the last position and not the possible trajectories, you can collapse all the particles with the same H_i into one, and furthermore, if there are repeats, you can just keep track of the counts. That's actually what you'd do in your assignment; I'm giving you the more general picture because particle filtering is more general than just looking at the last timestep, though most of the time that's all you're interested in. A quick visual illustration: let's define a factor graph where the transitions are 1 or 0 depending on whether H_i and H_i minus 1 are close to each other, and O_i is some sensor reading. One thing I've been sliding under the rug: sometimes I've talked about local conditional distributions, and sometimes about factors. From the point of view of inference, it really doesn't matter; they're all just factors. In particular, if I give you a factor graph, which, remember, is not necessarily a Bayesian network, I can nonetheless still define a distribution by normalizing: multiply all the weights, sum over all assignments, and divide by that. These objects are called Markov networks, or Markov random fields, which is another object of study that we're not going to cover in this class, but it's a more general way to think about the relationship between factor graphs and distributions. We're mostly focusing on Bayesian networks in this class, but some examples are more general than that. OK, so you have this distribution, and you can play with this demo in the slides. If you click, the yellow dot shows you the observation at a particular point in time.
The observation is related to the true position of the object by the noise you define here. I've defined box noise, which means a uniform distribution over a box, I guess a 6 by 6 box. If I increase the number of particles to, say, 10,000, what I'll show you is a red blob that looks like it's trying to eat the yellow dot. [LAUGHTER] The red blob shows the set of particles, where the intensity is the count of particles in a particular cell. This swarm corresponds to the set of particles on the board, but since this is discretized, you can see the particles piling up on each other. And to see how well this is doing, show true position: the blue dot is the actual object position, the yellow dot is the noisy observation, and the algorithm is doing its best to track the blue. It's not perfect, because it's an approximate algorithm, but it gets most of the way there. Any questions about particle filtering? To summarize: you can do forward-backward if you can swallow a computation of domain size times domain size per step. If you have large domains but you think most values don't matter, particle filtering is a good tool, because it lets you focus your energies on the relevant part of the space. OK, so now let's revisit Gibbs sampling from a probabilistic inference point of view. Remember, we talked about Gibbs sampling last week as a way to compute the maximum weight assignment in an arbitrary factor graph, where the main purpose was to get out of local minima. Recall how Gibbs sampling works: you have a weight defined for complete assignments.
So unlike particle filtering or beam search, we're starting with complete assignments and trying to modify the complete assignments rather than trying to extend partial assignments. Uh, so you loop and you compute. You pick up a variable X_i and then you consider all the possible, um, values it can take on, and you choose, uh, the value with probability proportional to its weight. Okay. Let me show you this, um, example now, that uh, we saw last week. Uh, so, so same graph here. Uh, so we start with this complete assignment. Uh, and then we're gonna examine X1. X1 can take on two possible values. For each of these values, I'm gonna compute its weight. And remember in Gibbs sampling, I only need to consider the Markov blanket of that variable. Uh, the factors here are o1 and t1 because that's the only thing that changes. Everything else is a constant. Uh, and then I'll normalize and sample from that. Okay. So then, uh, I go onto the next variable and so on. So I sweep across all the variables, and, um, eventually the weight hopefully goes up, but not always up, because sometimes I might sample a value that has lower probability. Okay. And at the same time, I can do various things like, um, computing the, the marginal distribution over a particular variable, or of two variables. So I can, um, basically count the number of times I see particular patterns and I can normalize that to get a distribution over that particular pattern. [NOISE] Okay. So now let's try to interpret Gibbs sampling from a probabilistic, ah, point of view. So instead of just thinking about, ah, a weight as just a function, we can actually think about the probability distribution induced by that factor graph by again summing all the weights over all possible assignments. Normalizing, there you have a distribution.
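For a small factor graph you can write that normalization out by brute force. This is a sketch for illustration only (it enumerates all assignments, so it's exponential in the number of variables); each factor is assumed to be a function of a full assignment dict.

```python
from itertools import product

def induced_distribution(variables, domains, factors):
    # Weight(assignment) = product of all factor values; the distribution a
    # factor graph (Markov network) defines is weight / sum of all weights.
    weights = {}
    for values in product(*(domains[v] for v in variables)):
        assignment = dict(zip(variables, values))
        w = 1.0
        for f in factors:
            w *= f(assignment)
        weights[values] = w
    z = sum(weights.values())  # normalization constant
    return {vals: w / z for vals, w in weights.items()}
```

For example, two binary variables with a single factor that prefers agreement (2 if equal, 1 if not) put probability 2/6 on each agreeing assignment and 1/6 on each disagreeing one.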
Um, so the way to think about Gibbs sampling is now more succinctly and more traditionally written as the following, which is you loop through all the variables, and for every variable you're going to look at the probability of that variable taking on a particular value conditioned on everything else. So now I can, like, write down this probability, which is, you know, a nice way to think about what Gibbs sampling is actually, you know, doing. Um, and the, the guarantee with Gibbs sampling, um, under, you know, some conditions, which I won't get into, is that, as you run this for long enough, the sample that you get is actually a true sample from this distribution, as if you had sampled from this distribution. And if you did this multiple times, now you can actually compute any sort of marginal, ah, distribution you like. So while that guarantee sounds really nice, there are situations where Gibbs sampling could take exponential time to get there, so caveats. Okay. So let's look at a possible application of Gibbs sampling, image denoising. So suppose you have some sort of noisy image and you want to clean it up. So how can this be, um, helpful? So we can model this image denoising problem as, um, this factor graph where, um, you have a grid of, ah, pixel values, um, and, ah, they're connected, um, in this kind of grid-like way. So every X_i, where i is a location given by two numbers, is going to be either, ah, 0 or 1, um, and we're gonna assume that some subset of the pixels are observed. Um, and in case, ah, we observe it, then we're actually just going to have, ah, a factor that is actually a constraint that says the value of X_i has to be whatever we observed. And we have these, um, transition potentials that say neighboring pixels are more likely to be the same than different. So it assigns value 2 to pixels which are the same and 1 to pixels which are different. Okay. So is the model clear?
So now let's try to do Gibbs sampling in this, in this model [NOISE]. Just to give you, um, a concrete idea of what this looks like. Um, so we now look at- I'm not gonna draw the entire, ah, [NOISE] grid, but I'm gonna center around a particular node that we're interested in, um, sampling. So there's more stuff over here [NOISE]. Okay. So, um, and remember in, in Gibbs sampling, um, at any point in time, all the variables have some sort of preliminary assignments. So this might be 1, um, ah, 1, 1, and 0, and this might be 1. Okay. Um, so now I sweep through and I'm gonna pick up this variable, and you're gonna say, [NOISE] shall I try to change its value? First of all, you ignore the old value because it doesn't, [NOISE] ah, factor into the algorithm. And now you're going to consider, um, ah, let's say, this is X_i. So X_i, there's two possible, um, values here, 0 and 1, right? So, um, and I'm gonna look at the weight. So if it's 0, then I'm gonna evaluate each of these, ah, factors based on that. So remember the transition potential is, um, well, I'm not gonna write it down. But let's consider 0 here. So these are different, that means I'm gonna get a 1. Ah, these are different, that's gonna be a 1. These are different, that's gonna be a 1. Um, these are the same and I'm gonna get a 2. Okay. Now I try 1. Um, these are the same. These are the same. These are the same, and, ah, these two are different. Um, so for every assignment I have this weight, so this is 2, this is 8, um, and then I, ah, normalize. So this is gonna be 0.2 and this is gonna be 0.8, [NOISE] and I draw, um, ah, flip a coin with heads probability 0.8, and whatever I get I put down. So with 0.8 probability I put that 1 back, and so on. Okay. And here's another example which I'm not gonna go through. Okay. So Gibbs sampling is gonna do that. So now let's look at this, ah, um, concrete demo. Okay. So. All right. Okay.
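The single-pixel update just worked through on the board looks like this in code. A sketch using the lecture's potentials (2 if neighboring pixels agree, 1 if they differ); `rng` is just a source of uniform random numbers.

```python
import random

def gibbs_pixel_update(neighbors, rng=random.random):
    """Resample one binary pixel given its neighbors' current values."""
    weights = []
    for v in (0, 1):  # candidate values for X_i; the old value is ignored
        w = 1.0
        for nb in neighbors:
            w *= 2.0 if nb == v else 1.0  # transition potential per neighbor
        weights.append(w)
    total = weights[0] + weights[1]
    probs = [w / total for w in weights]   # normalize the two weights
    new_value = 1 if rng() < probs[1] else 0
    return new_value, probs
```

With the board's neighbor values 1, 1, 1, 0, the weights come out 2 and 8, i.e. probabilities 0.2 and 0.8.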
So what you're looking at here is this grid of pixels. Um, white means that the pixel is unobserved. Black or red means that it's observed to be whatever color, ah, you see on the screen. And, ah, so this is somewhat of a noisy image, and the goal is to fill in the white pixels so the picture makes sense. And visually, you guys can probably look at this and see the hidden, ah, text. No? [NOISE] Okay. [LAUGHTER] Okay. So your denoising system is pretty good. Um, okay. So I'm gonna run Gibbs sampling, so we click. And what you're seeing here, each iteration, I'm gonna go through every pixel and apply exactly the algorithm of whatever I did on the board. Okay? So you can see that, um, these are just samples from, ah, the set of unobserved random variables. Okay. So to turn this into something more, um, useful, instead of looking at a particular sample, you look at, um, the marginal, which is, for every location, what is the average pixel value that I've seen so far? Ah, and if you do that, then you can see a little bit clearer picture of, um, 221, and it's not gonna be perfect because the model that we have is fairly simplistic. All it says is neighboring pixels tend to have the same, um, ah, color, and it has no notion of, you know, letters or something. Okay. But you can see kind of a simple example of, ah, you know, Gibbs sampling at work. Um, there's a bunch of parameters you can play with. Um, here's, ah, another picture of a cat, and you can play around with the coherence, which is how sticky the, the, ah, transition constraints are. You could try to use ICM, which won't work, um, at all, and so on. Okay. Any questions? I think we might actually end early today. Okay. So, um, just to kind of sum up: last week, on Monday, we defined new models. So we defined a Bayesian network or a factor graph.
And today, we're focusing on the question of how do we do probabilistic inference, and for some set of models I've shown you how to do this. Um, there's a number of algorithms here: forward backward, which, ah, works for HMMs and is exact; particle filtering, which works for HMMs, although it can be generalized, and which is approximate; and Gibbs sampling, which works for general factor graphs and which is also approximate. Each of these algorithms, we've seen kind of these ideas in previous incarnations. So forward backward is very similar to variable elimination, um, because variable elimination also, ah, computes things exactly. Um, particle filtering is like beam search, um, to compute things approximately, and Gibbs sampling is like, um, iterated conditional modes or the Gibbs sampling, um, which, ah, we saw last week. Okay. So, ah, next Monday, we're gonna look at how we do learning. So up until now, in the Bayesian network, all the probabilities are fixed, and now we're going to actually start to do learning. Um, I should say that maybe this learning sounds a bit scary because there's already a lot of machinery behind inference and factors and all this stuff, but you'll be pleasantly surprised that learning is actually much simpler than inference. So stay tuned.
[Stanford CS221: Artificial Intelligence: Principles and Techniques, Autumn 2019. Lecture: Markov Decision Processes 2 - Reinforcement Learning.]

So this lecture is going to be on reinforcement learning. Um, I will, in the interest of time, skip the, the quiz. So, so the way to think about how reinforcement learning fits into what we've done so far is, you remember this class has this picture, right? So we talk about different models, and we talk about different algorithms - inference algorithms to be able to predict using these models and answer queries - and then we have learning, which is, how do you actually learn these models, right? So every type of model we go through, we have to kind of check the boxes for each of these [NOISE] pieces. So last lecture, we talked about Markov decision processes. This is a kind of a modeling framework; it allows you to define models, for example, for crossing volcanoes or playing dice games or taking trams. Um, what about inference? So what did we have here last time? We had value iteration, which allows you to compute the optimal policy, and policy evaluation, which allows you to estimate the value of, uh, a particular policy. So these are algorithms that, um, will operate on an MDP, right? And we sort of looked at these algorithms last time. So this lecture is gonna be about learning. Uh, I'll just put RL for now. RL is not an algorithm; it, uh, refers to the family of algorithms that fits in, uh, this week. Um, but that's the way you should think about it. RL allows you to, um, either explicitly or implicitly estimate MDPs. And then once you have that, you can do all these, um, uh, inference algorithms to figure out what the optimal policy is. Okay? [NOISE] So just to review. Um, so what is an MDP? Um, the clearest way- remember to think about it is- it's, um, in terms of a graph. So you have a set of states. So in this dice game, we have in and end. So we have a set of states.
From every state, you have a set of actions coming out. So in this case, uh, stay and quit. Um, the actions take you to chance nodes, uh, where you don't get to control what happens, but nature does, and there's randomness. So out of these chance nodes are transitions. Each transition takes you into a state; it has some probability associated with it, so two-thirds in this case. It also has some reward associated with it, which you pick up along the way. So naturally, this has to be one-third, with reward 4. And remember from last time, this one was probability 1, with reward 10. Okay. So, um, and then there is, you know, uh, the discount factor Gamma, which is a number between 0 and 1 that tells you how much you value the future. Uh, by default, you can think about it as 1, uh, for simplicity. Okay. So this is a Markov decision process. Um, and what do you do with one of these things? [NOISE] We, um, have a notion of a policy - see, I'll write it over here. So a policy, denoted Pi - uh, let me use green - um, so a policy, Pi, uh, is a mapping from states to actions. A policy, when you apply it, says, "When I land here, where should I go? Should I do stay or quit?" I mean, this is kind of a simple MDP; otherwise, there'd usually be more states, and for every state, the blue circle will tell you where to go. Um, and when you run a policy, uh, what happens? Uh, you get a path, um, which I'm going to call an episode. So what do you do? You start in state S_0, that will be in, in this particular example. Um, you take an action a_1, let's say stay. Uh, you get some reward, in this case it will be 4. You end up in a new state, um, S_1. And suppose you go back to in, and, uh, then you take another action, maybe it's stay, reward is 4 again, and, and so on, right? So this sequence is a path or, in RL speak, it's, uh, an episode. Um, let's see. So let me- let me erase this comment. Uh, so this is an episode.
Um, and it goes until you hit the end state. Um, and, uh, what comes out of the episode? You can look at a utility, which we're gonna denote U, which is the discounted sum of rewards along the way, right? So if you, um, you know, stayed three times and then went there, you would have, uh, a utility of 4 plus 4 plus 4 plus 4, so that'll be 16. Okay? So last lecture, we didn't really work with, um, the episodes and the utility, um, because we were able to define a set of recurrences that, uh, computed the expected utility. So, uh, remember, we don't know what's going to happen. So, uh, there's a distribution, and in order to optimize something, we have to turn it into a number; that's what expectation does. Um, so there's two, uh, concepts that we had from last time. One is the value function of a particular policy. So V_Pi of S is the expected utility if you follow Pi from S. What does that mean? That means, if you take a particular S - let's take, uh, in - and I put you there, and you run the policy, so stay, and you traverse this graph, um, you will have different utilities coming out, and the average of those is going to be V_Pi of S. Similarly, there's a Q value: the expected utility if you first take an action from a state S and then follow Pi. So what does that mean? That means if I put you on one of these, uh, red chance nodes and you basically play out the game, um, and average the resulting utilities that you get, what number do you get? Okay? [NOISE] Um, and we saw recurrences that related these two. So for V_Pi of S, um, the name of the game with recurrences is to kind of delegate to some simpler problems. So you first, uh, look up what you're supposed to do in s, that's Pi of s, [NOISE] and that takes you to a chance node, which is (s, Pi(s)), and then you say, "Hey, how much, um, utility am I going to get from that node?"
And similarly from the, the chance nodes, you have to look at all the possible successors: the probability of going into that successor, times the immediate reward that you get along the edge plus the discounted, um, value of the future when you end up in, um, S-prime. Okay. So any questions about this? This is kind of review of, uh, Markov decision processes from, um, last time. Okay. So now we're about to do something different. Okay. So, um, if you say goodbye to the transitions and rewards, that's called reinforcement learning. So remember, in Markov decision processes, I give you everything here and you just have to find the optimal policy. And now, I'm gonna make life difficult by not even telling you, um, what rewards and what transitions you have. Okay. So just to get a kind of flavor of what that's like, um, let's play a game. So, um, I'm going to need a volunteer. I'll, I'll give you the game, but this volunteer, you have to have a lot of, uh, grit and, uh, persistence, because this is not gonna be [NOISE] an easy game. You have to be one of those people that even though you're losing a lot, uh, you're still gonna not give up. Okay. So here's how the game works. Um, so for each round, r equals, uh, 1, 2, 3, 4, 5, 6, and so on, you're just going to choose A or B, um, red pill or blue pill, I guess. Um, and you, you move to a new state - so the state is here - and you get some reward, which I'm gonna show here. Okay. And the state is 5, 0; that's the initial state. Okay. So everything clear about the rules of the game? [LAUGHTER] That's reinforcement learning, right? [LAUGHTER] We don't know anything about how it works. Okay. So any volunteers? Um, how about you in the front? Okay. Okay. Okay. Let me, let me fix that. A. A, A, [LAUGHTER] [NOISE] [LAUGHTER] B, B, A, [LAUGHTER] A. It's an MDP, so, uh, in that case that helps. B, B, B, B, B, just infinitely click B with an A, I guess. [LAUGHTER] It's like I'm losing a point every time. I warned you.
[LAUGHTER] Okay. A, A, A, A, B, A, A, A, A, A, A. [LAUGHTER] Okay. [APPLAUSE] I'm glad this worked because last time it took a lot longer [LAUGHTER]. Um, but, you know, so what did you have to do? I mean you don't know what to try so you try A and B. And then hopefully you're building an MDP in your head, right? Yeah, right? [LAUGHTER] Okay. Just smile and nod. Um, and you have to figure out how the game works, right? So maybe you noticed that hey, A is, you know, decrementing and B isn't going up but then there's this other bit that gets flipped. So, um, okay you figure this out, and in the process you're also trying to maximize reward which, uh, apparently I guess wasn't - doesn't come until the very end because, um, it's a cruel game. [LAUGHTER]. Okay. So how do we get an algorithm to kind of do this and how do we think about, uh, us doing this? So just to kind of make the contrast between MDPs and reinforcement learning sharper, so Markov decision process is a offline thing, right? So you already have a mental model of how the work- world works. That's the MDP, that's all the rewards and the transitions and the states and actions. And you have to find a policy to collect maximum rewards. You have it all in your head, so you just kind of think really hard about, you know, what is the best thing. It's like "Oh, if I do this action then I'll go here" and, you know, look at the probabilities, take the max of whatever. So reinforcement learning is very different. You don't know how the world works. So you can't just sit there and think because thinking isn't going to help you figure out how the world works. Um, so you have to just go out and perform actions in the world, right? And in doing so you - hopefully you'll learn something but also you'll, um, you'll get some rewards. Okay so-so to maybe formalize the, um, the paradigm of RL. So you can think about it as an agent. That's, uh, that's you. 
Uh, and you have the environment, which is everything else that's not the agent. The agent takes actions - so it sends actions to the environment - and the environment sends you back rewards and a new state. And you keep on doing this. Um, so what you have to do is figure out, first of all, how am I going to act? If I'm in a particular state S_t minus 1, what action should I choose? Okay? So that's one, um, one question. And then you're gonna get this reward and observe a new state: what should I do to update my mental model of the world? Okay? So these are the main two questions. I'm going to talk first about how to update the parameters, and then later in the lecture I'm going to come back to how you actually go and, you know, explore. Okay. So I'm not going to say much here, but, you know, in the context of volcano crossing, um, just to kind of think through things, every time you play the game, right, you're gonna get some utility. So this is the episode over here, the s, a, r sequence. Sometimes you fall into a pit. Sometimes you go to a hut. Um, and based on these experiences - if I hadn't told you what any of the actions do, or the slip probability, or anything - how would you kind of go about, um, solving this problem? That's a, that's a question. Okay, so there's a bunch of algorithms. I think there's gonna be 1, 2, 3, 4 - at least four algorithms that we're going to talk about, with different characteristics. But they're all going to kind of build on each other in some way. So the first class of algorithms is Monte Carlo methods, right? So, um, okay. So whenever you're doing RL or any sort of learning, uh, the first thing you have is just data.
Let's, let's suppose that you run even a random policy, you're just gonna -because in the beginning you don't know any better, so you're just going to try random actions and, uh, but in the process you're gonna see "Hey, I tried this action and it led to this reward and so on". So in a concrete example just to make, uh, things a little bit more crisp, it's gonna look something like in, uh, and then you take, uh, you know you did, um, let's see. Let me try to color coordinate this a little bit. Um, so you're in n, you do, um, stay. And then you get a reward of 4 and then you're back in n, you do a stay, and then you get 4 and then maybe you're done, you're out. Okay. So this is an example episode just to make things concrete. So this is s_0, a_1, r_1, s_2, s_1. I keep on incrementing too quickly. Um, a_2, r_2, s_3, okay? Okay so what should you do here? Alright so, um, any ideas? Model-based Monte Carlo. So if you have MDP you would be done. But we don't have MDP, we have data. So what can we do? [NOISE] Yeah. [inaudible]. Yeah. Let's try to build a MDP from that data. Okay. So, um, the key idea is estimate the MDP. Um, so intuitively, we just need to figure out what the transitions and rewards are and then we're done, right? Um, so how do you do the transitions? Um, so the transition says if I'm in state S and I take action A, what will happen? I don't know what will happen, but let's see in the data what will happen. So I can look at the number of times I went into a particular S prime and then divide it over the number of times I attempted any- this action from that state at all and just take the ratio, okay? And for the rewards, um, this is actually fairly, you know, easy, when I - because when I observe a reward, um, from S, A and S prime. I just write it down and say that's the reward, okay? Okay. So on the concrete example what does this look like? So remember now, here's the MDP graph. 
I don't know what the, uh, transition distribution or the rewards are. Um, so let's suppose I get this trajectory. What should I do? So I get stay, stay, stay, stay, and I'm out, okay? So first I, I can write down the reward of 4 here, and then I can, um, estimate the probability of, you know, transitioning. So three out of four times I went back to in; one out of four times I went to end. So I'm gonna estimate it as three-fourths, one-fourth. Okay. But then suppose I get a new data point. So I have stay, stay, end. So what do I do? I can add to these counts, um, so everything is kind of cumulative. So one more time I went into in and another time I went to end, so this becomes four out of six, two out of six. And suppose I see another time when I just go to end, so I'm just going to increment, uh, this counter, and now it's three out of seven and four out of seven, okay? So pretty, um, pretty simple. Okay, so for reasons I'm not going to get into, this process actually, you know, converges - if you do this kind of thing, uh, you know, a million times, you'll get pretty, um, accurate estimates. Yeah, question? Yes, the question is, you don't know the rewards or the transitions, uh, but yes, you do know the set of, ah, states and the actions. The set of states, I guess, you don't have to know them all in advance; you just observe them as they come. The actions, you need to know because you- you are an agent and you need to play the game. Yeah, good question. Okay. So, yeah. Does this work with variable costs? Like, there is a probabilit- or variable reward around it. There's a probability you get some rewards for probability [inaudible]. Yeah. So the question is, does this work with variable, uh, rewards. Um, and if the reward is not a function of, um, s, a, s-prime, you would just take the average of the rewards that you see. Yeah. Okay. So- so what do you do with this? So after you estimate the MDP - so all you need is the transitions and rewards.
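The counting just done on the board can be written down directly. This is a minimal sketch of model-based Monte Carlo, assuming (as in the question below) that the reward is a deterministic function of the (s, a, s') triple.

```python
from collections import defaultdict

class MDPEstimator:
    """Model-based Monte Carlo: estimate transitions and rewards by counting."""
    def __init__(self):
        self.counts = defaultdict(int)   # (s, a, s') -> times observed
        self.totals = defaultdict(int)   # (s, a)     -> times attempted
        self.rewards = {}                # (s, a, s') -> observed reward

    def observe(self, s, a, r, s_next):
        self.counts[(s, a, s_next)] += 1
        self.totals[(s, a)] += 1
        self.rewards[(s, a, s_next)] = r

    def transition(self, s, a, s_next):
        # Estimated T(s, a, s') = (# times s,a led to s') / (# times s,a tried)
        return self.counts[(s, a, s_next)] / self.totals[(s, a)]
```

Replaying the dice-game episodes above reproduces the board's fractions: 3/4 and 1/4 after the first episode, then 4/7 and 3/7 after all three.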
Um, so now we have an MDP. It may not be exactly the right MDP, because this is estimated from data so it's not gonna match exactly, um, but nonetheless, we already have these tools from last time. You can do value iteration to compute, um, the optimal policy on it, and then you're done, you run it. In practice, you would probably kind of interleave the learning and the, the optimization, but, uh, for simplicity we can think about it as a two-stage process where you gather a bunch of data, you estimate the MDP, and then you are off. Okay. There's one problem here. Does anyone know what the problem might be? You can actually see it by looking at the slide. Yeah. Well, with the policy you're following, you'll never explore the quit branch of the world. Yeah, yeah. You didn't explore this at all, so you actually don't know how much reward is here. Maybe it's like, uh, you know, 100, right? So this is actually a pretty big problem: unless you have a policy that, uh, actually goes and covers all the, the states, you just won't know, right? And this is kind of natural because there can always be, you know, a lot of reward hiding in some state, but unless you see it, you just don't know. Um, okay. So this is a key idea - a key challenge, I would say - in reinforcement learning: exploration. So you need to be able to explore, um, the state space. This is different from normal machine learning, where data just comes in passively and you learn your nice function and then you're, you're done. Here, you actually have to figure out how to get the data, and that's, that's kind of one of the, the key challenges of RL. So we're gonna go back to this, this problem, and I'm not really gonna, uh, try to solve it now.
Um, for now you can just think about Pi as a random policy because a random policy eventually will just, you know, hit everything for, you know, finite, uh, small, uh, state spaces. Okay. So, um, okay. So that's basically end of the first algorithm. Let me just write this over here. So algorithms, we have model-based, um, Monte Carlo. And the model-based is referring to the fact that we're estimating a model the- in particular the MDP. The Monte Carlo part is just referring to the fact that we're using samples, uh, to estimate, um, a model or you're basically applying a policy multiple times and then estimating, uh, the model based on averages. Okay. So- so now, I'm going to present a- a different algorithm and it's called, uh, model-free Monte Carlo. And you might from the name guess what we might want to do is maybe we don't have to estimate this model, okay? And why- why is that? Well, what do we do with this model? Um, what we did was we, you know, uh, presumably use value iteration to, um, you know, compute the optimal policy. And the- remember this, uh, recurrence, um, for computing Q_opt, um, it's in terms of T and reward, but at the end of the day all you need is Q_opt. If I told you, um, Q_opt (s, a) which is, um, what is Q_opt (s, a)? It's the, um, the maximum possible utility I could get if I'm in, chance node sa and I follow the optimal policy. So clearly if I knew that, then I would just produce the optimal policy and I'd be done, I don't even need to know- understand the- the rewards and transitions. Okay. So with that, uh, insight is model-free learning, which is that we're just going to try to estimate Q_opt, um, you know, directly. Um, sometimes it can be a little bit confusing what is meant by model-free. So Q_opt itself you can think about as a- as a model, but in the context of MDPs in reinforcement learning, generally people when they say model-free refers to the fact that there's no MDP model, not that there is no, um, model in general. Okay. 
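To make the "if I knew Q_opt, I'd be done" point concrete: given any Q table (here a hypothetical dict keyed by (state, action) pairs), the induced policy is just an argmax per state. A sketch:

```python
def greedy_policy(q, actions):
    # For each state, pick the action with the highest Q value.
    return {s: max(acts, key=lambda a: q[(s, a)]) for s, acts in actions.items()}
```

So estimating Q_opt directly, without transitions or rewards, is enough to act optimally.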
So, um, so we're not gonna get to Q_opt, uh, yet. Um, that will come later in the lecture. So let's warm up a little bit. Um, so here's our data staring at us. Um, let's look at a related quantity, Q Pi. Remember what Q Pi is. Q Pi (s, a) is the expected utility if we start at s and first take action a and then follow policy Pi, right? So, um, I guess another way to write this is, if you are at a particular, uh, time step t, you can define u_t as the, the discounted sum of the rewards from that point on, which is, you know, the reward you immediately get, plus the discounted reward at the next time step, plus the gamma-squared discounted reward two time steps in the future, and so on. And, um, what you can do is you can try to estimate Q Pi from this utility, right? So this is the utility, uh, that you get from time step t onwards. So suppose you do the following. Suppose you average the utilities that you get, only on the time steps where I was in a particular state s and I took an action a. Okay. So suppose you have a bunch of episodes, right? So, um, here pictorially, um, uh, let's see. [NOISE] Here's another way to think about it. So I get a bunch of episodes. I'm gonna do, do some abstract, um, drawing here. Um, so every time, you know, s, a shows up - maybe it shows up here, maybe it shows up here, maybe it shows up here - you're going to look at: how much reward do I get from that point on? How much reward do I get from here on? How much reward do I get from here on? And, um, average them, right? So there's a, kind of, a technicality, which is that if s, a appears here and it also appears, uh, after it, then I'm not going to count the second one, because if I count both I'm kind of double counting. Um, in fact it works both ways, but conceptually it's easier to think about just taking the first occurrence of an s, a, assuming you don't kind of go back to the same position.
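The utility-averaging estimate just described can be kept as a running average per (s, a) pair. This is a sketch that assumes the caller feeds in one (s, a, u) triple per first occurrence of (s, a) in each episode:

```python
from collections import defaultdict

class ModelFreeMC:
    """Estimate Q_pi(s, a) by averaging the utilities observed after (s, a)."""
    def __init__(self):
        self.q_hat = defaultdict(float)  # (s, a) -> current average utility
        self.n = defaultdict(int)        # (s, a) -> number of utilities seen

    def update(self, s, a, u):
        self.n[(s, a)] += 1
        # Incremental running average: Q <- Q + (u - Q) / n
        self.q_hat[(s, a)] += (u - self.q_hat[(s, a)]) / self.n[(s, a)]
```

On the example coming up, feeding utilities 4 and then 8 for (in, stay) gives an estimate of 6, their average.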
Okay, so let's do that on a concrete example. So Q-pi, let's just write it. Q-pi s, a is a thing where we're trying to estimate and this is, uh, a value associated with every chance node s, a. So in particular, I've drawn it here. I need a value here and, uh, a value here. Okay? So suppose I get some data, I stay and then I got- go to the end. Uh, so what's my utility here? It's not a trick question. 4. 4, yes. Um, sum of 4 is 4. Okay, so now I can say, "Okay it's 4." And that's my best guess so far. I mean, I haven't seen anything else, maybe it's 4. Um, so what happens if I play the game again and I get 4, 4? So what's the utility here? 8. 8? So then I update this to the average of 4 and 8, do it again, I get 16 then I average, uh, in the 16. Okay? And, um, and again, you know, I'm using stays so I don't learn anything about this, in practice you would actually go explore this and figure out how much utility you're seeing there. So in particular, notice I'm not updating the rewards nor the transitions because I'm model-free, I just care about the Q values that I get which are the values that sit at the nodes not on the edges. Okay, so one caveat is that we are estimating Q-pi not Q-opt. We'll revisit this, um, later. Um, and another, uh, thing to kind of note is the difference between what is called On-policy and Off-policy. Okay? So in reinforcement learning, you're always following some policy to get around the world right? Um, and that's generally called the exploration-policy or the control policy um, and then there's usually some other thing that you're trying to estimate, usually the- the value of a particular policy and that policy could be the same or it could be different. So On-policy means that, uh, we're estimating the value of the policy that we're following, the data-generating policy. Off-policy means that we're not. Okay? So um, so in particular is, uh, model-free Monte Carlo, um, On-policy or Off-policy? 
It's on-policy, because I'm estimating Q_pi of the policy I'm following, not Q_opt. What about model-based Monte Carlo? It's a slightly weird question, but in model-based Monte Carlo we're following some policy, maybe even a random policy, while estimating the transitions and rewards, and from those we can compute the optimal policy. So you can think of it as off-policy, though that's maybe not completely standard usage. Okay. So any questions about what model-free Monte Carlo is doing? Let me just write it down: model-based Monte Carlo is trying to estimate the transitions and rewards, and model-free Monte Carlo is trying to estimate Q_pi. As a note, I put hats on any letter that's a quantity estimated from data — that's what statisticians do — to differentiate it from Q_pi without the hat, which is the true value of that policy, which I don't have. Any questions about model-free Monte Carlo? Both of these algorithms are pretty simple: you look at the data and you take averages. Yeah — so model-free is not trying to optimize the policy? The question is whether model-free makes changes to the policy or keeps it fixed. This version I've given you is only for a fixed policy; with the general idea of model-free, as we'll see later, you can also optimize the policy. Okay. So now what we're going to do is theme and variations on model-free Monte Carlo. It's going to be the same algorithm, but I want to interpret it in slightly different ways that will help us generalize it later. Yeah — are there certain problems where model-free does better than model-based?
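Putting the averaging idea together, here's a minimal sketch of model-free Monte Carlo as I understand it from the lecture — first-visit averaging of utilities per (s, a). The episode format (lists of (s, a, r) triples) is my own choice for illustration:

```python
from collections import defaultdict

def model_free_monte_carlo(episodes, gamma=1.0):
    """Estimate Q_pi(s, a) by averaging, over episodes, the discounted
    utility observed after the FIRST occurrence of each (s, a) pair."""
    totals = defaultdict(float)   # sum of observed utilities per (s, a)
    counts = defaultdict(int)     # how many utilities went into each sum
    for episode in episodes:      # each episode: list of (s, a, r) triples
        seen = set()
        for t, (s, a, _) in enumerate(episode):
            if (s, a) in seen:    # first-visit only, to avoid double counting
                continue
            seen.add((s, a))
            u = sum((gamma ** k) * r for k, (_, _, r) in enumerate(episode[t:]))
            totals[(s, a)] += u
            counts[(s, a)] += 1
    return {sa: totals[sa] / counts[sa] for sa in totals}
```

On the dice-game data from the lecture — one episode with utility 4, one with utility 8 — this returns the running average 6 for (in, stay), just as in the board example.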
This is actually a really interesting question. You can show that if your model of the world is correct, model-based is the way to go, because it's more sample efficient, meaning you need fewer data points. But it's really hard to get the model correct in the real world. So recently, especially with deep reinforcement learning, people have gotten a lot of mileage out of going model-free, because — jumping ahead a little — you can model the Q function with a deep neural network, which gives you extraordinary flexibility and power without having to solve the hard problem of constructing the MDP. Okay. So there are three ways you can think about this update. The first, which we already talked about, is the averaging idea: you look at the utilities you see whenever you encounter an (s, a), and you average them. Here is an equivalent formulation. The way it works is that for every (s, a, u) that you see — and you see a stream of them, (s, a, u), (s, a, u), and so on — you perform the following update: take the existing value and form what we call a convex combination. 1 minus eta and eta sum to 1, so it's balancing between two things: the old value I had and the new utility I just saw. And eta is set to 1 over (1 plus the number of updates). Let me do a concrete example; I think it will make very clear what's going on. Suppose my data looks like this: I get a 4, then a 1 and a 1. These are the utilities — the u's; I'm ignoring the s and a, just assume they're all the same. First, let's initialize Q_pi-hat to 0.
The first time, I haven't done any updates yet, so eta is 1, and I compute (1 minus 1) times 0 plus 1 times 4, where 4 is the first u that comes in. So this is 4. What about the next data point? Now eta is one-half, so I take one-half times 4 plus one-half times 1, the new value that comes in, and I'll write that as (4 plus 1) over 2. Okay, just to keep track of things: this update results in this, that update results in that. Now — I'm running out of space, but hopefully we can — on the third one, eta is one-third, so I take two-thirds times (4 plus 1) over 2, which is the previous value sitting in Q_pi-hat, plus one-third times 1, the new value, and that gives me (4 plus 1 plus 1) over 3. So you can see what's going on: at each step I have the sum of all the u's I've seen over the number of times the pair occurred, and eta is set so that each update cancels out the old count and adds one to the denominator, and it all works out so that at every time step what's actually sitting in Q_pi-hat is just the plain average of all the numbers seen so far. This is just an algebraic trick to turn the original formulation, which is an average, into this formulation, which is taking a little bit of the old thing and adding a little bit of the new thing. I'm going to call this second interpretation the convex combination. There's a third interpretation, which you can think about in terms of stochastic gradient descent, and it's actually a simple algebraic manipulation.
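The convex-combination bookkeeping above is easy to check in code. This is a small sketch of my own, verifying that the incremental update with eta = 1/(1 + number of updates) reproduces the plain average:

```python
def incremental_average(utilities):
    """Convex-combination updates: q <- (1 - eta)*q + eta*u, with
    eta = 1 / (1 + number of previous updates).
    Equivalent to keeping a plain running average of the u's."""
    q, n = 0.0, 0
    for u in utilities:
        n += 1
        eta = 1.0 / n          # first update: eta = 1, so q is overwritten by u
        q = (1 - eta) * q + eta * u
    return q
```

Running it on the lecture's stream 4, 1, 1 gives (4 + 1 + 1) / 3, matching the board computation step by step.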
So look at this expression. You have 1 times Q_pi-hat — I'll pull it out and put it down here — and then minus eta times Q_pi-hat, which is this term, and then the eta times u, which I'll write as minus u inside the parentheses. If you do the algebra, you can see these two forms are equivalent. So what's the point of this? Where have you seen something like this before — maybe not this exact expression, but something like it? Any ideas? Yeah: stochastic gradient descent, in the context of the squared loss for linear regression. Remember, we had updates that all looked like (prediction minus target) — the residual — and that was used to update. So one way to interpret this is that it's implicitly doing stochastic gradient descent on an objective which is the squared loss between Q_pi-hat, the value you're trying to set, and u, the new piece of data you got. Think of regression: u is the y, the output, Q_pi-hat is the model trying to predict it, and you want them to be close to each other. So those are three views on basically one idea: averaging, or incremental updates. It will become clear later why I did this — it wasn't just to have fun. Okay, so now let's see an example of model-free Monte Carlo in action on the volcano game. Remember we have this volcanic example; I'm going to set the number of episodes to, say, 1,000, and see what happens. Okay. So what does this grid-like structure — a grid of triangles — denote? This, remember, is a state, (2, 1).
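The algebraic equivalence just claimed — convex combination versus gradient step on the squared loss — can be sanity-checked numerically. A tiny sketch of my own:

```python
def convex_update(q, u, eta):
    """(1 - eta)*q + eta*u: the convex-combination form."""
    return (1 - eta) * q + eta * u

def sgd_update(q, u, eta):
    """q - eta*(q - u): a gradient step on the squared loss (q - u)^2 / 2.
    The residual (q - u) plays the role of (prediction - target)."""
    return q - eta * (q - u)
```

Expanding q - eta*(q - u) gives (1 - eta)*q + eta*u, so the two updates agree for any q, u, eta — they're the same algorithm written two ways.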
What I'm doing here is dividing each cell into four pieces corresponding to the four actions: this triangle is (2, 1) north, this triangle is (2, 1) east, and so on. And the number in each triangle is the Q_pi value I'm estimating along the way. The policy I'm using is completely random — just move randomly — and I run this 1,000 times, and we see that the average utility is around minus 18, which is obviously not great. But this is an estimate of how well the random policy is doing, and as advertised, with a random policy you'd expect to fall into a volcano quite often. You can run this again and sometimes get slightly different results, but it's pretty stable around minus 19, minus 18. Any questions about this before we move on to different algorithms? Okay. So: with model-based Monte Carlo we're estimating the MDP; with model-free Monte Carlo we're just estimating the Q values of a particular policy, for now. So let's revisit what model-free Monte Carlo is doing. If you use the policy pi equals stay for the dice game, you might get a bunch of different trajectories. These are possible episodes, and each episode has a utility associated with it. What model-free Monte Carlo does is use these utilities to update Q_pi-hat. In particular, for this one you're saying: I'm in the in state and I take stay — what will happen? Well, in this case I got 16, and in this case I got 12. And notice there's quite a bit of variance. On average this does the right thing: by definition it's an unbiased estimate, so if you do this a million times and average, you'll get the right value, which is 12 in this case.
But the variance is there, so if you only do this a few times, you're not going to get 12; you might get something only loosely related. So how can we counteract this variance? The key idea behind what we're going to call bootstrapping is that we actually have some more information here: we have this Q_pi-hat that we're estimating along the way. This view says we're trying to estimate Q_pi by regressing it against the data we're seeing — but can we use Q_pi-hat itself to help reduce the variance? So the idea is this. I look at a case where I started in the in state, took stay, and got a 4. I'm going to take that 4, but for everything after that point I'm going to substitute in my current estimate, this 11. This is kind of weird, right? Normally I would just watch what happens — but what happens is random. On average it's right, but on any given rollout I might get, like, 24 or something. And the hope is that by using my current estimate — which isn't going to be exactly right, because if it were right I'd be done, but hopefully is somewhat right — I'll do better than using the raw rollout value. Yeah, question. You would update your current estimate at the end of each episode, correct? Yeah — so the question is whether you update the current estimate after each episode. Yes: for all of these algorithms, I haven't been explicit about it, but you see an episode, you update, then you get a new episode, and so on. Sometimes you can even update before you're done with the episode. Okay, so let me show you this algorithm.
This is a new algorithm called SARSA. Does anyone know why it's called SARSA? Right — if you look at the quantities involved, they spell s, a, r, s, a, and that's literally the reason. So what does this algorithm say? You're in a state s, you took action a, you got a reward, you ended up in state s prime, and then you took another action a prime. For every such quintuple you see, you perform this update. What is the update doing? It's the convex combination we saw before, where you take part of the old value and merge it with a new value. What's the new value here? It looks at just the immediate reward — not the full utility, just the immediate reward, which is this 4 here — and adds the discount (which is 1 for now) times your estimate at the next state-action pair. And remember what the estimate is trying to be: the expectation of the rewards you'll get in the future. So if this were actually Q_pi and not Q_pi-hat, this would be strictly better, because it would be purely reducing the variance. Of course it's not exactly right — there's bias, so it's 11, not 12 — but the hope is that it's not biased by too much. So these estimated values are what you update toward, rather than the raw rollout values. Any questions about what SARSA is doing before we move on? Maybe I'll write something helpful here: model-free Monte Carlo estimates Q_pi based on u, and SARSA still estimates Q_pi-hat, but based on r plus, essentially, Q_pi-hat. This isn't a valid expression on its own, but hopefully the symbols evoke the right memories. So let's discuss the differences.
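The SARSA update described here fits in one function. This is a sketch under my own conventions (Q stored as a dict keyed by (state, action), missing entries treated as 0):

```python
def sarsa_update(Q, s, a, r, s2, a2, eta=0.5, gamma=1.0):
    """SARSA: for each observed (s, a, r, s', a') quintuple, move Q(s, a)
    toward the bootstrapped target r + gamma * Q(s', a')."""
    target = r + gamma * Q.get((s2, a2), 0.0)
    Q[(s, a)] = (1 - eta) * Q.get((s, a), 0.0) + eta * target
```

On the lecture's numbers — current estimate 11 at the next chance node, immediate reward 4, gamma 1 — the target is 15, and with eta one-half the stored value moves from 11 to 13.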
Whenever people say "bootstrapping" in the context of reinforcement learning, this is what they mean: instead of using u as the prediction target, you use r plus Q_pi-hat. It's called bootstrapping because you're pulling yourself up by your bootstraps: you're trying to estimate Q_pi, you don't know Q_pi, but you're using your own estimate of Q_pi to estimate it. So: u is based on one path, whereas SARSA's target is based on the estimate, which is based on all your previous experience. Which means model-free Monte Carlo is unbiased, but SARSA is biased; Monte Carlo has large variance, while SARSA has smaller variance. And one consequence of the way the algorithm is set up: with model-free Monte Carlo you have to roll out the entire game — play the game, or the MDP, until you reach the terminal state — before you have your u and can update. Whereas SARSA, or any bootstrapping algorithm, can update immediately: all you need to see is a very local window of s, a, r, s, a, and then you can update, and that can happen anywhere. You don't have to wait until the very end to get a value. Okay, so just as a quick sanity check: which of the following algorithms allows you to estimate Q_opt — model-based Monte Carlo, model-free Monte Carlo, or SARSA? I'll give you maybe ten seconds to ponder this. How many of you need more time? Okay, let's get a report. I think I didn't reset it from last year, so this includes last year's participants. So: model-based Monte Carlo allows you to get Q_opt, because once you have the MDP, you can get whatever you want — you can get Q_opt. Model-free Monte Carlo estimates Q_pi; it doesn't estimate Q_opt. And SARSA also estimates Q_pi, not Q_opt.
All right. So that's kind of a problem. These algorithms are fine for estimating the value of a policy, but you really want the optimal policy. In fact, these can be used to improve the policy as well, via something called policy improvement, which I won't talk about: once you have the Q values, you can define a new policy based on them. But there's actually a more direct way to do this. Here's the mental framework you should have in your head. There are two values: Q_pi and Q_opt. For MDPs, we saw that policy evaluation gives you Q_pi and value iteration gives you Q_opt. Now we're doing reinforcement learning, and we've seen that model-free Monte Carlo and SARSA give you Q_pi. What we need is a new algorithm, called Q-learning, that gives you Q_opt. It's based on updating toward reward plus, kind of, Q_opt. This is going to be very similar to SARSA, differing — as you might guess — by essentially the same difference as between policy evaluation and value iteration. It's helpful to go back to the MDP recurrences. Even though the recurrences only apply when you know the MDP, for deriving reinforcement learning algorithms they can give you inspiration for the actual algorithm. So remember Q_opt: it considers all possible successors, weighting by probability the immediate reward plus the future returns. Q-learning is actually a really clever idea — it could also have been called SARS, I guess, though maybe you don't want to call it that — and it works as follows. It has the same form, a convex combination of the old value and a new value. So what is the new value?
If you look at the Q_opt recurrence, it sums over successors of reward plus V_opt. We can't sum over all successors, because in the reinforcement learning setting we only saw one particular successor — so let's just use that one. On that successor, we get the reward; r is a stand-in for the reward function. Then you have gamma times V_opt, and I'm going to replace V_opt with our estimate of it. What should that estimate be — what relates V_opt to Q_opt? Yeah: the max over actions a of Q_opt. Exactly. If you define V_opt to be the max over all possible actions of Q_opt at that state, then that's V_opt. Q is saying: I'm at a chance node — what's the optimal utility I can get, given that I took this action? Clearly the best thing to do at a state is to choose the action with the maximum Q value. Okay, so that's Q-learning; let's put it side by side with SARSA. These two are very similar. SARSA updates against r plus Q_pi-hat of (s prime, a prime), and Q-learning updates against r plus the max over a prime of Q_opt-hat. You can see that SARSA requires knowing what action you take next — a kind of one-step lookahead, the a prime that plugs in here — whereas with Q-learning it doesn't matter what a prime you actually took, because you just take the one that maximizes. So you can see why SARSA estimates the value of a policy: the a prime that shows up here is a function of the policy. And here I'm insulated from that, because I'm just taking the maximum over all actions. This is the same intuition as value iteration versus policy evaluation. I'll pause here. Any questions?
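Side by side with the SARSA update, the Q-learning update differs only in the target. A sketch with the same dict-of-(state, action) convention as before (my own illustration; `actions` is the list of actions available, an assumption for this sketch):

```python
def q_learning_update(Q, actions, s, a, r, s2, eta=0.5, gamma=1.0):
    """Q-learning: the target uses the BEST action at s', not the action
    actually taken next -- that's what makes it estimate Q_opt (off-policy)."""
    v_opt = max(Q.get((s2, a2), 0.0) for a2 in actions)  # estimate of V_opt(s')
    Q[(s, a)] = (1 - eta) * Q.get((s, a), 0.0) + eta * (r + gamma * v_opt)
```

Note that no a prime argument appears: the max over actions replaces the one-step lookahead that SARSA needs.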
Q-learning versus SARSA. So is Q-learning on-policy or off-policy? It's off-policy: I'm following whatever policy I'm following, and I get to estimate the value of the optimal policy, which is probably not the one I'm following — at least in the beginning. Okay, let's look at the example. Here's SARSA, run for 1,000 iterations. Like model-free Monte Carlo, it estimates that the average utility I'm getting is about minus 20, and in particular the values are all very negative, because this is Q_pi for the policy I'm following — the random policy. If I replace this with Q-learning, what happens? First, notice the average utility is still minus 19, because I actually haven't changed my exploration policy; I'm still exploring randomly. But notice the Q_opt values are all around 20. That's because the optimal policy — and here the slip probability is 0 — is just to go down here and get your 20. And it's kind of interesting: with Q-learning I'm blindly following the random policy, running off the cliff into the volcano all the time, but I'm learning something — I'm learning how to behave optimally even though I'm not behaving optimally. That's the hallmark of off-policy learning. Okay, any questions about these four algorithms? Model-based Monte Carlo estimates the MDP. Model-free Monte Carlo estimates the Q value of a given policy based on the actual returns you get — the actual sums of rewards. SARSA is bootstrapping: estimating the same thing but with a one-step lookahead. And Q-learning is like SARSA except it estimates the optimal policy instead of a fixed policy pi. Yeah — is SARSA on-policy or off-policy? SARSA is on-policy, because I'm estimating Q_pi.
All right. Now let's talk about encountering the unknown. So those are the algorithms: at this point, if I hand you a fixed policy and some data, you can estimate all these quantities. But now there's the question of exploration, which we saw was really important — because if you never even see some of the states, how can you possibly act optimally? So which exploration policy should you use? Here are two extremes. The first extreme: imagine we're doing Q-learning, so we have an estimate of Q_opt — not the true Q_opt, but an estimate. The naive thing is to use that estimate, figure out which action looks best, and always do that action. What happens when you do this is that you don't do very well. Why not? Because initially you explore randomly, and soon you find the 2. And once you've found that 2, you say: well, 2 is better than 0, 0, 0, so I'm just going to keep going down to the 2 — which is all exploitation, no exploration. You never realize there's all this other stuff over here. In the other direction, we have no exploitation, all exploration. Here you have the opposite setup: I'm running Q-learning, and as we saw before, I'm actually able to estimate the Q_opt values, so I learn a lot. But the average utility — the actual utility I'm getting by playing the game — is pretty bad; in particular, it's the utility you get from just moving randomly. So what you really want to do is balance exploration and exploitation. And just as an aside, or a commentary: I really feel reinforcement learning captures life pretty well.
Because in life, you don't know what's going on, you want to get rewards, you want to do well — but at the same time you have to learn about how the world works so you can improve your policy. Think about going to restaurants, or finding a better shortest path to school or work, or in research, when you're trying to figure out what problem to work on: do you work on the thing you know how to do that will definitely work, or do you try something new, in hopes of learning something, even though it maybe won't get you as high a reward? So hopefully reinforcement learning is a kind of metaphor for life. Okay, back to concrete stuff. Here's one way you can balance exploration and exploitation: the epsilon-greedy policy. This assumes you're doing something like Q-learning, so you have these Q_opt estimates, and the idea is that with probability 1 minus epsilon — where epsilon is, say, 0.1 — you exploit: you do the best action according to your current estimates. And once in a while, with probability epsilon, you do something random. This is actually not a bad policy for acting in life: once in a while, maybe you should just do something random and see what happens. So if you do this, what do you get? What I've done here is set epsilon to start at 1 — all exploration — then a third of the way through change it to 0.5, and two-thirds of the way through change it to 0. If I do this, I estimate the values really well, and I also get a utility that's pretty good — 32.
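The epsilon-greedy rule itself is a few lines. A minimal sketch, again using the dict-of-(state, action) Q convention from earlier (my notation, not the lecture's code):

```python
import random

def epsilon_greedy(Q, actions, s, epsilon):
    """With probability epsilon, explore (uniformly random action);
    otherwise exploit the action with the highest current Q estimate."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q.get((s, a), 0.0))
```

Setting epsilon to 0 recovers the all-exploitation extreme, epsilon to 1 the all-exploration extreme, and the lecture's schedule (1, then 0.5, then 0) is just a sequence of calls with a decaying epsilon.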
And this is also something that happens as you get older: you tend to explore less and exploit more. It just happens. Okay. So that was exploration. Let me put some stuff on the board here — do I need this anymore? Maybe not. So, covering the unknown: we talked about exploration via epsilon-greedy. There are other ways to do this; epsilon-greedy is just about the simplest thing, and it actually works remarkably well, even in real systems. The other problem I'm going to talk about is generalization. Remember, with exploration, if you never see a particular state, you don't know what to do in it. If you think about it for a moment, that's kind of unreasonable — in life you're never going to be in the exact same situation twice, and yet we need to be able to act properly. The general problem is that the state space you might deal with in a real-world situation is enormous, and there's no way you're going to track down every possible state. This state space here is actually not that enormous, but it's the biggest state space I could draw on the screen, and you can see the average utility is pretty bad here. So what can we do about it? Let's talk about large state spaces. This is where the third interpretation of model-free Monte Carlo comes in handy. Let's take a look at Q-learning. In the context of SGD, it looks like this: a kind of gradient step where you take the old value and subtract eta times something that looks like a gradient — the residual here.
One thing to note is that under the formulations of Q-learning I've talked about so far, this is what we'd call rote learning — which, if we were back two weeks ago, we'd already say is kind of ridiculous, because it's not really learning or generalizing at all. Right now, for every single state and action I have a value; a different state and action gets a completely different value, with no sharing of information. And naturally, if I do that, I can't generalize between states and actions. So here's the key idea that will let us overcome this. It's called function approximation in the context of reinforcement learning; in normal machine learning, it's just called machine learning. The way it works is this: we define Q_opt(s, a) not as a lookup table, but as a function depending on some parameters w. I'll define it as w dot phi(s, a). So I define a feature vector, very similar to how we did it in the machine learning section, except instead of x we have (s, a); the weights work the same way. What kind of features might you have? You might have, for example, features on actions — indicator features that say: maybe it's better to go east than to go west; maybe it's good to be in the fifth row, or in the sixth column, and things like that. So you have a smaller set of features, and you use them to generalize across all the different states you might see.
With features, the algorithm looks the same as before, except now it really looks like the machine learning lectures: you take your weight vector and do an update of the residual times the feature vector. How many of you does this look familiar to from linear regression? All right. So, to contrast: before, we were just updating the Q_opt values — the residual was exactly the same, and there was nothing over here. Now we're updating not the Q values but the weights; the residual is the same, and the thing that connects the Q values to the weights is the feature vector. As a sanity check on dimensions: w is a vector, the residual is a scalar, and phi(s, a) is a vector with the same dimensionality as w. And if you want to derive this, you can think about the implied objective function as simply linear regression: you have a model trying to predict a value from an input (s, a) — so (s, a) is like x, and the target is like the y you're trying to predict — and you're trying to make the prediction close to the target. Yeah, question: is the eta the same as before? Good question — what is this eta now? When we first started talking about these algorithms, eta was supposed to be 1 over the number of updates and so on. But once you're in SGD form like this, it just behaves as a step size, and you can tune it to your heart's content. All right, that's all I'll say about these two challenges. One is: how do you do exploration?
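The function-approximation update just described can be sketched with features stored as a sparse dict mapping feature name to value (my own representation; the feature names below, like "action=east", are hypothetical examples of the indicator features mentioned):

```python
def q_value(w, phi):
    """Linear function approximation: Q(s, a; w) = w . phi(s, a)."""
    return sum(w.get(f, 0.0) * v for f, v in phi.items())

def approx_q_update(w, phi, target, eta):
    """Update the WEIGHTS, not a table entry:
    w <- w - eta * (prediction - target) * phi(s, a).
    The residual is a scalar; phi and w are vectors of the same dimension."""
    residual = q_value(w, phi) - target
    for f, v in phi.items():
        w[f] = w.get(f, 0.0) - eta * residual * v
```

Because every (s, a) pair sharing a feature shares the corresponding weight, an update at one state now moves the predictions at states you've never visited — which is exactly the generalization the lookup table couldn't give.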
You can use epsilon-greedy, which lets you balance exploration with exploitation. The second is that for large state spaces, epsilon-greedy alone isn't going to cut it, because you won't see all the states even if you try really hard, and you need something like function approximation to tell you about states you fundamentally haven't seen before. Okay, summary so far: we're in an online setting — that's the game of reinforcement learning: you have to learn and take actions in the real world. One of the key challenges is the exploration-exploitation trade-off. We saw four algorithms, and there are two key ideas. One is Monte Carlo: from data alone, you can basically use averages to estimate the quantities you care about — for example, transitions, rewards, and Q values. The second key idea is bootstrapping, which shows up in SARSA and Q-learning: you update toward a target that depends on your own estimate of what you're trying to predict, not just the raw data you see. Okay. Now let me step back a little and put reinforcement learning in the context of some other things. Two things happened when we went from binary classification, two weeks ago, to reinforcement learning now, and it's worth decoupling them. One is state and one is feedback. Partial feedback means you can only learn about actions you take — this is kind of obvious in reinforcement learning: if you never quit in this game, you never know how much money you'd get. And the notion of state means that new rewards depend on your previous actions: if you're going through a volcano, you're in a different situation depending on where you are on the map.
Um, and there's actually kind of- so, so this is kind of- you can draw a two-by-two grid where you go from supervised learning, which is stateless and full feedback. So there is no state, every iteration you just get a new example, ah, and there's no dependency, in terms of prediction, on the previous examples. Um, and full feedback because in supervised learning, you're told which is the correct label. Even if there might be 1,000 labels, for example in image classification, you're just told which ones are the correct label. Ah, and now in reinforcement learning, both of those are made harder. There are two other interesting points. So what is called multi-armed bandits is kind of a- you can think about as a warm up to reinforcement learning where there's partial feedback, but there's no state, which makes it easier. And you can also have full feedback but with states. So, structured prediction. For example in machine translation, you're told what the translation output should be, but clearly the actions depend on previous actions because, you know, you can't just translate words in isolation essentially. Um, okay. So one of the things I'll just mention very briefly is, you know, deep reinforcement learning has been very popular in recent years. So reinforcement learning, there was kind of a lot of interest in the kind of '90s where a lot of the algorithms and, ah, theory were kind of developed. And then there was a period where kind of not as much happened, and since I guess 2013, there has been a revival of reinforcement learning research. A lot of it's due to, I guess, DeepMind, where they published a paper showing how they can use reinforcement learning to play Atari from raw pixels. So this will be talked about more in a section this Friday. But the basic idea of deep reinforcement learning, just to kind of demystify things, is that you are using a neural network for Q_opt.
Essentially that's what it is. And there's also a lot of tricks to make this kind of work, which are necessary when you're dealing with enormous state spaces. So one of the things that's different about deep reinforcement learning is that people are much more ambitious about handling problems where the state spaces are kinda enormous. So for this, the state is just the, you know, the pixels, right, so there's, you know, a huge number of pixels, whereas before people were kind of in what is known as the tabular case, in which the number of states you can kind of enumerate. So, um, there's a lot of details here to care about. One general comment is that reinforcement learning is, it's really hard, right, because of the statefulness and also the delayed feedback. So just when you're maybe thinking about final projects, I mean, it's a really cool area, but don't underestimate how much work and compute you need to do. Some other things I won't have time to talk about: so far we've talked about methods that are trying to estimate the Q function. There's also a way to do without the Q function and just try to estimate the policy directly; those are called, um, policy gradient methods. There's also methods like actor-critic that try to combine these value-based methods and policy-based methods. These are used in DeepMind's AlphaGo and AlphaZero programs for crushing humans at Go. This will actually be deferred to next week's section because this is in the context of games. There's a bunch of other applications. You can fly helicopters, play backgammon; TD-Gammon was one of the early examples, in the early '90s, of the success stories of using reinforcement learning, in particular, you know, self-play. For non-games, reinforcement learning can be used to kind of do elevator scheduling and managing data centers and so on. Okay.
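To make the earlier discussion concrete, here is a minimal sketch of the two pieces described in this lecture: epsilon-greedy action selection and the Q-learning update with linear function approximation (the residual-times-feature-vector update). The function names, the callable `q`, the precomputed feature vector `phi`, and the discount `gamma` are all illustrative assumptions, not code from the course.

```python
import random
import numpy as np

def epsilon_greedy(state, actions, q, epsilon):
    """With probability epsilon, explore (pick a random action);
    otherwise exploit the action with the highest estimated Q value.
    q is assumed to be a callable q(state, action) -> float."""
    if random.random() < epsilon:
        return random.choice(actions)        # explore
    return max(actions, key=lambda a: q(state, a))  # exploit

def q_update(w, phi, r, gamma, max_next_q, eta):
    """One Q-learning step with linear function approximation:
    Q_opt(s, a; w) = w . phi(s, a), and the update is
    w <- w - eta * (prediction - target) * phi(s, a)."""
    prediction = np.dot(w, phi)        # current estimate Q_opt(s, a; w)
    target = r + gamma * max_next_q    # bootstrapped target
    residual = prediction - target
    return w - eta * residual * phi    # same shape as w, per the sanity check
```

Note that `w`, `phi`, and the update all have the same dimensionality, matching the sanity check in the lecture, and `eta` here is just a tunable step size.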
So that concludes this section on Markov decision processes, where the idea is we are playing against nature. So nature is kinda random but kind of neutral. Next time, we're going to play against an opponent where they're out to get us. So we'll see about that.
Stanford CS221: Artificial Intelligence: Principles and Techniques, Autumn 2019. Machine Learning 3: Generalization, K-means.

Homework 1, hard deadline, uh, section. Tomorrow, um, we're gonna go through the backpropagation example which I went through very briefly in the last lecture. Talk about diverse neighbors, which I did in one minute. And also, we're gonna talk about scikit-learn, which is- it is a really useful tool for doing machine learning which might be useful for your final projects, good to know. So, uh, please come to section. All right, let's, ah, do a little bit of review of where we are. So we've talked about, ah, we're talking about machine learning in particular supervised learning where we start with feature extraction where we take examples and convert them into a feature vector, um, which is more amenable for our learning algorithm. Um, we can take either linear predictors or neural networks which gives us scores, um, and the score is either defined via a simple dot product between the weight vector and the feature vector, or some sort of more fancy non-linear combination. At the end of the day, we have these model families that gives us you know, score functions which then can be used for classification or regression. We also talked about loss functions as a way to assess the quality of a particular predictor. So in, ah, linear classification, ah, we had the zero-one loss and the hinge loss as examples of loss functions that we, ah, might care about. Ah, the training loss is an average over the losses on individual examples. And to optimize all this, ah, we can use, ah, the stochastic gradient algorithm which takes an example x, y, and, ah, computes the gradient on that particular example and then just updates, ah, you know the weights based on that. Okay. So hopefully this should be all, you know, review. Okay. So now, I'm going to ask the following question. You know, let's be a little bit philosophical here.
So what is the true objective, you know, of machine learning? So how many of you think it is to minimize the error on the training set? Show of hands. No one? This is what we've been talking about, right? We've been talking about minimizing error on training sets. Okay, well, uh, maybe that's, um, maybe that's not right then. Um, what about minimizing, ah, training error with regularization? Because regularization is probably a good idea. How many of you think that's, ah, that's the goal? What about minimizing error on the test set? Okay, seems like it's closer, right? You know, the test sets. Test accuracies are things maybe you care about. Um, what about minimizing error on unseen future examples? Okay. So the majority of you think that's the right answer. What about, ah, learning about machines, and that's the true objective? Who doesn't want to learn about machines? That's actually the true objective. Now, um, so the correct answer is minimizing error on unseen future examples. So I think all of you have an intuition that we are doing some machine learning, we're learning on data, but what we really care about is how this predictor performs in the future. Because we're going to deploy this in a system, and it's going to be the future, it's going to be unseen. Um, but then, okay, so then how do we think about all these other things, you know, training error, regularization, test sets? So that's going to be something we'll come back to, um, later. Okay. So there's two topics today. I wanna talk about generalization, which is I think a pretty subtle but important thing to keep in mind when you're doing machine learning. Um, and then we're going to switch gears and talk about unsupervised learning, where we don't have labels, but we can still, um, do something. So we've been talking about training loss, right? You know, I've made a big deal about: you write down what you want, and then you optimize.
So the question is like, ah, is this training loss a good objective function? Um, well, let's take this literally. Suppose we really wanted to just minimize the training loss, what would we do? Well, here's an algorithm for you. So you just store your training examples, okay. And then you're going to define your predictor as follows. If you see that particular, ah, example in your training set, then you're just going to output, um, the output that you saw in the training set. And then otherwise, you're just going to segfault. And this is going to crash, [BACKGROUND] right? So this is great. It minimizes the training error perfectly. It gets zero loss, assuming your training examples don't have, ah, conflicts. But you know you're all laughing because this is clearly a bad idea. So somehow purely following this minimizing-training-error objective is, ah, not really the right thing. Um, so this is an example- a very extreme example of overfitting. So overfitting is this phenomenon that you see where you, ah, have some data, and usually the data has some noise, and you are trying to fit a predictor, but you're really trying too hard, right? So if you're fitting this, um, green squiggly line, you are fitting the data and getting zero training error, but you're kinda missing the big picture, which is, you know, this black curve. Or, in regression, some of you've probably seen examples where you have a bunch of points, usually with noise, and if you really try hard to fit the points you're gonna get zero error, but you're kinda missing this general, ah, trend. And overfitting can really kind of bite you if you're not careful. So let's try to formalize this a little bit more. How do we assess whether a predictor is good? Because if we can't measure it, we can't really, um, you know, optimize it. Okay. So, um, the key idea is that we really care about error on unseen future examples, okay?
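The memorize-everything "learner" described above can be written in a few lines. This is only a sketch of the joke from the lecture; here Python raises an error rather than literally segfaulting, and the function name is made up.

```python
def train_by_memorization(examples):
    """'Learn' by storing the training set verbatim: zero training error,
    but the predictor is useless on anything it hasn't literally seen."""
    table = dict(examples)  # (x, y) pairs, assuming no conflicting labels
    def predictor(x):
        if x in table:
            return table[x]                 # seen during training: recall it
        raise RuntimeError("segfault")      # unseen input: crash
    return predictor
```

Of course, as the lecture emphasizes, what we actually care about is error on unseen examples, which this predictor fails at completely.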
So this is great as, you know, uh, an aspiration to write down. But the question is how do we actually, you know, optimize this, right? Because it's the future and it's also unseen. Um, so we need to somehow, you know, get a handle on this. Um, so typically what people do is, ah, the next best thing, which is you gather a test set, which is supposed to be representative of all the types of things you would see. And then you guard it, uh, carefully, and make sure you don't touch it too much, right? Because, you know, what happens if you, ah, start looking at the test set, or, the worst case, you train on the test set, right? Then, you know, the test set being a surrogate for unseen future examples, um, just completely goes away, right? And even if you start looking at it and you're really trying to optimize it, um, you can get, you know, into this overfitting regime, right? So really be careful of that. And I want to emphasize that the test set is really a surrogate for what you actually care about. So, um, don't blindly just, you know, try to make test accuracy numbers go up at all costs, okay? Okay. So, um, but for now let's assume we have a test set, though, that we have to work with. So there's this kind of really peculiar thing about machine learning, which is this leap of faith, right. The training algorithm, um, is only operating on a training set. And then all of a sudden, you go to these unseen examples or the test set and you're expected to do well. So why would you expect that, you know, to happen? And as I alluded to on the first day of class, there are some kind of actually pretty deep mathematical reasons for why this might happen, um, but, you know, rather than get into the math, I just kinda wanna give you a maybe intuitive picture of how to think about, um, this gap. Okay. So remember, ah, we had this picture of all predictors.
So these are all the functions that you could possibly want in your wildest dreams. Um, and then when you define, um, a feature, ah, extractor or a neural net architecture or any sort of, um, you know, a- a structure. You're basically saying, "Hey I'm only interested in these sets of functions, not all functions." Okay. And then learning is, um, trying to find some element of the- the class of functions that you've, ah, set out, ah, to find. Okay. So there's a decomposition which is useful. So let's take out this point G. So G is going to be the best, um, function in this class. The best predictor that you can possibly get. So if some oracle came and set your neural net weights to something, how well could you do? Okay. So now there's two gaps here. One is approximation error. Approximation error is the difference between F star which is the- the true, ah, predictor. So this is the thing that always gets the right answer and G which is the best thing in your, ah, class. Okay. So this really measures how good is your hypothesis class. Remember last time we said that, we want hypothesis class to be expressive. If you only have linear, ah, functions and your data looks, ah, sinusoidal then that is not expressive enough to capture, um, the data. Okay. So the second part is estimation error. This is the difference between the best thing in your hypothesis class and the- the function you actually find. Right. And this measures how good is a learned predictor kind of relative to the potential of the hypothesis class. You define this hypothesis classes, um, here are things that I'm willing to, you know, consider but at the end of the day based on a finite amount of data, you- you can't get to G. You only can kind of estimate, um, you know, some- you do a learning and you get to some F hat. 
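The decomposition just described can be written out. Writing Err for the error, with F-hat the learned predictor, g the best predictor in the hypothesis class, and f-star the true predictor (notation assumed here, matching the verbal description):

```latex
\underbrace{\mathrm{Err}(\hat{f}) - \mathrm{Err}(f^\star)}_{\text{total gap}}
  \;=\; \underbrace{\mathrm{Err}(\hat{f}) - \mathrm{Err}(g)}_{\text{estimation error}}
  \;+\; \underbrace{\mathrm{Err}(g) - \mathrm{Err}(f^\star)}_{\text{approximation error}}
```

The identity holds simply because Err(g) is subtracted and added back, as the lecture notes next.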
So, um, in kind of more mathematical terms, if you look at the error of the thing that your learning algorithm actually returns minus the error of the best thing possible, which, you know, in many cases is zero, um, then this can be written as follows. So all I'm doing is subtracting error of G and adding error of G. So this is the same quantity as this, um, but then I can look at these two terms. So the estimation error is the difference in error between the thing that your learning algorithm produces and the best thing in the class G. And the approximation error is the difference between the error of G and the error of F star. Okay. So this is going to be useful as a way to kind of conceptualize, um, the different trade-offs. Right. So, you know, just to kind of explore this a little bit. Suppose I increase the hypothesis class size, right, so I add more features or I, you know, increase the dimension of my neural networks. Um, what happens? So the approximation error will go down. And why is that? Because we're taking a minimum over a larger set. So, um, G is always the minimum possible error over the set F. And if I make the set larger, I have just more possibilities of driving the error down. Okay. So the approximation error is gonna go down, um, but the estimation error is going to go up, right? As I make my hypothesis class more expressive. And that's because it's harder to estimate something more complex. So I'm leaving it kind of vague right now. There's a mathematical way to formalize this which, um, you can ask me about offline. Okay. So you can see there's kind of a tension here. Right, you really want to make your hypothesis class large so you can, um, drive down the approximation error, but you don't want to make it so large that it becomes impossible to, um, estimate. Okay. So now we have this kind of abstract framework. What are some kind of knobs we can tune?
How do we control the size of the hypothesis class? So we're gonna talk about essentially two classes of, um, ways to do this. So strategy one is, um, dimensionality. So remember for linear classifiers, a predictor is specified by a weight vector. So this is D numbers, right? And we can change D. We can make D smaller by removing features. We can make D larger by adding features. And, ah, pictorially, you can think about reducing D as reducing the dimensionality of your hypothesis class. So if you are in three dimensions, you have three numbers, three degrees of freedom, you have this kind of a ball, and if you remove one of the dimensions, now you have this, ah, ball or a circle in two dimensions. Okay. So concretely what this means is, ah, you can manually, uh, you know, this is a little bit heuristic, you can add features if they seem to be, you know, helping and remove features if they don't, ah, help. So you can, um, kind of modulate the dimensionality of your, ah, weight vector. Or there are also automatic feature selection methods, um, such as boosting or L1 regularization, um, which are outside the scope of this class. If you take a machine learning class you'll learn more about this, um, this stuff, but the main point is that by setting the number of features you have, you can, um, vary the expressive power of your hypothesis class. Okay. So the second strategy is, um, looking at the norm, or the length, of the weight vector. So this one is maybe a little bit less, um, obvious. Um, so again for linear, ah, predictors, the weight vector is just, ah, a d-dimensional vector and you look at how long this vector is. Um, pictorially it looks like this. So if you have, um, let's say, all the weight vectors (each W can be thought about as a point as well), this circle contains all the weight vectors up to a certain length.
And if by making this smaller, now you're considering, you know, a smaller number of weight vectors. Okay. So at that level it's, um, perhaps intuitive. Um, so what does this actually look like? Um, so let's suppose we're doing one-dimensional linear regression and here's the board. Um, and we're looking at, um, x y. Um, so remember what- and in one-dimension, um, we're- all we're looking at is, um, you know, W is just a single number. Right? And the number represents the slope of this line. So by saying, um, you know, let's draw some slopes here. Okay. Um, so by saying that, ah, the weight vector or the weight is a small magnitude, that's basically saying the slope is, ah, you know, smaller or closer to 0. So if you think about, um, you know, slope equals- let's say this is slope equals 1, so W equals 1. So anything- anything let's say, ah, less than 1 or greater than minus 1 is fair game. And now if you reduce this to half, now you're looking at a kinda a smaller, um, window here and if you keep on reducing it, now you're basically, um, converging to, you know, essentially very flat and constant functions. Okay. So you can understand this two ways. One is just that the total number of, um, possible weight vectors you're considering, it's just shrinking because you're putting more constraints. They have to be, you know, smaller. From this picture you can also think about it as, what you're really doing is, um, making the function, you know, smoother. Right? Because, um, a flat function is kinda the smoothest function. It doesn't kind of, you know, vary too much and, ah, a complicated function is one that can go, you know, very- jump up very steeply and, you know, for quadratic functions can also come down really quickly. So you get a kind of very, ah, wiggly functions. Those are- tend to be more complicated. Okay. Any questions about this so far? Yeah? Um, trying to not overfit. 
So like what if we had like latent structures within the data set that sensor tra- that says if you try to like not overfit we're really just kind of like this tricking ourselves like a perpendicular set of like distributions that we say, "Okay. This data must have like come from like something normal, it must have come from something reasonable." But saying that we're like not really capturing the full- the full like scope of our data sets. Um, I'm not sure, so let's see. So the- so the question is if there's a particular structure inside your data set, for example, if some, um, things are sparse or low rank or something, um, you know, how do you capture that with a regularization? Regularization. But you have like, perhaps not even just like price spikes like this. Like if you have a causal model inside, inside between your like parameters like how would you like, would a regularization like impede some of those relations? Oh yeah so um, so all of this is kind of very generic, right? You're not making any assumptions about the, the what the classifier is or the features is. So they're kinda like big cameras that you can just apply. So if you have models where you have more structural domain knowledge or if you, um, which we'll see. For example if you have you know, Bayesian networks later in the class then there's much more you can do. And this is just kind of you know, two techniques for as a kind of a generic way of controlling for overfitting. Yeah. Making sure I'm understanding correctly. This approach is actually creating constraints on each element in the vector W that the magnitude of it versus the other one was actually counting elements in a potential vector W? Yeah so um, so let's look at W here. So let's say you're in 3-dimensions. So W is W1, W2, W3. So the first method just says okay let's just kill some of these um, elements and make it smaller. 
This one is saying that, I mean formally, it's looking at the squared values of each of these and taking the square root; that's what the norm is. So it's saying that each of these should be, you know, small according to this particular metric. Yeah. [NOISE] Yeah, so that's what I'm going to get to. So this is just kind of giving you intuition in terms of hypothesis classes and why you want them to be small. How do you actually implement this? Um, you know, there's several ways to do this, but the most popular way is to add regularization. And by regularization what I mean is: take your original objective function, which is the train loss of W, and you just add this, um, penalty term. So Lambda is called the regularization strength. It's just a positive number, let's say 1. And this is the squared length of W. Okay, so what this is doing, by adding this, is saying, "Okay, optimizer, you should really try to make the train loss small, but you should also try to make this small as well." Okay. And there's, uh, if you study convex optimization, there's kind of this duality between, um, this, which is called the Lagrangian form, where you have a penalized objective where you add a penalty on the weight vector, and the constraint form, where you just say that I want to minimize training loss subject to the norm of W being less than some value. But this is the more kind of typical one that you're going to see in practice. Okay, so here's the objective function. Great. How do I optimize it? Is it the same W here as in the train loss? Yeah. So it's important that these be the same W and you're optimizing the sum. So the optimizer is going to make these trade-offs. If it says, "Oh okay, I can drive the training loss down, but if this is shooting up, then that's not good," it'll try to balance these two.
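A single update for this penalized objective might be sketched as follows. Assumption: the penalty is scaled as (lambda / 2) * ||w||^2 so that its gradient is exactly lambda * w, matching the "Lambda W" gradient mentioned in the lecture; `grad_train_loss` stands in for your own training-loss gradient code.

```python
import numpy as np

def regularized_step(w, grad_train_loss, lam, eta):
    """One gradient step on train_loss(w) + (lam / 2) * ||w||^2.
    The lam * w term shrinks the weights toward zero ("weight decay")."""
    return w - eta * (grad_train_loss + lam * w)
```

With `grad_train_loss` set to zero, each step just multiplies the weights by (1 - eta * lam), which is the shrinking-toward-zero behavior the lecture describes.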
Yeah, [NOISE] it's basically saying try to fit the data but not at the expense of, uh, having huge weight vectors. [NOISE] Yeah, so another way to say it is that, um, kind of think about Occam's razor. It's saying if there is a simple way to fit your data, then you should just do that. Instead of finding some really complicated weight vector that fits your data, prefer simple solutions. Okay. So once you have this objective, you know, we have a standard crank we can turn to turn this into an algorithm. You can just do gradient descent. Um, and, you know, if you just take the derivative of this, then you have this gradient. And then you also have Lambda W, which is the gradient of this term. So you can understand this as basically you're doing gradient descent as we were doing before, um, and now all you're doing is you're shrinking the weights towards 0 by Lambda. So Lambda is the regularization strength. If it's large, that means you're trying to really kind of push down on the magnitude of the weights. So the gradient optimizer is basically going to say, hey, I'm going to try to step in a direction that makes the training loss small, but then I'm also going to push the weights towards 0. Okay. In the neural nets literature this is also known as weight decay. And in optimization and statistics it's known as L2 regularization, because this is the Euclidean or 2-norm. Okay, so here is another strategy which intuitively gets at the same idea but is in some sense, you know, more crude. It's called early stopping. And the idea is very simple. You just stop early: instead of training for 100 iterations, you just train for 50. Okay. So why is this a good idea? Um, the intuition is that if you start with the weights at 0, so that's the smallest you can make the norm of W, right?
So every time you update on a training, you know, set, generally the norm goes up. You know, there's no guarantee that it will always go up, but generally this is what happens. So if you stop early, that means you are giving less of an opportunity for the norm to grow. So fewer updates translates to generally a lower norm. You can also make this formal mathematically, but the connection is not as tight as the explicit regularization from the previous slide. Okay, so the lesson here is, you know, try to minimize the training error but don't try, you know, too hard. Yeah, question? Does it depend on how we initialize the weights? The question is, does this depend on how we initialize the weights? Most of the time you're going to initialize the weights from, you know, some sort of weights which is kind of a baseline, either 0, or for neural nets maybe like random vectors around 0, but they're pretty small weights, and usually the weights grow from small to large. There's other cases where, if you think about pre-training, you have a pre-trained model, you start with some weights, and then you do gradient descent from that. Then you're saying basically don't go too far from your initialization. Yeah. Does this mean that we don't want to, like, focus on the train loss [inaudible]? Right. So the question is why aren't we focusing on minimizing the train loss, or why focus on W? It's always going to be a combination. So the optimizer is still trying to push down on the training loss by taking these gradient updates, right? Notice that the gradient with respect to the regularizer actually doesn't come in here. It kind of comes in implicitly through the fact that you are stopping early. But it's always kind of a balance between, uh, minimizing the training loss and also making sure your, um, classifier weights don't get too complicated. Yeah. How do you decide what value of lambda or T to set? Yeah.
So the question is how you decide the value of T here, and how you decide the value of lambda? [NOISE] So these are called hyperparameters, and I'll talk a little bit more about that later. Okay. So here's the kind of the general philosophy, uh, that you should have in machine learning. So you should try to minimize the training error, because really, that's the only thing you can do. That's your data, and that's, you know, you have your data there, but you should try to do so in a way that keeps your hypothesis small. So try to minimize the training set error, but don't try too hard. I guess it's the, it's the lesson here. Okay. So now, going back to the question earlier. If you notice through all these, um, my presentation, there's, there's all sorts of properties of the learning algorithm, you know, which features you have, which regularization parameter you have, the number of iterations, the step size for gradient descent. Um, these are all considered hyperparameters. So, so far, they're just magical values that are given to the learning algorithm, and the learning algorithm runs with them. But someone has to set them, and how do you set them? [inaudible]? Yeah. You can ask me, uh, I don't know the answer to that. [LAUGHTER] Um, okay. So here- here's an idea. So let's choose hyperparameters to minimize the training error. So how many of you think that's a good idea? Okay. Not too many. So why is this a bad idea? Yeah. You can over-fit, right? So suppose you took, uh, lambda and you say, "Hey, um, you know, let's choose the lambda that will minimize the training error." Okay. And the, the learning algorithm says "Well, okay, you know, I wanted to make this stat go down. What is this doing in the way? Let's just set lambda to 0, and then I don't have to worry about this." So it's kind of, um, you know, cheating in a way. 
And also, early stopping would say like, don't stop, just keep on going because you're always going to drive the training error lower and lower. Okay. So that's not good. So how about, um, choosing hyperparameters to minimize the test error? How many of you say, "Yeah, it's a good idea"? Yeah. Not, not so good, it turns out. Um, so why? And this is again stressing the point that the test error is not the thing you care about. Because what happens when you look at that- uh, we, we try to use a test set, then it becomes an unreliable estimate of the actual unseen error. Because if you're tuning hyperparameters on the test set, that means that, um, it's no longer- it becomes less and less unseen and less future. Yeah. [inaudible]. Yeah. So we could do cross-validation which I'll describe in a second. Okay? So I want to emphasize this point. When you're doing your final project, you have your test set, you have it sitting there, and, uh, you should not be, you know, fiddling with it too much or else, um, it becomes less reliable. Okay. So you can't use the test set, so what do you do? So here's the idea behind, uh, a validation set, it's that you take your training set, and you sacrifice some amount of it, maybe it's, you know, 10% maybe 20%, and you use it to estimate the test error. So this is a validation set, right? The test set is, you know, off to the side, it's locked in a safe, uh, you're not gonna touch it. And then, um, you're just gonna tune hyperparameters on the validation set, and use that to guide your model development. So, the, um, the proportion itself is not a hyperparameter? The proportion itself, uh, [LAUGHTER] is a hyper, hyperparameter. You know, I- us- yeah, you know, I usually don't tune that. I mean, usually, it's- how you choose it is, um, kind of this balance between you want the validation set to be large enough so it gives you reliable estimates, but you also want to use most of your data for training. Yeah. 
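The hold-out idea just described, carving a validation set out of the training data while the test set stays locked away, might look like the following sketch (the 20% fraction and the fixed seed are just common choices, not prescriptions from the lecture):

```python
import random

def split_train_validation(examples, val_fraction=0.2, seed=0):
    """Shuffle and split: most data stays for training, a slice is held
    out for estimating test error and tuning hyperparameters."""
    shuffled = list(examples)
    random.Random(seed).shuffle(shuffled)
    n_val = int(len(shuffled) * val_fraction)
    return shuffled[n_val:], shuffled[:n_val]  # (train, validation)
```

Shuffling before splitting matters so the validation slice is representative rather than, say, the last examples collected.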
How do you choose like lambda and like the other like T? How do we choose those hyperparameters? Yeah. So how do you choose the hyperparameters? Um, so the, the answer is you try a particular value, so, so you- for example, try let's say lambda equals, um, 0.01 and 0.11, and then you run your algorithm and then you look at your validation error, and then you just choose the one that has the lowest. Yeah. It's pretty crude but [NOISE] yeah. [inaudible] and I got a hyperparameter without just doing like a, like a, like a search, like try this one then try this one, then try this one. Yeah. So how- is there a better way to search for hyperparameters? Um, you could do, uh, your, er, grid search generally is fine, random sampling is fine. There's fancier things based on Bayesian Optimization which might give you some benefits but it's actually the jury's out on that and they're more complicated. Um, there's also you can use better, um, learning algorithms which are less sensitive to the step size. So you don't have to nail it like, "Oh, 0.1 works but 0.11 doesn't." So you don't- you don't want that. But in all of the high-level answer is that there's no, um, real kind of principled way of like here's a formula that lambda equals and you just evaluate that formula, and you're done, um, because there's this is, you know, the kind of the- uh, I don't know, the dirty side of machine learning, there's always this tuning that needs to happen to get your, you know, good results. Um, yeah. Question over there. [inaudible] is this process usually automated or is this manual? So the question is, uh, is this process automated? Increasingly, it becomes much more automated. So, um, it requires a fair amount of compute, right? Because usually, if you have a large data-set, even training one model might take a little while. And now, you're talking about, you know, training let's say 100 models. So it can be very expensive and there's things that you can do to make it, uh, faster. 
But I mean, in general, I would advise that you don't hyperparameter tune kind of blindly, especially when you're kind of learning the ropes. I think doing it kind of manually and getting intuition for what, uh, step size, um, works for what kind of algorithm is still valuable to have. And then once you kind of get the hang of it, then maybe you can automate. But I wouldn't try to automate too, you know, early. Yeah? If small changes of hyperparameters lead to big changes in prediction accuracy, is that considered [inaudible]? Yeah. So your question is, if you change the hyperparameters a little bit and that causes your, um, training or model performance to change quite a bit, does that mean your model's not robust? Uh, yeah, it means your model is probably not as robust. And sometimes, you actually don't change the hyperparameters at all, and you still get varying, you know, model performances. Um, so, you know, you should always check that first because there could be just inherent randomness, especially if you're doing neural networks that could get stuck in local optima; there's all sorts of, um, you know, things that can happen. Okay. Final question now so we can move on. So how do we find the optimal hyperparameter, is it [inaudible]? Uh, so how do you choose, uh, an optimal hyperparameter? So you basically have like a for loop that says for lambda in, you know, 0.1, 0.011, whatever values, for t equals, uh, you know, something. Um, you train on all these training examples minus the validation set, and then you test the model on the validation set, you get a number, and you just use, uh, whichever setting gives you the lowest number. [inaudible]? I'm sorry? Do we have to know the exact numbers, or [inaudible]? Yeah. Usually, you just have to be in the ballpark. You don't have to get like 99 versus 100. The thing I would advise is, like, you know, first figure out what kind of orders of magnitude.
Because if it really matters, like being down to a precise number, then, um, you probably have other things to worry about. Okay. Let's move on. So what I'm gonna do now is go through a kind of a sample, uh, problem, right? Because I think the theory of machine learning and the practice of it are actually kind of quite different in terms of the types of things that you have to think about. Um, so here's a simplified named entity recognition problem. So named entity recognition is this popular task in NLP where you're trying to find names of, uh, people and locations and, um, organizations. So the input is a string, um, which has a potential named entity with, uh, the left and right context words. Okay. And the goal is to predict whether, um, this x contains, you know, a person, um, which is plus 1, or not. Okay. So here's the recipe for success. Um, when you're doing your final project or something, um, you get a data set, um, if it hasn't already been split, split it into train, validation, and test, and lock the test set away. And then, first, I would try to look at the data to get some, you know, intuition. Always remember, you want to make sure that you understand your data. Don't just immediately start coding up the most fancy algorithm you can think of. Um, and then you repeat. You implement some, you know, feature, maybe change the architecture of your network, um, and then you set some hyperparameters and you run the learning algorithm, and then you look at, uh, the training error and validation error rates, um, to see, you know, how they're doing, if you're underfitting or overfitting. Um, in some cases, you can look at the weights for linear classifiers; um, for neural nets it might be a little bit harder. And then I would recommend: look at the predictions of your model.
I always try to log as much information as I can, so that you can go back and understand what the model is, you know, trying to do. And then you brainstorm some improvements, and you kind of do this until, uh, you either are happy or you run out of time, and then you run it on that final test set and you get your final error rates, which you put in your, uh, report. Okay? So let's go through an example of what this might, uh, look like. Um, so this is going to be based on the code base for the sentiment homework. Um, so, okay, so here's where we're starting. We're reading, uh, a training set. Let's look at this training set. So there are, you know, 7,000 lines here. Each line contains the label, which is minus 1 or plus 1, along with the input, which is going to be, uh, you know, remember, the left context, the actual entity and the right context. Okay? All right. So you also have a development or validation set. Um, and what this code does is, it's gonna learn a predictor, which, uh, takes the training set and a feature extractor which we're gonna fill out. Um, and then it's gonna output both the weights and, um, some error analysis which you can use to look at the predictions. And finally, there's this test which I'm gonna not do for now. Okay. So, um, so the first thing is, uh, let's define this feature extractor. So this feature extractor is, uh, Phi of x. And we're gonna use the sparse, uh, you know, map representation of features. So there's this really nice Python data structure called defaultdict. This is kind of like saying, you have, uh, you know, a map, but if you access an element that isn't in there, then it returns zero. Um, okay. So Phi equals that, and then you return Phi. Okay. So this is the simplest feature vector you can come up with. Um, the dimensionality is zero because you have no features. Okay.
So- but, you know, we can run this and see how we do. Okay. So let's run this. Um, okay. So over a number of iterations, um, you can see that learning isn't doing anything, because there are no weights that it's updating. Okay. So- but, you know, it doesn't crash, uh, which is good. Um, okay. So I'm getting 72%, uh, error, which is, you know, pretty bad, but I haven't really done anything. So, um, that's to be expected. Okay. Where did my window go? Okay. So now, let's, um, start defining some, you know, features. Okay. So remember, what is x? X is something like, uh, um, took Mauritius into, right? So there's this entity, with the left and right. So let's break this up. So I'm going to say tokens equals x.split. So that's gonna give me a bunch of tokens, and then I'm going to define left, entity, right equals. So token zero is the left, that's gonna be took. Um, tokens 1 through minus 1 is gonna be everything until the last token, and then tokens minus 1 is the last one. Okay. So now, I can define, um, a feature template. So remember, a nice way to go about it is to define a feature template. So I can just say entity is, um, blank. Um, that's how I would have written it as a feature template. In code, um, this is actually pretty, you know, transparent. It's saying, I'm defining a feature which is going to be one, um, for this, uh, you know, feature template. So entity is gonna be some value, I plug it in, I get a particular feature name. And I'm gonna set that feature name to have a feature value of 1. Okay. So let's, uh, run this. Okay, so, um, let's go over here, run it. Uh, oops. Um, so entity is, uh, a list. So I'm just gonna turn it into a string. [NOISE] Okay. So now, what happened? So the, um, the training error is, uh, pretty low, right? I'm basically fitting the training set pretty well.
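The extractor built so far might look like this (a sketch; the actual homework code differs in its details):

```python
from collections import defaultdict

def feature_extractor(x):
    """x is a string like 'took Mauritius into': a left context word,
    the entity tokens, and a right context word."""
    tokens = x.split()
    left, entity, right = tokens[0], tokens[1:-1], tokens[-1]
    phi = defaultdict(float)  # any feature not set below defaults to 0
    # Feature template "entity is ___": one indicator per entity string.
    phi['entity is ' + ' '.join(entity)] = 1
    return phi
```

For example, `feature_extractor('took Mauritius into')` produces the sparse vector `{'entity is Mauritius': 1}`.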
But, you know, notice, I don't care about the training error, I care about the, uh, test error. So just one note, it says test here but it's really the validation; um, I should probably change that. Um, it's just whatever non-training set you passed in. Okay. So this is still a 20% error, which is not great. Okay so, uh, at this point, remember, I wanna go back and look at, um, you know, get some intuition for what's going on here. So let's look at the weights. Okay. So this is the weight vector that's learned. So for this feature, the weight is 1, and all of these are 1. And this, you know, corresponds to the names, um, the people names, that have been seen at training time. Because whenever I see a person name then I'm going to, um, you know, give that feature a 1, so I can get that training example right. And if you look at the bottom, these are the entities which, uh, are not people names. Okay. So this is a sanity check that it's doing what it's, um, you know, supposed to do. Um, so the nice thing with these kind of really interpretable features is that you can almost compute what the weight should be in your head. Yeah. [inaudible] one feature for every, almost every example that you learn? Yeah. Okay. Yeah. So I have essentially one feature for every entity, which is almost, you know, the number of examples. [OVERLAPPING] Most of them are unique. Yeah. So there's 3,900 features here. [inaudible] [NOISE]. Uh, so we're gonna change that. But, um, we're not done yet. Okay so, okay, so the other thing we wanna look at is, um, the error analysis. Okay. So this shows you- here is an example, Eduardo Romero. Um, the ground truth is positive but we predicted minus 1. And why do we predict minus 1? It's because this feature, uh, has weight 0. And why does it have weight 0? Because we never saw this name at training time. Okay?
Um, we did get some right; we saw Senate at training time and we rightly, uh, predicted that was minus 1. Okay. But, you know, you look at these errors and you say, "Okay. Well, you know, maybe we should add more features." Okay? So if you look- remember this, um, you know, example here. Maybe the context helps, right? Because if you have governor, blank, then you probably know it's a person, because only people can, you know, be governors. Uh, so let's add a feature. So I'm gonna add a feature which is, uh, left is left. [NOISE] And for symmetry, I'll just add right is right. Okay. So this defines some indicator features on, you know, the context. So in this case, it will be 'took' and 'into'. [NOISE] Okay. So now, I have three feature templates. Let's go and train this model. Um, and now I'm down to just, uh, 11% error. Okay. So I'm making some progress. Um, oops, um, let's look at the error analysis. Okay? So now, I'm getting this correct. Um, and let's look at what else I am getting wrong. So Simitis blamed, um, you know, Felix Mantilla. And, you know, again, it hasn't seen, um, this exact- actually, maybe it, uh, did see this string before, but it still got it wrong. Um, uh, you know, I think there's kind of a general intuition, though, that, well, if you have, you know, Felix, um, you know, even if you've never seen Felix Mantilla- if you see Felix something, you know, chances are it probably is a person. Um, not always, but, ah, as we noted before, features are not meant to be, like, deterministic rules. They're just pieces of information which are useful. So let's go over here, and we want to define, let's say, a feature for every, ah, possible word that's in the entity. So, word in entity. Remember, entity is a list of tokens which occur between left and right. And I'm gonna say entity contains word. Okay? So now let's run this again, and now I'm down to 6% error, which is, you know, a lot better.
Um, if you look at the error analysis, um, so I think, with the Felix example, now I get this right. Um, and, you know, what else can I do? Um, so, you know, the kind of general strategy I'm, ah, following here, um, which is not always necessarily the right one, is you start with kind of very, uh, very specific features and then you try to kind of generalize, you know, as you go. Um, so how can I generalize this more, right? So if you look at, um, a word, er, so Kurdistan, right? If your word ends in stan, um, then- I mean, maybe it's, ah, less likely to be a person. I actually don't know, but, you know, maybe like suffixes and prefixes, um, you know, are helpful too. So, um, I'm going to add features. Let's say entity contains prefix, and then I'm going to, let's say, just, you know, heuristically look at the first four characters, um, and suffix, the last four characters. Um, and then run this again and now I'm down to, you know, 4% error. Um, okay. I'm probably gonna, you know, stop right now. Um, at this point, you can, um, actually run it on your test set and we get, um, you know, 4% error as well. Yeah. [BACKGROUND] Oh, yeah. I guess, um, this was, um, all planned out so that the test error would go down. But actually, more often than not, you'll add a feature that you really, really think should help, but it doesn't help for whatever reason. [inaudible] in that case, would it at least certainly not get worse? Sorry, would it get worse, or would it just not get better [inaudible]? Yeah. Some of the time, it just doesn't move. Uh, that's probably the more common case, but sometimes it can go up, if you add a really, you know, bad feature or something. [inaudible] doesn't the algorithm consider this at all? So the more features you add, generally the training error will go down, right?
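Putting all of the feature templates from this walkthrough together, the final extractor looks roughly like this (a sketch, with the prefix/suffix length of 4 taken from the lecture's heuristic):

```python
from collections import defaultdict

def feature_extractor(x):
    """All templates: entity identity, left/right context, per-word
    indicators, and character prefixes/suffixes of each entity word."""
    tokens = x.split()
    left, entity, right = tokens[0], tokens[1:-1], tokens[-1]
    phi = defaultdict(float)
    phi['entity is ' + ' '.join(entity)] = 1   # very specific
    phi['left is ' + left] = 1                 # context features
    phi['right is ' + right] = 1
    for word in entity:                        # generalize word by word
        phi['entity contains ' + word] = 1
        phi['entity contains prefix ' + word[:4]] = 1
        phi['entity contains suffix ' + word[-4:]] = 1
    return phi
```

Each template is "just pieces of information": for `'took Mauritius into'` it fires indicators like `left is took`, `entity contains prefix Maur`, and `entity contains suffix tius`.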
So all the algorithm knows is, like, it's driving training error down; it doesn't know that. It doesn't, you know, generalize. Yeah. Okay. So this is definitely the happy path. I think when you go and actually do machine learning, it's going to be more often than not, ah, the test error will not go down. So don't get too frustrated. Um, just keep on trying. Yeah. Are we expected to keep optimizing after like 5% error? [NOISE] Um, are you expected to optimize after 5% error? Um, it really depends. Um, you know, there's kind of a limit to every data set. So data sets have noise. So you should definitely not optimize below the noise, ah, limit. So one thing that you might imagine is, for example, um, you have an oracle, which, um, let's say is, uh, human agreement. Like, if your data set is annotated by humans and if humans can't even agree like 3% of the time, then you can't really do better than 3% of the time, as a general rule. There are exceptions, but- okay. Any other questions? Yeah. Uh, say you go through all your training, you're happy, and then you see the kinds of errors, um, say in the end you try it on the test set and you find it's not good, um, what do you do? Oh, yeah. What happens if you, ah, run it on the test set and it's not good? Um, that tells you something, that it's not good [LAUGHTER] at some level. So there's many things that could happen. One is that your test set might actually be different for whatever reason. Maybe it was collected on a different day and, um, your performance just doesn't hold up on that test set. Um, in that case, well, that's your test error, right? Remember, the test error, if you didn't look at it, is really an honest representation of how good this model is. And if it's not good, well, that's just the truth. Your model is not that good.
In some cases there is some bug, like something was misprocessed in a way and it wasn't really fair. So, you know, there are cases where you want to investigate, if it's like way off the mark. If I had gotten like 70% error, then maybe something was wrong and you would have to go investigate. But if it's in the ballpark, whatever it is, that's kind of what, um, you have to deal with, right? So what you wanna do also is make sure your validation error is kind of representative of your test error, so that you don't have, you know, surprises at the end of the day, right? I mean, I think it's fine, er, to run it on a test set, um, just to make sure that there's no catastrophic problems, but the kind of aggressive tuning on a test set is something that I would, you know, have, uh, warned against. Um, yeah. Is there any sort of standard as to how you should split the data into train and validation and testing? Generally, like, what percentage of your data you should allocate to each one, or just randomize it, or- Um, yeah. So the question is how do you split, uh, into train, validation and test? Um, it depends on how large your data set is. So generally people, um, you know, shuffle the data and then randomly split it into test, validation and train. Um, maybe let's say like 80%, 10%, 10%, just as a kind of, ah, rule of thumb. There are cases where you don't wanna do that. Um, there's cases where you, for example, wanna train on the past and test on the future, because that simulates the more realistic setting. Um, remember, the test set is meant to be as representative as possible of the situations that you would see in the real world. Yeah. These examples are labeled plus one and minus one. Do you have to do that manually? So the question is how that dataset was labeled. There's 7,000 of them. Um, I personally did not label this dataset.
[BACKGROUND] This is a standard dataset that, uh, someone labeled. Um, you know, sometimes these data-sets come from, um, you know, crowd workers, sometimes they come from, you know, experts. Um, yeah, it varies. Um, yeah, sometimes they come from grad students. It's actually a good exercise to go and label. I've labeled a lot of data also, in my life, um. [BACKGROUND] Yeah, exactly. Okay, let's go on. So switching gears now, let's talk about unsupervised learning. So, so far we've talked about supervised learning, where the training set contains input-output pairs. So you are given the input and this is the output that your predictor should output. Um, but, you know, uh, this is very, uh, timely. Um, we were just talking about how fully labeled data is very expensive to obtain, because, you know, 10,000 is actually not that much; you know, you can often have, you know, 100,000 or even a million examples, which, uh, you do not want to, um, be sitting down and annotating yourself. Um, so here's another possibility: unsupervised learning. In unsupervised learning, the training data only contains inputs, and unlabeled data is much cheaper to obtain in certain situations. So for example, if you're doing text classification, you have a lot of text out there. People write a lot on the Internet and you can easily download, you know, gigabytes of text, and all that is unlabeled. And, you know, if you can do something with it, that would be- you turn that into gold or something. Um, and also images, videos, um, and so on. Um, you know, it's not always possible to obtain unlabeled data. For example, if you have, you know, some device that is producing, uh, data and you only have one of that device that you built yourself, then, you know, you're not going to be able to get that much data. But we're gonna focus on a case where you do have basically an infinite amount of, uh, data and you want to do something with it. Um, so here's some examples I want to share with you.
This is a classic, uh, example from NLP that goes back to, um, you know, the early 90s. So the idea here is word clustering: as input, you will have a bunch of raw text, lots of news articles, and you put it into this algorithm, which I'm not going to describe, but we're going to look at the output. So what is this output? It returns a bunch of clusters where, for each cluster, it has a certain set of words associated with that cluster. Okay, and when you look at the clusters, they're pretty coherent. So roughly, the first cluster is days of the week, second cluster is months, um, third cluster is some sort of, uh, you know, materials, um, fourth cluster is, uh, synonyms of, like, you know, big, and so on. And, you know, the critical thing to note is that the input was just raw text. Nowhere did someone say, "Hey, these are days of the week, learn them and I'll go test you later." It's all unsupervised. So this is actually, um, you know, on a personal note, the kind of, uh, example, when I was doing a Masters, that got me into doing NLP research, because I was looking at this and I was like, "Wow, you can actually take unlabeled data and actually mine really interesting kind of signals, you know, out of it." Um, more recently, there's these, uh, things called word vectors, uh, which do something very similar: instead of clustering words, they embed words in, uh, into a vector space. So if you zoom in here, um, each word is associated with a particular position and, uh, words which are similar actually happen to be close by in vector space. So for example, these are country, um, names, these are, uh, pronouns, these are, you know, years, months, and so on. Okay? So this is kind of operating on a very similar principle.
Um, there's also contextualized word vectors like, um, Elmo and Bert, if you've, you know, heard of those things, which have been really taking the NLP community by storm more recently. On the vision side, you also have, uh, the ability to do unsupervised learning. Um, so this is an example from 2015 where you run, um, a clustering algorithm which is also jointly learning the features through this kind of deep neural network, and it can identify, um, different types of digits: zeros, and nines, and fours that look like nines, threes and- or fives that look like threes, and so on. So remember, this is not doing classification, right? You're not, um, uh, telling the algorithm, "Here are fives, here are twos." It's just looking at examples and finding the structure: "Oh, these are kind of the same thing and these are also the same thing." And sometimes, but not always, these clusters actually correspond to labels. Um, so here's another example of, um, ships, planes, and birds that look like planes. Um, so you can see this is not doing classification, it's just kind of looking at visual similarity, okay? All right, so the general idea behind unsupervised learning is that, you know, data has a lot of rich latent structure in it. And by that I mean there's kind of patterns in there. Um, and we want to develop methods that can discover this structure, you know, automatically. So there's multiple types of unsupervised learning. There's clustering, dimensionality reduction. Um, but we're going to focus on, you know, clustering- in particular, K-means clustering, for, um, this lecture. Okay. So let's get into it more formally. So the definition of clustering is as follows. I give you a set of points, x_1 through x_n, and you want to output an assignment of each point to a cluster, and the assignment variables are going to be z_1 through z_n.
So for every data point, I'm going to have a z_i that tells me which of the K clusters I'm in, 1 through K, okay? So pictorially this looks like this on the board here where I have, uh, let's say, uh, let's say I have seven points. Okay. And if I gave you only these seven points and I tell you, "Hey, I want you to cluster them into two clusters, " you know, intuitively, you can kind of see maybe there's a left cluster over here and a right cluster over here, okay? Um, but how do we formulate that kind of mathematically? So, um, here's the, K-means objective function. So this is the principle by which we're going to derive, um, clusterings, okay? So K-means says that, uh, every cluster, there's going to be two clusters, is going to be associated with a centroid, okay? So I'm gonna draw a centroid and, um, uh, a red square here. And the centroid is a point in the space along with the, uh, you know, the data points. And, um, I'm gonna th- this is kinda representing where the cluster is. And then I'm going to associate each of the points with a particular centroid. So I'm going to denote this by a blue arrow pointing from the point into the centroid, um, and, you know, these two quantities, um, are going to kind of represent the clustering. I have the locations of the clusterings in red and also the assignments of the points into the clusters in, in blue. Okay, so of course neither the red or the blue are known and that's something we're going to have to optimize. Okay, so, but first we have to define, um, what the optimization, uh, objective function is. Um, so intuitively, what do we want? We want each point, uh, Phi of X_i to be close to the centroid, right? For the centroid to be really representative like, of the points in that cluster, that centroid should be close to all the points in that cluster, okay? So this is captured by this objective function where I look at all the points. 
For every point, I measure the distance between that point and, um, the centroid that that point is associated with. So remember, z_i is a number between 1 and K. So that indexes which of the Mu, uh, Mu 1 or Mu 2, I'm talking about. I'm looking at the squared distance between those two: the centroid and the point. Yeah? How does each point get assigned to a centroid? Yeah, how does each point get assigned to a centroid? So that's going to be specified by the z's, which, um, are going to be optimized over. A priori, you don't know. Yeah? Don't we usually have a pretty good idea of how many labels it can support, I guess- How many clusters. -clusters it could be? Yeah, the question is do we know how many clusters there are? In general, no. So there are ways to select it. It's another hyperparameter. So it's something that you have to set before you run the k-means objective function. So when you're tuning, you try different numbers of clusters and see which one kind of works better. Okay, so we need to choose the centroids and the assignments jointly. So this- this hopefully is clear: you just want to find the assignments z and the centroids mu to make this number as small as possible. So how do you do this? Well, let's look at a simple one-dimensional example, and let's build up some intuition, okay? So we're in 1d now, and we have four points, and the points are going to be at 0, 2, 10, and 12, okay? So I have four points at these locations. Okay, I want to cluster, and intuitively you think, I want two clusters here. There's going to be two centroids. And suppose I know the centroids, okay? So someone told you magically that the centroids in this example are going to be at 1 and 11, okay? So someone told you that and now you have to figure out the assignments. Yeah, how would you do this? Let's assign this point, where should it go? You look at this distance, which is 1. You look at this distance, which is 11.
Which is smaller? One is smaller. So you say, "Okay, that's where I should go." Same with this point, 1 is smaller; for these, 11 is smaller. And that's it, okay? So mathematically, you can see it's comparing the distance from each point to each of the centroids and choosing the centroid which is closest, okay? And you can convince yourself that if the centroids were fixed, that's the way you would minimize the objective function. Because if you choose a centroid which is farther away, then you just get a larger value, and you want the value to be as small as possible, okay? I don't know why this is two. I think this should be one, right? Okay, so let's do it the other way now. Suppose I now have the assignments. So I know that these two should be in one cluster, and these two should be in a different cluster, cluster two. And now I have to place the centers. Where should I place them? Should I place it here? Should I place it here? Should I place it here? Where should I place it? And if you look at the slide here, what you're doing is you're saying, "Okay, for the first cluster, I know 2 and 0 are assigned to that cluster. And I know that the sum of the squared distances to this centroid mu is this, and I want this number to be as small as possible." Okay? And if you did the first homework, you know that whenever you have one of these kind of squared objectives, you should be averaging the points. So you can actually solve that in closed form, and given the assignments here, you know the center should be there, which is the average of 0 and 2. And for this cluster, you should average the two points here, and that should be at 11. Yeah. [inaudible]. Okay, so what's the difference between centroid and assignment? So when you're clustering, you have k clusters, so there's k centroids. So in this case there's two centroids. Those are the red.
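In code, the two steps on this 1D example look like this (recall the objective being minimized is Loss(z, mu) = sum_i (phi(x_i) - mu_{z_i})^2):

```python
# Worked 1D example from the board: four points at 0, 2, 10, 12.
points = [0.0, 2.0, 10.0, 12.0]
centroids = [1.0, 11.0]  # suppose the centroids are given

# Step 1: assign each point to the closest centroid.
assignments = [min(range(len(centroids)),
                   key=lambda k: (p - centroids[k]) ** 2)
               for p in points]

# Step 2: given the assignments, the best centroid for each cluster is
# the mean of its points (the closed-form minimizer of the squared loss).
new_centroids = [sum(p for p, z in zip(points, assignments) if z == k)
                 / sum(1 for z in assignments if z == k)
                 for k in range(len(centroids))]
```

The assignments come out as [0, 0, 1, 1], and the updated centroids are the averages 1.0 and 11.0, matching the picture on the slide.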
The assignments are the association between the points and the centroids. So you have n assignments. And these are the things that move. Is the k a hyperparameter or is that somehow [OVERLAPPING]. Yeah, so k here is a hyperparameter, which is the number of clusters, which you can tune. Okay, so here's a chicken and egg problem, right? If I knew the centroids, I could pretty easily come up with assignments. And if I knew the assignments, I could come up with the centroids. But I don't know either one. So how do I get started? So the key idea here is alternating minimization, which is this general idea in optimization, which is usually not a bad idea. And the principle is, well, you have a hard problem; maybe you can solve it by tackling two easy problems. So here's the k-means algorithm. So step one is you're given the centroids; now we're going to use the more general notation, mu 1 through mu k. And I want to figure out the assignments. So for every data point, I'm going to assign that data point to the cluster with the closest centroid. So here I'm looking at all the clusters, 1 through k, and I'm going to test how far that point is from that centroid, and I'm just going to take the smallest value, and that's going to be where I assign that point, okay? Step two, flip it around. You're given the cluster assignments now, Z_1 through Z_n. And now we're trying to find the best centroids. So what centroids should I pick? So now you go through each cluster 1 through k, and you're going to set the centroid of the kth cluster to the average of the points assigned to that cluster, right? So mathematically this looks like that. You just sum over all the points i which have been assigned to cluster k, and you basically add up all the feature vectors. And then you just divide by the number of things you summed over, okay? So putting it together, if you want to optimize this objective function, the K-means reconstruction loss:
First you initialize mu 1 through mu K randomly. There's many ways to do this. And then you just iterate: set the assignments given the centroids, and then set the centroids given the cluster assignments. Just alternate. Yeah. Yeah, this makes sense for, like, coordinates, for, like, images, where, like, if you read in a similar image by bytes it looks the same, but, like, words, where words that are spelled totally differently can have, like, the same, like, semantic meanings. How would you accurately map them to, like, the same location to cluster around, essentially? Yeah, so the question is, like, maybe for images distances in pixel space make kind of more sense. But if you have words, then- you shouldn't be looking at, like, the edit distance between, you know, the words, and two synonyms like big and large look very different, but they are somehow similar. So this is something that word vectors, you know, address, which we're not going to talk about. Basically you want to capture the representation of a word by its context. So the contexts in which big and large occur are going to be kind of similar. And you can construct these context vectors that give you a better representation. We can talk more offline. Yeah. [inaudible] are there cases where you can get stuck at, like, a local minimum, or are you guaranteed if you do it enough times [inaudible]? Yeah, you can get stuck, and I'll show you an example. Any other questions about the general algorithm? Yeah. Is it unstable, in that, say, you get stuck, and then, like, you kind of [inaudible] multiple. Yeah, maybe I'll answer that. I'll show you an example. Why do you use a fixed number of iterations, and not some kind of criterion, like it doesn't change anymore, as the stopping condition? Yeah, so this is going up to a fixed number of iterations t. Typically, you would have some sort of- you would monitor this objective function.
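Putting steps 1 and 2 together, a bare-bones K-means looks like this (a sketch; a real implementation would also monitor the objective, as discussed):

```python
import random

def kmeans(points, K, num_iters=100, seed=0):
    """Alternating minimization: assign each point to the nearest
    centroid, then move each centroid to the mean of its points."""
    rng = random.Random(seed)
    centroids = [list(p) for p in rng.sample(points, K)]  # random init
    assignments = None
    for _ in range(num_iters):
        # Step 1: assign each point to the cluster with closest centroid.
        new_assignments = [
            min(range(K),
                key=lambda k: sum((a - b) ** 2
                                  for a, b in zip(p, centroids[k])))
            for p in points]
        if new_assignments == assignments:
            break  # if assignments don't change, the centroids won't either
        assignments = new_assignments
        # Step 2: move each centroid to the mean of its assigned points.
        for k in range(K):
            members = [p for p, z in zip(points, assignments) if z == k]
            if members:  # leave a centroid alone if its cluster is empty
                centroids[k] = [sum(coord) / len(members)
                                for coord in zip(*members)]
    return assignments, centroids
```

On the 1D example from the board (points 0, 2, 10, 12 embedded in 2D), this converges to centroids at 1 and 11 within a few iterations.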
And once it, you know, stops changing very much, then you just stop. Actually, the k-means algorithm is guaranteed to always converge to a local minimum. So why don't I just show you this demo, and I think it'll maybe make some things clear. Okay, so here I have a bunch of points. So this is a JavaScript demo. You can go and play around and change the points if you want. It's linked off the course website. And then I'm going to run K-means. Okay, so I initialize with these three centroids, and these regions are basically the points that would be assigned to that centroid. So this is a Voronoi diagram of these- these centroids. Okay, and this is the loss function which hopefully should be going down. Okay, so now I iterate- so iteration one, I'm going to assign the points to the clusters. So these get assigned to blue, this one gets assigned to red, these get assigned to green. And then the step two is going to be optimizing the centroids. So given all the blue points, I put the center smack in the middle of these blue points. And then same with green and red. Notice that now these points are in the red region. So if I reassign, then these become red, and then I can iterate, and then, you know, keep on going, and you can see that the algorithm, you know, eventually converges to a clustering, where these points are blue, these points are red, and these are green. And if you keep on running it, you're not going to make any progress, because if assignments don't change, then the cluster centers aren't going to change either. Okay. Um, so let me actually, you know, skip this since I'm- I was just gonna do it on the board but I think you kind of get the idea. Um, so let's talk about this local minima problem. So K-means is guaranteed to converge to a local minimum, um, but it's not guaranteed to find the global minimum.
So if you think about this as a toy visualization of the objective function, you know, by going downhill, we can get stuck here but it won't get to that point. So let's take an example with different random seeds. You can- let's say you initialize here. Okay, so now all the three centers are here and if I run this and I run this, now I get this other solution which is actually a lot worse. Remember the other one was 44 and this is 114, and that's where the algorithm converged and you're just stuck. So in practice, people typically try different initializations, run it from different random points and then just take the best. Um, there's also a particular way of initialization called K-means plus plus where you put down a point, and then you put down the next point as far away as possible, and so on. And then that kind of spreads out the centers. So they don't, kind of, inter- interfere with each other and that generally works pretty well. But still there's no necessary guarantee of converging to a global optimum. Okay, any questions about K-means? Yeah. [inaudible] How do you choose K? You guys love these hyper-parameter tuning questions. Uh, so, uh, one thing you can, kind of, draw is the following picture. Um, so K versus the loss that you get from K. And usually, if you have one cluster, the loss is gonna be very high and then at some point, it's, you know, going to go down and you generally, uh, you know, lop it off when it's, you know, not going down by very much. So you can monitor that curve. Another thing you can do is you have a validation set, um, and you can measure reconstruction error on the, you know, validation set and choose the minimum based on that, which is just another hyper-parameter that you can tune. Yeah. How is the training loss calculated [inaudible] How's the training loss calculated? Uh, so the training loss is this quantity.
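The "spread out the centers" initialization can be sketched like this. Note this is the deterministic farthest-point variant the lecture describes; k-means++ proper instead samples each new center with probability proportional to its squared distance from the existing centers:

```python
import numpy as np

def farthest_point_init(X, k, seed=0):
    """Spread-out initialization in the spirit of k-means++:
    each new center is the data point farthest from all centers so far.
    (k-means++ proper samples proportionally to squared distance instead.)"""
    rng = np.random.default_rng(seed)
    centers = [X[rng.integers(len(X))]]          # first center: random point
    for _ in range(k - 1):
        # squared distance from each point to its nearest chosen center
        d2 = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[int(d2.argmax())])      # farthest point becomes a center
    return np.array(centers)
```

You would then run the usual k-means iterations starting from these centers; there is still no guarantee of reaching the global optimum, but the centers are less likely to start out interfering with each other.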
Um, so you sum over all your points and then you look at the distance between that point and the assigned centroid and you square that and you just add all those numbers up. Okay. So to wrap up, um, oh actually I have- actually, I have more slides here. [LAUGHTER] So, um, unsupervised learning you're trying to leverage a lot of data and we can, kind of, get around this difficult optimization problem by, you know, doing this alternating minimization. So these will be quick. Um, so just to, kind of, summarize the learning section, we've talked about feature extraction. And I want you to think about the hypothesis class that's defined by a set of features. Um, prediction which boils down to kind of what kind of model you're looking at for classification and regression. Supervised learning, you have linear models and neural networks, and for clustering you have the K-means objective. Loss functions, which, you know, in many cases all you need to do is compute the gradient. Um, and then there's generalization which is what we talked about for the first half of this lecture which is really important to think about. You know, the training set, remember, is, kind of, only a surrogate for future examples. Um, so a lot of these ideas that we presented are actually quite old. So the idea of least squares, you know, the- for regression goes back to, you know, Gauss when he was, you know, solve- trying to solve some astronomy problem. Logistic regression was, you know, from statistics. In AI, there was actually some learning that was done even in the, you know, in the '50s for playing checkers. As I mentioned the first day of our class, there was a period where learning kinda fell out of favor but it came back with back-propagation and then much of the '90s was actually a lot more, kind of, rigorous treatment of optimization and formalization of when algorithms are guaranteed to converge, um, that- that happened in the '90s.
And then in the 2000s, we know that people looked at kind of structured prediction and, um, there was a revival of neural networks. Um, some things that we haven't covered here are, you know, feedback loops, right? So learning assumes kinda the static view where you take data. You train a model and then you go and generate predictions. But if you deploy the system in the real world, those predictions are actually gonna come around and become data. And those feedback loops can also cause problems that you might not be aware of if you're only thinking about, ah, here's- I'm doing my machine-learning thing. How can you build classifiers that don't discriminate? So, um, we, uh, often have classifiers where you're minimizing the average loss over the training set. So by- by a kind of construction, you're trying to drive down the losses of, you know, kind of common examples. But often you get these situations where minority groups actually get, you know, pretty high loss because they look different and almost look like outliers but you're not really able to fit them. But, um, the training loss doesn't kind of, you know, care. So there's other ways. Um, there are techniques like distributionally robust optimization that try to, um, you know, get around some of these issues. Um, there are also privacy concerns. How can you learn actually if you don't have access to an entire dataset? So there are some techniques based on randomization that can help you. And then interpretability, how can you understand what, you know, the algorithms are doing especially if you have a deep neural network. You've learned a model and there's, you know, work which I am happy to discuss with you offline. So the general- so we've concluded three lectures on machine learning. Um, but I wanted you, kind of, to think about learning in the most general way possible, which is that, you know, programs should improve with, you know, experience. Right.
So I think we've talked about, you know, linear classifiers and all these kind of nuts and bolts of basically reflex models. But in the next lectures, we're gonna see how learning can be used in state-based models and also, you know, variable-based models. Okay. With that, so that concludes. Um, next week, Dorsa will be giving the lecture on state-based models. |
Stanford_CS221_Artificial_Intelligence_Principles_and_Techniques_Autumn_2019 | Game_Playing_2_TD_Learning_Game_Theory_Stanford_CS221_Artificial_Intelligence_Autumn_2019.txt

Let's start, guys. Okay, so, uh, we're gonna continue talking about games today. Uh, just a quick announcement, the project proposals are due today. I think you all know that. Um, all right, let's co- Tomorrow. Tomorrow. You're right [LAUGHTER]. Tomorrow [LAUGHTER] Just checking. [LAUGHTER] Yeah. Today is not Thursday. Yeah. [LAUGHTER] Tomorrow. For a second, I thought it's Thursday. Um, all right, so let's talk about games. Uh, so we started talking about games last time. Uh, we formalized them. Uh, we talked about, uh, non- we talked about zero-sum two-player games that were turn-taking, right? And we talked about a bunch of different strategies to solve them, like the minimax strategy or the expectimax strategy. Uh, and today we wanna talk a little [NOISE] bit about learning in the setting of games. So what does learning mean? How do we learn those evaluation functions that we talked about? And then, er, towards the end of the lecture, we wanna talk a little [NOISE] bit about variations of the game- the games we have talked about. So, uh, how about if you have- how about the cases where we have simultaneous games or non-zero-sum games. So that's the, that's the plan for today. So I'm gonna start with a question that you're actually going to talk about it towards the end of the lecture, but it's a good motivation. So, uh, think [NOISE] about a setting where we have a simultaneous two-player zero-sum game. So it's a two-player zero-sum game similar to the games we talked about last time, but it is simultaneous. So you're not ta- ta- taking turns, you're playing at the same time. And an example of that is rock, paper, scissors. So can you still be optimal if you reveal your strategy? So let's say you're playing with someone. If you tell them what your strategy is, can you still be optimal?
That's the question. Yes. [inaudible] It's a small [NOISE] enough game space for- if they know exactly [NOISE] what you're going to play, [NOISE] you won't be successful if you- for a zero-sum real-time simultaneously being the larger scale, I think you could still be successful if that approach is like superior to the other approach taken. [NOISE] So it's not- so, so, so the answer was about the size of the game. So rock, paper, scissors being small versus, versus not being small. So, so the question is more of a motivating thing. We'll talk about this in a lot of details towards the end of the class. It's actually not the size that matters. It's the type of strategy that you play that matters, so just to give you an idea. But, like, the reason that we have put this I guess at, at the beginning of the lecture is intuitively when you think about this, you might say, "No. I'm not gonna tell you what my strategy is, right? Because if I say, I'm gonna play it, like, scissors, you'll know what to play." But th- this has an unintuitive answer that we are gonna talk about towards the end of the lecture. So just more of a motivating example. Don't think about it too hard. All right. So, so let's do a quick review of games. So, um, so last time we talked about having an agent and opponent playing against each other. So, uh, and we were playing for the agent, uh, and the agent was trying to maximize their utility. So they were trying to get this utility. The example we looked at was, uh, agent is going to pick bucket A, bucket B, or bucket C. And then the opponent is going to pick a number from these buckets. They can either pick minus 50 or 50, 1 or 3 or minus 5 or 15. And then if you want to maximize your, your utility as an agent, then you can potentially think that your opponent [NOISE] is trying to, trying to minimize your utility, and you can have this minimax game, kind of, playing against each other and, and, and based on that, uh, decide what to do. 
So we had this minimax tree and based on that, the utilities that are gonna pop up are minus 50, 1 and minus 5. So if your goal is to maximize your utility, you're gonna pick bucket B, the second bucket, because that's the best thing you can do, assuming your opponent is a minimizer. So, so that was kind of the setup that we started looking at. And the way we thought about, uh, solving this game by- was by writing a recurrence. So, so we had this value. This is V which was the value of a minimax, uh, at state S. And if you're at the utility, er, so if you're an- at an end state, we are gonna get utility of S, right? Like if you get to the end state, we get the utility because we get the utility only at the, at the very end of the game. And if the agent is playing, we- the recurrence is maximize V of the successor states. And if the opponent is playing, you wanna minimize the value of the successor states. And so that was the recurrence we started with, and, and we looked at games that were kind of large like the game of chess. And if you think about the game of chess, the branching factor is huge. The depth is really large. It's not practical to u- to do the recurrence. So we, we started talking about ways to- for speeding things up, and, and one way to speed things up was this idea of using an evaluation function. So do the recurrence but only do it until some depth. So don't go over the full tree. Just do it until some depth, and then after that, just call an evaluation function. And hopefully your evaluation function which is kind of this weak estimate of your value is going to work well and give you an idea of what to do next. Okay. So, so instead of the usual recurrence, what we did was we decided to add this D here, um, this D right here which is the depth that un- until which we are exploring. And then we decrease the value of depth, uh, after an agent and opponent plays. And then when depth is equal to 0, we just call an evaluation function. 
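The depth-limited recurrence just described can be sketched as follows. The `game` interface here (is_end, utility, player, actions, succ, eval) is my own hypothetical naming, not from the lecture:

```python
def value(game, s, d):
    """Depth-limited minimax: utility at end states, eval when depth runs out,
    otherwise the agent maximizes and the opponent minimizes. Depth decreases
    after both the agent and the opponent have moved."""
    if game.is_end(s):
        return game.utility(s)        # utility only arrives at the very end
    if d == 0:
        return game.eval(s)           # weak estimate of the true value
    succs = [game.succ(s, a) for a in game.actions(s)]
    if game.player(s) == 'agent':
        return max(value(game, sp, d) for sp in succs)
    else:
        return min(value(game, sp, d - 1) for sp in succs)
```

On the bucket example from last lecture (buckets A, B, C holding {-50, 50}, {1, 3}, {-5, 15}), this recurrence gives max(min(-50, 50), min(1, 3), min(-5, 15)) = 1, so the agent picks bucket B.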
So intuitively if you're playing chess, for example, you might think a few steps ahead, and when you think a few steps ahead, you might think about what the board looks like and kind of evaluate that based on the features that, that, that board has and based on that, you might, you might decide to take various actions. So similar type of idea. And then the question was, well, how are we gonna come up with this evaluation function? Like where is this evaluation function coming from? Uh, and, and then one idea that, that we talked about last time was it can be handcrafted. The designer can come in and sit down and figure out what is a good evaluation function. So in the chess example, you have this evaluation function that can depend on the number of pieces you have, the mobility of your pieces. Maybe the safety of your king, central control, all these various things that you might care about. So the difference between the number of queens that you have and your opponent's number of queens, these are things, these are features that you care about. And, and potentially, a designer can come in and say, "Well, I care about queens nine times more than I care about how many pawns I have." So, so the hand- like you can actually hand-design these things and, and write down these weights about how much you care about these features. Okay. So I'm using terminology from the learning lecture, right? I'm saying we have weights here and we have features here, and someone can come and just handcraft that. Okay. Well, one other thing we can do is instead of handcrafting it, we could actually try to learn this evaluation function. So, so we can still handcraft the features, right? We can still say, "Well, I care about the number of kings and queens and these sort of things that I have, but I don't know how much I care about them. And I actually wanna learn that evaluation function. Like what the weights should be." Okay.
So to do that, I can write my evaluation function, eval of S, as, as this V as a function of state parameterized by, by weights Ws. And, and my goal is to figure out what these Ws, what these weights are. And ideally I wanna learn that from some data. Okay. So, so we're gonna talk about how learning is applied to these game settings. And specifically the way we are using learning for these game settings is to just get a better sense of what this evaluation function should be from some data. Okay. So, so the questions you might have right now is, well, how does V look like? Where does my data come from? Because if I, if you know where your data comes from and your, your V is, then all you need to do is to come up with a learning algorithm that takes your data and tries to figure out what your V is. So, so we're gonna talk about that at the first part of the lecture. Okay. And, and that kind of introduces to this, this, um, temporal difference learning which we're gonna discuss in a second. It's very similar to Q-learning. Uh, and then towards the end of the class, we will talk about simultaneous games and non-zero-sum games. Okay. All right. So, so let's start with this V function. I just said, well, this V function could be parameterized by a set of weights, a set of w's, and the simplest form of this V function is to just write it as a linear classifier as a linear function of a set of features, w's times Phi's. And these Phi's are the features that are hand-coded and someone writes them. And then- and then I just want to figure out what w is. So this is the simplest form. But in general, this, this V function doesn't need to be a linear classifier. It can actually be any supervised learning model that we have discussed in the first few lectures. It can be a neural network. It can be anything even more complicated than neural network that just does regression. 
So, so we can- basically, any model you could use in supervised learning could be placed here as, as, as this V function. So all I'm doing is I'm writing this V function as a function of state and a bunch of parameters. Those parameters in the case of linear classifiers are just w's and in the case of the neural network, there are w's and these v's in this case of what one layer neural network. Okay. Or multilayer, actually. Yeah, one way. All right. So let's look at an example. So let's think about an example and I'm going to focus on the linear classifier way of looking at this just for simplicity. So, um, okay, let's pick a game. So we're going to look at backgammon. So this is a very old game. Uh, it's a two-player game. The way it works is you have the red player and you have the white player, and each one of them have these pieces. And what they wanna do is they want to move all their pieces from one side of the board to the other side of the board. It's a game of chance. You can actually, like, roll two dice and based on the outcome of your dice, you move your pieces various, various amounts to, to various columns. Uh, there are a bunch of rules. So your goal is to get all your pieces off the board. But if you have only, like, one piece and your opponent, like, gets on top of you, they can push you to the bar and you have to, like, start again. Um, there are a bunch of rules about it. Read it, read about it on Wikipedia if you're interested. But you are going to look at a simplified version of it. So in this simplified version, I have Player O and player X, and I only have four columns. I have column 0, 1, 2, and 3. And in this case, I have four of each one of these players and, and the idea is, we want to come up with features that we would care about in this game of backgammon. So, so what are some features that you think might be useful? Remember the learning lecture. How did we come up with, like, feature templates? Yes. 
Currently, still bound with the [inaudible]. So maybe like the location of the X's and O's. The number of them. Yeah. Yeah. So one idea is you have all this knowledge about the board, so maybe we should, like, care about the location of the X's. Maybe we should care about like where the O's are, how many pieces are on the board, how many pieces are off the board. So similar type of way that we- we've come up with features in the first few lectures. We were basically, we would do the same thing. So a feature template- set of feature templates could look like this, like, number of X's or O's in column- whatever column being equal to some value or, uh, number of X's or O's on the bar. Maybe fraction of X's or O's that are removed, whose turn it is. So these are all like potential features that we could use. So for this particular board, here are what those features would look like. So if you look at number of O's in column 0 equal to 1, that's equal to 1. Remember we were using these indicator functions to be more general. So, so like here, again, we are using these indicator functions. You might ask number of O's on the bar that's equal to 1, fraction of O's that are removed. So I have four pieces. Two of them are already removed. So that's one-half. Number of X's in column 1 equal to 1, that's 1. Number of X's in column 3 equal to 3, that's 1. It's O's turn. So that's equal to 1. Okay. So, so we have a bunch of features. These features, kind of, describe what this board looks like or how good this board is. And what we wanna do is we wanna figure out what, what are the weights that we should put for each one of these features and how much we should care about, uh, each one of these features. So, so that is the goal of learning here. Okay. All right. So okay. So, so that was my model. All right. So far, I've talked about this V of s and w. I'm- I've defined it as a linear classifier- as a linear predictor. W's times features. And now, the question is where do I get data?
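These indicator feature templates can be sketched as a small extractor. The board representation below (a dict with `columns`, `bar`, `removed`, `turn`) is my own invention for illustration, not the lecture's:

```python
def extract_features(board):
    """Indicator feature templates for the simplified 4-column backgammon board.
    `board` is a hypothetical representation: columns maps column -> per-player
    piece counts, plus bar/removed counts and whose turn it is. Each player
    has 4 pieces total."""
    phi = {}
    for p in ['O', 'X']:
        # "number of p's in column c is n" indicator templates
        for col, counts in board['columns'].items():
            n = counts.get(p, 0)
            if n > 0:
                phi['num %s in column %d is %d' % (p, col, n)] = 1
        # "number of p's on the bar is n" indicator
        n_bar = board['bar'].get(p, 0)
        if n_bar > 0:
            phi['num %s on bar is %d' % (p, n_bar)] = 1
        # fraction of p's pieces removed from the board
        phi['fraction of %s removed' % p] = board['removed'].get(p, 0) / 4
    phi["it is %s's turn" % board['turn']] = 1
    return phi
```

On the board from the slide (O: one piece in column 0, one on the bar, two removed; X: one in column 1, three in column 3; O's turn), this reproduces the feature values read out in the lecture.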
Like where? And because if I'm doing learning, I got to get data from somewhere. So, so one idea that we can use here is we can try to generate data based on our current policy pi agent or pi opponent, which is based on our current estimate of what V is. Right. So currently, I might have some idea of what this V function is. It might be a very bad idea of what V is, but that's okay. I can just start with that and starting with, with that V function that I currently have, what I can do is I can, I can call arg max of V over successors of s and a to get a policy for my agent. Remember this was how we were getting policy in a mini-max setting. Policy for the opponent is just arg min of that V function and then when I call these policies, I get a bunch of actions. I get a sequence of, like, states based on, based on how we are following these policies, and that is some data that I can actually go over and try to make my V better and better. So, so that's kind of how we do it. We call these policies. We get a bunch of episodes. We go over them to make things better and better. So, so that's, kind of, the key idea. Um, one question you might have at this point is, um, is this deterministic or not, like, do I need to do something like Epsilon-Greedy. So in general, you would need to do something like Epsilon-Greedy. But in this particular case, you don't really need to do that because we have these dice that, that you're actually rolling. And by rolling the dice, you are getting different random paths that, that we might take- so that might take us to different states. So we, kind of, already have this, this element of randomness here that does some of the exploration for us. And you just mean like unexplored probability? Yes.
And in this particular case, because we have this randomness, we don't really need to do that. But in general, you might imagine having some sort of Epsilon-Greedy to make us explore a little bit more. Okay. So then we generate episodes and then from these episodes, we want to learn. Okay. These episodes look like state, action, reward, state, and then they keep going until we get a full episode. One thing to notice here is, is the reward is going to be 0 throughout the episode until the very end of- end of the game. Right. Until we end the episode and we might get some reward at that point or we might not. Uh, but, but the reward throughout is going to be equal to 0 because we are playing a game. Right. Like we are not getting any rewards at the beginning. And if you think about each one of these small pieces of experience; s, a, r, s prime, we can try to learn something from each one of these pieces of experience. Okay. So, so let me actually go on the board maybe. What you have here is you have a piece of experience. Let's call it s, a. You get some reward. Maybe it is 0. That's fine if it is 0. And you go to some s prime through that. So s, take an action, you get a reward. Maybe you get a reward. You go to some s prime from that and you have some prediction. Right. Your prediction is your current, like, your current, um, V function. So your prediction is going to be this V function at state s parameterized with w. And this is what you already, like, you, kind of, know right now. This, this is your current estimate of what V is. And this is your prediction. I'm writing the prediction as a function of w. Right. Because it depends on w. And then we had a target that you're trying to get to. And my target, which is kind- kind of acts as a label, is going to be equal to my reward, the reward that I'm getting.
So it's kind of, the reward- so if you look at this V of s and w, well, it's kind of close-ish to reward plus, I'm gonna write discount factor, Gamma V of s prime, w. All right. So, so my target, the thing that I'm trying to, like, get to, is the reward plus Gamma V of s prime, w, okay? So we're playing games, in games Gamma is usually 1. I'm gonna keep it here for now but I'm gonna drop it at some point, so you don't need to really worry about Gamma. And then one other thing to notice here is, I'm not writing target as a function of w because target acts kind of like my label, right? If I'm, if I'm trying to do regression here, target is my label, it's kind of the ground truth thing that I'm trying to get to. So I'm gonna treat my target as just like a value, I'm not writing it as a function of w, okay? All right. So, so what do we try to do usually, like when you are trying to do learning? We have prediction, we have a target, what do I do? Minimize the- your error. So what is error? So I can write my error as potentially a squared error. So I'm gonna write one-half of prediction of w, minus target squared, this is my squared error. I want to minimize that. So with respect to w, okay? How do I do that? I can take the gradient. What is the gradient equal to? This is simple, right? The one-half and the 2 cancel. Gradient is just this guy, prediction of w, minus target, times the gradient of this inner expression. The gradient of this inner expression with respect to w is the gradient of prediction with respect to w minus 0 because target is, I'm treating it as a number, okay? Let me move this up. So now I have the gradient. What algorithm should I use? I can use gradient descent. All right. So I'm going to update my w. How do we update it? I'm gonna move in the negative direction of my gradient using some learning rate Eta, uh, times my gradient. My gradient is prediction of w minus target times gradient of prediction of w with respect to w. All right.
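Collecting the board derivation in one place (prediction is a function of w; the target is treated as a constant):

```latex
\mathrm{pred}(\mathbf{w}) = V(s;\mathbf{w}), \qquad
\mathrm{target} = r + \gamma\, V(s';\mathbf{w}) \quad (\text{held fixed})

\mathrm{Loss}(\mathbf{w}) \;=\; \tfrac{1}{2}\,\bigl(\mathrm{pred}(\mathbf{w}) - \mathrm{target}\bigr)^2

\nabla_{\mathbf{w}}\,\mathrm{Loss}(\mathbf{w}) \;=\;
\bigl(\mathrm{pred}(\mathbf{w}) - \mathrm{target}\bigr)\,
\nabla_{\mathbf{w}}\,\mathrm{pred}(\mathbf{w})

\mathbf{w} \;\leftarrow\; \mathbf{w} \;-\; \eta\,
\bigl(\mathrm{pred}(\mathbf{w}) - \mathrm{target}\bigr)\,
\nabla_{\mathbf{w}}\,\mathrm{pred}(\mathbf{w})
```

In the linear case V(s; w) = w · φ(s), the gradient of the prediction is just φ(s), which gives the update written out on the next slide.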
So that's actually what's on this slide. So the objective function is prediction minus target squared. Gradient, we just took that, it's prediction minus target times gradient of prediction. And then the update is just this, this particular update where we move in the negative direction of the gradient. This is, this is what you guys have seen already, okay. All right. So so far so good. Um, so this is the TD learning algorithm. This is all it does. So temporal difference learning, what it does is it picks like these pieces of experience; s, a, r, s prime, and then based on that pieces of experience, it just updates w based on this gradient descent update, difference between prediction and target times the gradient of V, okay? So what, what happens if I have, if I have this, this linear function, maybe let me write- let me write this in the case that I have a linear, linear function. So what if my V of sw is just equal to w dot phi of s, yeah phi of s. So what happens to my update? Minus Eta. What is prediction? w dot phi of s, right? w dot phi of s. What is target? We defined up it there, it's the reward you're getting- the immediate reward you're getting plus Gamma times V of s prime, w, which is w dot phi of s prime times gradient of your prediction which is what, phi of s, okay? So I just, I just wrote up this indicates of a linear predictor. Yes. With Q learning, what are the differences between the two? Yeah, so this is very similar to Q learning. There are very minor differences that you'll talk about actually at the end of this section, comparing it to Q learning. All right. So, so I wanna go over an example, it's kind of like a tedious example but I think it helps going over that and kind of seeing why it works. Especially in the case that the reward is just equal to 0 like throughout an episode. So it kinda feels funny to use this algorithm and make it work but it works. So I want to just go over like one example of this. 
So I'm gonna show you one episode starting from S1 to some other state. And, and I have an episode I start from some state, I get some features of that state. Again, these features come from just evaluating those hand-coded features. And I'm just going to start, what w should I start with? 0, let me just initialize w to be equal to 0, okay, right? How do I update my w? Let me, let me just write it in, not a simpler form, but just another form. So w the way we're updating it is, the previous w minus Eta times prediction minus target, I'm gonna use p and t for prediction minus the target, times phi of s. Okay, this is the update you're doing, okay? Uh, yeah, that's right. Okay. So, so what is my prediction? What is my prediction? w dot phi of s? 0. What is my target? So for my target I need to know what state I'm ending up at. I'm gonna end up at 1, 0 in this episode and I'm gonna get a reward of 0. So what is my target? My target is reward, which is 0, plus w times phi of s prime, that is 0 because w is equal to 0. So my target is equal to 0. My p minus t is equal to 0. So p minus t is equal to 0, this whole thing is 0, w stays the same. So in the next kind of step, w is just 0, okay? I'm gonna move forward. Um, so what is prediction here? 0 times 0, prediction is 0. What is target? It's 0 because I haven't gotten any reward yet, where do I end at? I end up at 1, 2. So yeah, so target is going to be a reward, which is 0 plus 0 times, whatever phi of s prime that I'm at, so that's equal to 0. p minus t is equal to 0, it's kind of boring [LAUGHTER]. So at this point, w hasn't changed, w is equal to 0. What is my prediction? Prediction is equal to 0, that's great. What is target equal to? So I'm gonna end up in an end state where I get 1, 0 and I get a reward of 1. So this is the first time I'm getting a reward. What should my target be?
My target is reward 1 plus 0 times 1, 0 which is 0, so my target is 1. So what this tells me is, I'm predicting 0 but my target is 1, so I need to push my w's a little bit up to actually address the fact that this is, this is, this is equal to 1. So p minus t is equal to minus 1. So I need to do an update. Maybe I, I'll do that update here. So how am I updating it? So I'm doing, starting from 0, 0 minus, uh, my Eta is 0.5, that's what I defined it to be, my prediction minus target is minus 1. What is phi of s, phi of s is 1, 2, right? So what should my new w be? What is that equal to? 0.5 and then 1. All right, so I'm just doing arithmetic here. So my new w is going to become 0.5 and 1 at the end of this one episode. So I just did one episode, one full episode, where w was 0 throughout and then at the very end when I got a reward, then I updated my w because I realized that my prediction and target were not the same thing, okay? So now I'm gonna, I'm gonna start a new episode and the new episode I'm starting is going to start with this particular w, and in the new episode even though the rewards are going to be 0 throughout, we are actually going to update our w's. Yes, question? Uh, two questions. If you initialize the weights to not be zeros, would you update throughout instead of just at the end? Yeah. Okay and second, so S4 and S9 have the same feature vectors but you said S4 isn't S9 [OVERLAPPING]. Uh, this is a made up example, [LAUGHTER] so don't think about this example too much though. Well, is it that possible to have an end state and a non-end state have the same feature vector, or no? It, it is possible to have, yeah, the, the most of the states to have the same features, right. You could have, like I said up here. Depends on what sorts of features; you could, could use really not representative features.
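The update just computed on the board can be checked with a few lines (Eta = 0.5; the earlier steps of the episode leave w at zero because both prediction and target are zero; the last step has phi(s) = (1, 2), reward 1, and end-state features (1, 0)):

```python
import numpy as np

def td_update(w, phi_s, r, phi_sp, eta=0.5, gamma=1.0):
    """One TD update for a linear value function V(s; w) = w . phi(s).
    Pass phi_sp=None at an end state (where V is 0)."""
    pred = w @ phi_s
    target = r + (0.0 if phi_sp is None else gamma * (w @ phi_sp))
    return w - eta * (pred - target) * phi_s
```

Starting from w = (0, 0), an intermediate step with reward 0 leaves w unchanged, while the final step with reward 1 pushes w up to (0.5, 1), matching the arithmetic in the lecture.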
Like if you really want S4 and S9 to be differentiated, you should pick features that differentiate between them. But if they were kind of the same and have the same sort of characteristics, it's fine to have features that give the same value. Like if we have an entry that's always [inaudible], like instead of 1, 2 we have 1, 0, then the weight corresponding to that entry is going to- [OVERLAPPING] Yeah. It will never converge. And that kind of tells you that you don't care about that entry in your feature vector, or it's always staying the same. If it is always 0, it doesn't matter what the weight of that entry is. So in general, you wanna have features that are differentiating and that you're using in some way. So for the second episode, I'm not gonna write it all up because that takes time. [LAUGHTER] So, okay, let's start with a new episode. We start at S1 again, but now I'm starting with this new w that I have. So I can compute the prediction; the prediction is 1. I can compute my target; it's 0.5. And what we realize here is we overshot. Before, our prediction was 0 and the target was 1, so we were undershooting. We fixed our w's, but now we're overshooting. So we need to fix that. Yes? Uh, a little clarification on the relationship between the features and the weights. Do they always have to be the same dimension, and what should we be thinking about that would make a good feature for updating the weights specifically? So, okay, first off, yes, they need to always be the same dimension because you are doing this dot product between them. Um, as for feature selection, you don't necessarily think of it as "how am I updating the weights"; you think of feature selection as: is it representative of how good my board is.
Is it, for example in the case of Backgammon, representative of how good I am at navigating? So it should be a representation of how good your state is, and it's usually hand-designed, right. So you shouldn't think of it as "how is it helping my weights"; you should think of it as "how is it representing how good my state is." How about, thinking of the blackjack example: if you have a threshold of 21 and then you have a threshold of 10, and you're using the same feature extraction for both, how does that affect the generalizability of the model, the agent? Yeah, so you might choose two different features, and there is kind of a trade-off, right? You might get a feature that actually differentiates between different states very well, but that makes learning longer and makes it not as generalizable. On the other hand, you might get a feature that's pretty generalizable, but then it might not capture those specific differentiating factors that you would want. So picking features is an art, right. [LAUGHTER] All right. So let me move forward because we have a bunch of things coming up. Okay, so I'll go over this real quick then. So we have the w's, right. We now update the w based on this new value, and it's a similar thing: you have a prediction, you have a target, you're still overshooting, so you still need to update it. And then once you update it to 0.25 and 0.75, it kind of stays there, and you are happy. Okay. All right, so this was just an example of TD learning, but this is the update that you have kind of already seen, right? And a lot of you have pointed out that this is similar to Q-learning already, right?
This is actually pretty similar to the Q-learning update. It's very similar: we have these gradients, the same way that we have in Q-learning, and we are looking at the difference between prediction and target, the same way that we do in Q-learning. But there are some differences. The first difference is that Q-learning operates on the Q function, and a Q function is a function over states and actions. Here, we are operating on a value function, V, and V is only a function of state, right? And part of that is because in the setting of a game, you already know the rules of the game, so you kind of already know the actions; you don't need to worry about them the same way you do in Q-learning. The second difference is that Q-learning is an off-policy algorithm: the value is based on this estimate of the optimal policy, this Q opt. But TD learning is on-policy: the value is based on this exploration policy, a fixed pi, and sure, you're updating the pi, but you're going with whatever pi you have, running with that, and updating it as you go. Okay, so that's another difference. And then finally, in Q-learning, you don't need to know the MDP transitions, this transition function from s, a to s prime. But in the case of TD learning, you need to know the rules of the game, so you need to know how the successor function of s and a works. Okay. So those are some minor differences, but from the perspective of how the update works, it is pretty similar to Q-learning, okay? All right. So that was this idea of: I have this evaluation function, I wanna learn it from data, I'm going to generate data, and from that generated data I'm going to update my w's.
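The TD update worked through on the board can be sketched in a few lines of Python. This is a minimal sketch: the feature vectors, reward, and eta = 0.5 are the made-up numbers from the lecture's example, and `td_update` is just an illustrative helper name, not anything from the course code.

```python
import numpy as np

def td_update(w, phi_s, reward, phi_s_next, eta=0.5):
    """One TD step for a linear value function V(s) = w . phi(s):
    w <- w - eta * (prediction - target) * phi(s)."""
    prediction = w @ phi_s            # p = w . phi(s)
    target = reward + w @ phi_s_next  # t = r + w . phi(s')  (no discounting here)
    return w - eta * (prediction - target) * phi_s

# Final step of the first episode on the board:
# phi(s) = (1, 2), reward = 1, end state with phi(s') = (1, 0), w starts at 0.
w = np.zeros(2)
w = td_update(w, np.array([1.0, 2.0]), 1.0, np.array([1.0, 0.0]))
print(w)  # [0.5 1. ] -- matching the arithmetic on the board
```

Every step before the reward leaves w unchanged, because both prediction and target are 0 there; only the final, rewarded step moves the weights.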
So that's what we've been talking about so far. And the idea of using learning to play games is not a new idea, actually. In the '50s, Samuel looked at a checkers-playing program, where he was using ideas from self-play and ideas similar to the type of things we have talked about: using really smart features and linear evaluation functions to try to solve checkers. A bunch of other things he did included adding intermediate rewards, so on the way to the endpoint he added some intermediate rewards, and he used alpha-beta pruning and some search heuristics. And what he did in the '50s was kind of impressive: he ended up with a program that was reaching human amateur level of play, and he only used like 9K of memory, which is really impressive [LAUGHTER] if you think about it. So this idea of learning in games is old; people have been using it. In the case of Backgammon, it was around the '90s when Tesauro came up with an algorithm to solve the game of Backgammon. He specifically used this TD-lambda algorithm, which is similar to the TD learning that we have talked about; it has this lambda parameter that kind of tells us how much credit states get as they get far from the reward. He didn't have any intermediate rewards, he used really dumb features, but then he used neural networks, which was kind of cool. And he was able to reach human expert play, and this kind of gave us some insight into how to play games and how to solve these really difficult problems. And then more recently we have been looking at the game of Go.
So in 2016, we had AlphaGo, which was using a lot of expert knowledge in addition to ideas from Monte Carlo tree search, and then in 2017, we had AlphaGo Zero, which wasn't using expert knowledge at all; it was all based on self-play. It was using dumb features and neural networks, and basically the main idea was using Monte Carlo tree search to try to solve this really challenging, difficult problem. I think in section we're gonna talk a little bit about AlphaGo Zero too, so if you're attending section, that will be part of the story. All right, so the summary so far is: we have been talking about parameterizing these evaluation functions using features. And the idea of TD learning is to look at this error between our prediction and our target and try to minimize that error, finding better w's as we go. So, all right, that was learning and games. Now I wanna spend a little bit of time talking about other variations of games: the setting where we take our games from turn-based to simultaneous, and then the setting where we go from zero-sum to non-zero-sum, okay? All right. Okay, simultaneous games. So far we have talked about turn-based games like chess, where you play, then the next player plays, then you play, and so on. And the minimax strategy seemed to be pretty okay when it comes to solving these turn-based games. But not all games are turn-based, right? An example is rock-paper-scissors: everyone is playing simultaneously. The question is, how do we go about solving simultaneous games, okay? So let's start with a game that is a simplified version of rock-paper-scissors, called the two-finger Morra game. The way it works is, we have two players, player A and player B.
And each player is going to show either one finger or two fingers, and you're playing at the same time. The way it works is: if both players show 1 at the same time, then player B gives two dollars to player A. If both of you show 2 at the same time, player B gives player A four dollars. And if you show different numbers, 1 and 2 or 2 and 1, then player A has to give three dollars to player B. Okay? Does that make sense? So can you guys talk to your neighbors and play this game real quick? [BACKGROUND] All right, so what was the outcome? [LAUGHTER] How many of you are in the case where A chose 1 and B chose 1? Oh, yeah, one pair here. A chose 1, B chose 2? One pair there. A chose 2, B chose 1? Okay, two pairs. And then 2 and 2? Okay. All right. So you can kind of see a whole mix of strategies happening here. And this is a game that you're gonna play and talk about a bit, and think about what would be a good strategy to use when you are solving this simultaneous game. Okay. All right, so let's formalize this. We have player A and player B. We have these possible actions of showing 1 or 2. And then we're gonna use this payoff matrix, which represents A's utility if A chooses action a and B chooses action b. So before, we had this value function over our states, right? Now we have this value function, which I'll just write V here, that is again from the perspective of agent A. Remember, before, when we were thinking about the value function, we were looking at it from the perspective of the first player, the maximizer player, the agent. Now, I'm looking at all of these games from the perspective of player A. So I'm trying to get good things for A. Yes? In this case it's not at the end [inaudible]?
Uh, yeah. And this is a one-step game too, right? You're just playing once and then you see what you get; we're not talking about repeated games here. So we have this V, V of a and b, and this basically represents A's utility if agent A plays a and agent B plays b. Okay? You can represent this with a matrix, and that's why it's called a payoff matrix. I'm going to write that payoff matrix here: A here, B here. Agent A can show 1 or 2; agent B can show 1 or 2, right? If both of us show 1 at the same time, agent A gets $2. If both of us show 2 at the same time, agent A gets $4. Otherwise agent A has to pay, so agent A gets minus $3. And again, the reason I only talk about one side is that we are still in the setting of zero-sum games: whatever agent A gets, agent B gets the negative of that, right? So if agent A gets $4, agent B is paying $4. So I am just writing everything from the perspective of agent A, and this is called the payoff matrix, okay? All right. So now we need to talk about: what does a solution mean in this setting? What is a policy in this setting? The way we refer to policies in this case is as strategies. We have pure strategies, which are almost the same thing as deterministic policies: a pure strategy is just a single action that you decide to take. The difference between a pure strategy and a deterministic policy, if you remember, is that a deterministic policy is a function of state, right? It's a policy that, as a function of state, gives you an action. Here we have a one-move game, so it's just that one action, and we call it a pure strategy.
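The payoff matrix on the board can be written down directly. This is just a sketch; the indexing convention (row/column 0 for "show 1", 1 for "show 2") is mine, not something from the lecture.

```python
import numpy as np

# payoff[a, b] = agent A's utility when A shows a+1 fingers and B shows b+1.
# Zero-sum: agent B's utility is always -payoff[a, b].
payoff = np.array([[ 2.0, -3.0],   # A shows 1: B shows 1 -> +$2, B shows 2 -> -$3
                   [-3.0,  4.0]])  # A shows 2: B shows 1 -> -$3, B shows 2 -> +$4
print(payoff[0, 0], payoff[1, 1], payoff[0, 1])  # 2.0 4.0 -3.0
```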
[NOISE] We also have this other thing called a mixed strategy, which is equivalent to a stochastic policy. A mixed strategy is a probability distribution that tells you the probability of choosing each action a. So pure strategies are just single actions a, and mixed strategies are probability distributions over actions a, okay? All right. So here is an example. If you say, "I'm gonna always show you 1," then you can write that strategy as a pure strategy that says: with probability 1 show 1, and with probability 0 show 2. So let's say the first column is for showing 1, the second column is for showing 2. This is a pure strategy that says I'm always going to show you 1. If I tell you, "I'm always gonna show you 2," then I can write that strategy like this: with probability 1, I'm always showing you 2. I could also come up with a mixed strategy. A mixed strategy would be: I'm going to flip a coin, and if I get heads, I'm gonna show you 1, and if I get tails, I'm gonna show you 2. You can write that as one-half, one-half, and this is going to be a mixed strategy. Even though you're in a simultaneous game, you can just bring chance in and say: half the time I'm gonna show you 1, half the time I'm gonna show you 2, based on chance, okay? Everyone happy with mixed strategies and pure strategies? All right. So how do we evaluate the value of the game? Remember in the previous lecture, and even in the MDP lecture, we were talking about evaluating: if someone gives me a policy, how do I evaluate how good it is? The way we are evaluating that is again by this value function V, and we are gonna write this value function as a function of pi A and pi B. Maybe I'll just write that up here.
Or I'm gonna erase this, because this is repetitive. So I'm gonna say: the value of agent A following pi A and agent B following pi B, what is that equal to? Well, that is going to be the probability that pi A chooses action a, times the probability that pi B chooses action b, times the value of a and b, summed over all possible a's and b's. Okay. So let's look at an actual example. For this particular two-finger Morra game, let's say someone comes in and says: the policy of agent A is to always show 1, and the policy of agent B is this mixed strategy, half the time show 1, half the time show 2. The question is, what is the value under these two policies? How do we compute that? [NOISE] Well, I'm gonna use my payoff matrix, right? So it's 1 times 1 over 2 times the value we get at 1, 1, which is 2; plus 0 times 1 over 2 times 4; plus 1 times 1 over 2 times minus 3, the value at 1, 2; plus 0 times 1 over 2 times minus 3. Okay? And what is that equal to? There are two zeros here, so it's 1 minus 3 over 2, which is minus 1 over 2. Okay? So I just computed that the value under these two policies is going to be minus 1 over 2. And again, this is from the perspective of agent A, and it kind of makes sense, right? If agent A always shows 1 and agent B is following this mixed strategy, agent A is probably losing, and agent A is losing 1 over 2 on average under these strategies, okay? Okay. So in this game we only have one state, right? We take one action, and that's the end. If we had more than one state, would we have this for every single one? So that opens up a whole set of new questions that we're not discussing in this class: that introduces repeated games.
Ah, so you might be interested in looking at what happens in repeated games. In this class right now we're just talking about this one-step, one-play setting. We're playing a zero-sum game, say rock-paper-scissors, and you just play once. You might say, well, what happens if you play like ten times? Then you're building some relationship and interesting things can happen, and that introduces a whole new class of games that we're not talking about here. All right. So the value is equal to minus 1 over 2, okay? All right. So that was the game value; we just evaluated it, right? If someone tells me pi A and pi B, I can evaluate them; I can know how good pi A and pi B are from the perspective of agent A. Okay? So what do we wanna do when we try to solve games? From agent A's perspective, we wanna maximize this value: I want to get as much money as possible, and the value is from agent A's perspective, so agent A should be trying to maximize it, and agent B should be trying to minimize it. Right? Think minimax: agent B should be minimizing this, agent A should be maximizing this. That's what we wanna do. But the challenge here is that we are playing simultaneously, so we can't really use the minimax tree. If you remember the minimax tree, in that setting we have sequential play: you could wait for agent A to play and then play after that, and that gives you a lot of information. Here we're playing simultaneously. So what should we do? Okay, so what should we do? I'm going to assume we can play sequentially; that's what I wanna do for now. And I'm going to limit myself to pure strategies. So maybe I'll come over here. Right now I'm going to focus only on pure strategies; I'll just consider this very limited setting and see what happens.
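The game-value evaluation from a moment ago, V(pi_A, pi_B) = sum over a and b of pi_A(a) pi_B(b) V(a, b), is just a bilinear form, so the minus-one-half result can be checked in a couple of lines. A sketch, using the same made-up Morra payoffs:

```python
import numpy as np

V = np.array([[ 2.0, -3.0],
              [-3.0,  4.0]])  # two-finger Morra payoffs, from A's perspective

def game_value(pi_a, pi_b):
    """V(pi_A, pi_B) = sum_a sum_b pi_A(a) * pi_B(b) * V(a, b) = pi_A^T V pi_B."""
    return float(pi_a @ V @ pi_b)

pi_a = np.array([1.0, 0.0])    # pure strategy: A always shows 1
pi_b = np.array([0.5, 0.5])    # mixed strategy: B flips a coin
print(game_value(pi_a, pi_b))  # -0.5 -- agent A loses half a dollar on average
```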
And I'm going to assume: what if we were to play sequentially? What would happen, and how bad would it be? So we have the setting where player A goes first. What do you think: if player A goes first, is that better for player A or worse for player A? Worse. Worse for player A, okay. So that's probably what's gonna happen; let's try it. [LAUGHTER] Okay. So player A is trying to maximize this V, player B is trying to minimize, right? And each of them has the actions of showing 1 or showing 2. This is player A, this is agent B; they can each show 1 or 2, right? If we show 1, 1, player A gets what? $2, right? I can't see the board. Otherwise player A gets minus $3, and if we have 2, 2, player A gets $4. Right? Okay. So now if we have this sequential setting and we're playing minimax, player B is going second, so player B takes the minimizer here: in this case player B picks this one, and in that case player B picks that one. What should player A do? Well, in both cases player A is getting minus $3. It doesn't actually matter; player A could do either, and at the end of the day player A is going to get minus $3. Right? And this is the case where player A goes first. What if player A goes second? Okay? Then player B is going first, player B is minimizing, and then player A is maximizing, [NOISE] and we have the same values here. Okay? So this is player A going second; going second, player A tries to maximize, so A would pick these ones. Player B is here; player B wants to minimize, so player B is going to say: okay, if you're going second, I'd rather show you 1, because by showing you 1 I'm losing less; if I show you 2, I'm losing even more. All right. So in that setting, player A is going to get $2.
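The two sequential pure-strategy cases on the board reduce to a max-min and a min-max over the payoff matrix, which can be checked directly. A sketch, with the same payoffs as before:

```python
import numpy as np

V = np.array([[ 2.0, -3.0],
              [-3.0,  4.0]])  # Morra payoffs from A's perspective

# A reveals a pure action first, B best-responds by minimizing:
a_first = max(min(V[a, b] for b in range(2)) for a in range(2))
# B reveals first, A best-responds by maximizing:
b_first = min(max(V[a, b] for a in range(2)) for b in range(2))
print(a_first, b_first)  # -3.0 2.0 -- going second is better for A
```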
Okay? All right. So that was kind of intuitive: with pure strategies, it looks like going second should be better. So going second is no worse; it's the same or better. And that can be represented by this minimax relationship, right? In the second case, [NOISE] where player B goes first, agent A is maximizing second over its actions, so we get the min over b of the max over a of V of a and b. And this is going to be greater than or equal to the case where player A goes first, the max over a of the min over b of V of a and b. That makes sense. So I'm gonna just write the things you're learning on the side of the board, maybe up here. What did we just learn? We learned: if we have pure strategies, going second is better. That sounds intuitive and right. [NOISE] Okay. So far so good. So the question I wanna think about now is: what if we have mixed strategies? What's going to happen? Are we gonna get the same thing? If you have mixed strategies, is going second better, or worse, or the same? That's the question we're trying to answer. Okay? So let's say player A comes in, and player A says, "Well, I'm gonna reveal my strategy to you. What I'm gonna do is flip a coin, and depending on how it comes out, I'm either going to show you 1 or show you 2. That's what I'm telling you I'm gonna do." Okay. So what would be the value of the game under that setting? The value, maybe I'll write it here: the value of pi A and pi B, where pi A is already this mixed strategy of one-half, one-half, right? All right, so what is that going to be equal to? It's going to be pi B choosing 1, times one-half.
With probability one-half, agent A is also picking 1; if it's 1, 1, we're gonna get 2. Plus pi B choosing 1, times one-half of agent A choosing 2; there we're gonna get minus $3. Plus pi B choosing 2, times one-half of agent A choosing 2; we're gonna get $4. Plus pi B choosing 2, times one-half of agent A choosing 1, and that's minus $3. So I just iterated over all four options we can get here, under pi B choosing 1 or 2, and pi A is always just one-half, right, because A is following this mixed strategy. So what is this equal to? That's equal to minus 1 over 2 times pi B of 1, plus 1 over 2 times pi B of 2. Okay. So that's the value. Okay? So again, the setting is: agent A came in and told me, "I'm following this mixed strategy; this is gonna be the thing I'm gonna do." What should I do as agent B? What should I do as agent B? You always want to pick 1. Okay, so that was quick. So you always [LAUGHTER] have to show 1. But why is that? Well, if agent A comes and tells me, "This is the thing I wanna do," I should try to minimize the value agent A gets, right? So what I'm really trying to do as agent B is to minimize this, because I don't want agent A to get anything. So I'm trying to come up with a policy that minimizes this. Pi is a probability, so it's a positive number, and I have a positive term and a negative term here. The way to minimize this is to put as much weight as possible on the negative side and as little as possible on the positive side. So that tells me: never show 2 and always show 1. Does everyone see that? The best thing I can do as agent B is to follow a pure strategy that always shows 1 and never shows 2. Okay. So this was kind of interesting, right? Like if someone comes in and tells me, "This is the thing,
this is the mixed strategy I'm gonna follow," I have a solution in response to that, and that solution is actually always going to be a pure strategy. So that's kind of cool. All right. And this is what happens in the more general case. I'm gonna make a lot of generalizations in this lecture; I'll show you one example and generalize it, but if you're interested in the details, we can talk about it offline. So the setting is: for any fixed mixed strategy pi A that agent A reveals, what I should do as agent B is pick the pi B that minimizes that value, and that minimum can be attained by a pure strategy. So the second thing we've learned here is: if player A plays a mixed strategy, player B has an optimal pure strategy. And that's kind of interesting. [NOISE] Right. Okay. So in this case we still haven't decided what the policies should be, right? We've still been talking about the setting where agent A comes in and tells us what their policy is, and we know how to respond to it: the response is going to be a pure strategy. Okay? So now we want to figure out what this policy, this mixed strategy, should actually be. I wanna think of it more generally, so I wanna go back to those two diagrams and modify them in a way where we talk about it a little more generally. Maybe- yeah, I'll just modify these. Okay. So let's think about both of the settings. Let's say again that player A is deciding to go first. Player A is going to follow a mixed strategy; that's all we know, but we don't know which mixed strategy. This is player A; player A is maximizing.
Player A is following a mixed strategy. The way I'm writing that mixed strategy more generally is: player A is gonna show 1 with probability p and show 2 with probability 1 minus p, for some value p. Okay? And then after that it's player B's turn. We have just seen that the best thing player B can do is to play a pure strategy: player B is either 100% going to pick 1 or 100% going to pick 2. Yes? If those terms [inaudible] were the same, then player B following a mixed strategy would be just as good as any pure strategy, does that make sense? Which terms? Those blue terms on the board right there. Yeah, if those blue terms were the same, then player B can follow any kind of strategy, right? So the thing is that strategies are probabilities, values from 0 to 1, and you kind of always end up with this negative term that you're trying to put as much weight as possible on, and this positive term that you're trying to put as little weight as possible on. And that's intuitively why you end up with a pure strategy. By pure strategy, what I mean is you end up putting all your probability, like 1, on the negative term and nothing on the positive term, because you are trying to minimize this. That's kind of intuitively why you're getting this pure strategy. What about one-half and one-half? So you wouldn't get that. If you get one-half and one-half, that's a mixed strategy, not a pure strategy. And I'm saying you wouldn't get a mixed strategy, because you always end up in this setting where, to minimize this, you push all your probability onto the negative term, okay. All right. So let me go back to this.
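The point that B's best response to a revealed mixed strategy is pure can be seen numerically: A's expected payoff is linear in pi_B, so B's minimization always lands on a corner, i.e. a pure strategy. A sketch, using the one-half, one-half strategy from the board:

```python
import numpy as np

V = np.array([[ 2.0, -3.0],
              [-3.0,  4.0]])       # Morra payoffs from A's perspective

pi_a = np.array([0.5, 0.5])        # A's revealed mixed strategy
payoff_if_b = pi_a @ V             # A's expected payoff for each pure choice of B
best_b = int(np.argmin(payoff_if_b))  # B minimizes A's payoff
print(payoff_if_b, best_b)  # [-0.5  0.5] 0 -- pure "always show 1" is optimal for B
```

Any mixed pi_B would give a convex combination of -0.5 and 0.5, which can never be below -0.5, so the pure corner is (weakly) optimal.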
So we have the setting where player A goes first. Player A is following a mixed strategy with p and 1 minus p. Player B is going to follow a pure strategy, either 1 or 2; I don't know which one yet, right? So what's gonna happen is: if we have 1, 1, that gives us value 2, so it's 2 times p. I'm trying to write the value here: 2 times p, plus 1 minus p times minus 3, because with probability 1 minus p, A is gonna pick 2, and if B is picking 1 there, we get minus 3. Okay? And then for this side, with probability 1 minus p, A is going to show 2; if B also shows 2, then we get 4, so it's 4 times 1 minus p. And with probability p, A shows 1 while B shows 2, so that is minus 3p. Okay. All right. So what are these equal to? This one is equal to 5p minus 3, and that one is equal to minus 7p plus 4. Okay? So I'm talking about this more general case. In this more general case, player A comes in, player A is playing first and is following a mixed strategy, but doesn't know what p to choose; they're choosing some p and 1 minus p here. And then player B has to follow a pure strategy; that's what we decided. Under that case, we either get 5p minus 3 or minus 7p plus 4, okay? What should player B do here? This is player B, and this is a min node. Should player B pick 1 or 2? Player B should pick the one that minimizes between these two. All right? So player B is going to take the minimum of 5p minus 3 and minus 7p plus 4, okay? What should player A do? What should player A do? Think minimax, right? Player A is maximizing the value, so player A is going to maximize the value that comes up here. And also, I'm saying player A needs to decide what p they're picking.
So they're going to pick a p that maximizes that. Is this clear? [inaudible] Like these computations? Yeah, so these are the four different entries in my payoff matrix. I'm saying: with probability p, A is going to show 1, and down this route B is also choosing 1, so if both of us show 1, then A gets $2. That's where the $2 comes from, times probability p. With probability 1 minus p, A shows 2 while B shows 1; that's minus $3, times probability 1 minus p. So for this particular branch, I know the payoff is going to be 5p minus 3. Does that make sense? And then for this side, again, with probability 1 minus p, A shows 2; if both of them show 2, A gets $4, so that's 4 times 1 minus p. With probability p, A shows 1 while B shows 2, so A loses $3; that's minus 3 times probability p. Altogether that's minus 7p plus 4. Okay. And then the second player, what they're gonna do is minimize between these two values and pick 1 or 2. They're deciding, "Should I pick 1 or should I pick 2?", and the way they decide is by picking whichever of 1 or 2 minimizes these two values. But I'm writing it using this variable p that's not decided yet, and this variable p is the thing player A needs to decide. So what p should player A decide? Player A should pick the p that maximizes this. So I'm writing, literally, a minimax relationship here. Okay? All right, so the interesting thing here is: 5p minus 3 is some line, right, with positive slope. And minus 7p plus 4 is another line, with negative slope. What is the minimum of these two? Where is the minimum of this happening?
Minimum of these two lines? Where they meet each other, right? This is going to be the minimum of the two. Okay? So the p that I'm going to pick is going to be the value of p where these two lines are equal to each other, and that turns out to be at- what is this value? Yeah, it's going to happen at p equals 7 over 12. And the value there is minus 1 over 12. Right? So okay, so let's recap. What did I do? So I'm talking about the simultaneous game, but I'm relaxing it and making it sequential. I'm saying A is going to play first, B is playing second. The thing that's going to happen is A is playing first, A is deciding to choose a mixed strategy. So A is deciding to say maybe one half, one half, but maybe he doesn't wanna say one half, one half, he wants to come up with some other probabilities. So the thing A is deciding is, "Should I pick 1 with probability p and should I pick 2 with probability 1 minus p, and what should that p be?" So what is the probability I should be picking 1? That's what A is trying to decide here. Okay? So whatever A decides with p and 1 minus p ends up in two different results, and based on them, B is trying to minimize that. When B is trying to minimize that, B is minimizing between these two linear functions. These two linear functions meet at one point; that is the point where this thing is going to be minimized, and that actually corresponds to the p-value when A tries to maximize this. This requires a little bit of thinking, but any clarification questions? I see a lot of lost faces, so- [LAUGHTER] By having, um, [inaudible]. Yeah, the interesting point is exactly right. So A is still, by the way, losing.
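The crossing point at p = 7/12 with value minus 1/12 can also be found numerically, without solving for the intersection by hand. A rough grid-search sketch (step size and names are my choices), maximizing the worst case min(5p - 3, -7p + 4):

```python
def worst_case_value(p):
    # B responds with whichever pure strategy minimizes A's payoff
    return min(5 * p - 3, -7 * p + 4)

# Player A picks the p that maximizes the worst case (the maximin p).
best_p = max((i / 10000 for i in range(10001)), key=worst_case_value)
print(best_p)                    # close to 7/12 ~ 0.5833
print(worst_case_value(best_p))  # close to -1/12 ~ -0.0833
```

The grid answer matches the board: the best mixed strategy shows 1 with probability about 7/12, and A still loses about 1/12 per game in expectation.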
So even in this case, where A is trying to come up with the best mixed strategy he could do, the best mixed strategy A is doing is show, show a 1 with probability 7 over 12 and show 2 with probability 5 over 12. This comes from here. Even under that scenario, A is losing. A is losing, minus 1 over 12. Okay? All right. Okay. So also, I haven't solved a simultaneous game yet, right? Like I have talked about the setting where A plays first. So what if B plays first? So I'm going to swap this. What if B plays first? So A goes second, B plays first. I'm gonna modify this one now. Okay, B goes first, A is going second. B is gone to start- is going to reveal the strategy- his strategy. The strategy that B is going to reveal, is also again, I'm gonna with probability p show you 1, with probability 1 minus p, show you, show you, uh, 2. Then A plays, A is trying to maximize. And A has to play a pure strategy because of that, right? Like the best thing A can do, is going to be a pure strategy. So A is always going to be either showing 1 or 2 and A is deciding which one, but doesn't know yet. And the values here are going to be exactly the same thing as there. So they're 5, 5, 5, 5p minus 3, minus 7p plus 4. Okay? All right. So what's happening here? So, so in this case, A is playing second. What A likes to do is A likes to maximize between 5p minus 3 and minus 7p plus 4. That's what A likes to do. B is going second, uh, sorry, B is going first, so then B has to minimize that and pick a p that minimizes that. Okay? So these two are exactly the same two lines but now I'm picking the maximum of them. The maximum of these two lines end up being exactly the same point as before, ends up being exactly the same p as before and giving you exactly the same value as before. So, so this is also equal to minus 1 over 12. So what this is telling me is, if you are playing a mixed strategy, even if you reveal your best mixed strategy at the beginning, it doesn't matter. 
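That claim, that revealing the optimal mixed strategy first changes nothing, can be checked directly for this example by computing both orders of play over a grid of p values. A sketch (grid resolution is my choice):

```python
def a_first(p):
    # A commits to p; B then picks the pure strategy minimizing A's payoff
    return min(5 * p - 3, -7 * p + 4)

def b_first(p):
    # B commits to p; A then picks the pure strategy maximizing its payoff
    return max(5 * p - 3, -7 * p + 4)

grid = [i / 10000 for i in range(10001)]
v_a_first = max(a_first(p) for p in grid)  # A reveals first: max of the min
v_b_first = min(b_first(p) for p in grid)  # B reveals first: min of the max
print(v_a_first, v_b_first)  # both close to -1/12
```

The two values agree (up to the grid resolution), which is exactly the statement about order of play for this game.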
It actually doesn't matter if you're going first or second. So like in the Morra game, if you were playing a mixed strategy and you would tell your opponent, "This is the thing I'm gonna do, and this is my mixed strategy," and if it was the optimal thing, it didn't matter whether they know it or not; you still get the same value. So again, you get 5p minus 3 and minus 7p plus 4. And now you're taking the maximum of these two lines. The maximum of these two lines ends up being at the same point, you pick the p that minimizes that maximum, and you get the same value. So this is called von Neumann's theorem. This whole thing that we just did over one example, there is a theorem about it that says: for every simultaneous two-player zero-sum game with a finite number of actions, the order of players doesn't matter. So whether B is playing second or B is playing first, the values are going to be the same thing. Whether you take the minimum of the maximum or the maximum of the minimum, it's going to be the same thing. Okay? So this is kind of the third thing that we just learned, which is von Neumann's theorem. I'm writing a simpler, shorter version of it: if playing a mixed strategy, order of play doesn't matter. And remember, if you play a mixed strategy, your opponent is going to play a pure strategy, because this is the first point that we had before. If you play a mixed strategy, your opponent is going to follow a pure strategy, either 1 or 2 with probability 1. [NOISE] But with probability p, like, if we're doing like ordering, like one of the two answers might- will come out, [inaudible] it'll be either one or two and then in that case, the second [inaudible]. So in this case, yeah. So the thing is, these two end up being equal.
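The equality of the two branches at the optimal p can be verified with exact arithmetic; a tiny sketch using Python's `fractions` to avoid floating-point noise:

```python
from fractions import Fraction

p = Fraction(7, 12)
branch1 = 5 * p - 3    # B plays pure strategy 1
branch2 = -7 * p + 4   # B plays pure strategy 2
print(branch1, branch2)  # -1/12 -1/12: at p = 7/12 both branches are equal
```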
So the way to- it doesn't matter, because the way for you to maximize this is going to be the point where the two end up being equal. So the two branches- if you actually plug in p equal to 7 over 12 here, these two values end up being equal, right? [inaudible]. [OVERLAPPING] Uh, none [inaudible] actually equal, and the reason that they end up being equal is you are trying to minimize the thing that this guy is trying to maximize. So you are trying to pick the p that actually makes these things equal. So no matter what your opponent does, you're gonna get the best thing that you can do. So yeah, think of it like this. Okay. So I'm Player A, and I still have a choice. My choice is to pick a p. I want to pick a p such that I'm not gonna lose as much. What p should I pick? I should pick a p that makes these choices the same. Because if I pick a p that makes this one higher than this one, of course the second player is going to make me lose and then go down a route that's better for the second player. So the best thing that I can do here is make these two as equal as possible. So then whatever the second player chooses, 1 or 2, it's gonna be the same thing- does that make sense? So in expectation, you multiply by p and 1 minus p, as you were saying, like if the [inaudible]. [OVERLAPPING] So in expectation- you're saying when you are choosing p? Yes, so I'm treating p as a variable that I'm deciding, right? Like p is the thing I gotta be deciding. So I'm Player A, I gotta be deciding a p that's not gonna be too bad for me. Let's say I would pick a p that doesn't make these things equal. Let's say I would pick a p that makes this one, I don't know, 10, and makes this one 5. The second player is of course going to make me lose and is going to pick the thing that's going to be the worst for me.
So the best thing I can do is I can make both of them, I don't know, 7. So it's not gonna be as bad. So, so that's kind of the idea. All right. So let me move forward because there's still a bunch things happening. All right. So, so okay. So the kind of key idea here is revealing your optimal mixed strategy does not hurt you which is kind of a cool idea. The proof of that is interesting. If you're interested in looking at the notes, you can use linear programming here. The reason, kind of the intuition behind it is, is if you're playing mixed strategy, the next person has to play pure strategy and you have n possible options for that pure strategy. So that creates n constraints that you are putting in for your optimization. You end up with a single optimization with n constraints, and, and, and you can use like linear programming duality to actually solve it. So, so you could compute this using linear programming and that's kind of the one that's here. So, so let's summarize what we have talked about so far. So, so we have talked about these simultaneous games, er, and, and we've talked about the setting where we have pure strategies, and we saw that if you have pure strategies, going second is better. Right. Going second is better if you are just telling you what's the pure strategy you're using, right? So that was kind of the first point up there. And then if you're using mixed strategies, it turns out it doesn't matter if you're going first or second. You're telling them what your mixed- best mixed strategy is and they're going to respond based on that. So that's the von Neumann's minimax theorem. Okay? All right. So next 10 minutes, I want to spend a little bit of time talking about non-zero-sum games. So so far we have talked about zero-sum games, uh, where it's either minimax, I get some reward. You get the negative of that or vice versa. There are also these other things called collaborative games where we are just both maximizing something. 
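The linear programming formulation the lecture just pointed to can be sketched concretely for the Morra game (assuming SciPy's `linprog` is available; variable names are mine): maximize a guaranteed value v subject to the expected payoff against each of B's pure responses being at least v.

```python
from scipy.optimize import linprog

# Row player's (A's) payoff matrix for two-finger Morra.
M = [[2, -3],
     [-3, 4]]

# Variables: [p1, p2, v]. Maximize v  <=>  minimize -v.
c = [0, 0, -1]
# One constraint per pure response j by B:
#   v - sum_i p_i * M[i][j] <= 0
A_ub = [[-M[0][j], -M[1][j], 1] for j in range(2)]
b_ub = [0, 0]
A_eq = [[1, 1, 0]]   # probabilities sum to 1
b_eq = [1]
bounds = [(0, 1), (0, 1), (None, None)]  # v is unbounded

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
p1, p2, v = res.x
print(p1, p2, v)  # close to 7/12, 5/12, -1/12
```

Each pure response of the opponent contributes one constraint, which is the "n constraints" intuition from the lecture; the dual of this program is the opponent's problem.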
So, so we both get like money out of it, and, and that's kinda like a single optimization. It's a single maximization and you can think of it as plain search. In real life, you're kind of somewhere in between that, and, and I want to motivate that by an example. So, uh, I want to do that b- by this idea of Prisoner's dilemma. How many of you have heard of Prisoner's dilemma? Okay. Good. Okay. So the idea of Prisoner's dilemma is you have a prosecutor who asks A and B individually if they will testify against each other or not, okay? If both of them testify, then both of them are sentenced to five years in jail. If both of them refuse, then both of them are sentenced to one year in jail. If one testifies, then he or she gets out for free and, and then the other one gets 10 years sentence. Play with your partner real quick. [NOISE] All right. [LAUGHTER] Okay. Okay, so let's look at the pay off matrix. So I think you kind of have an idea of how the game works. Is that A or B? So, uh, so you have two players A or B. Each one of you have an option. You can either testify or you can refuse to testify. So you can- B can testify and A can refuse to testify, and I am going to create this payoff matrix. This payoff matrix is going to have two entries now in each one of these, these cells. And, and why is that? Because we have a non-zero-sum game. Before, our payoff matrix only had one entry. Because this was for player A, player B would just get negative of that. But now player A and B are getting different values. So if both of us testify, then both of us get five years jail, right? So A gets five years of jail, B gets five years. Right? If both of us refuse, A gets one year of jail, B gets one year of jail. One year, one year of jail. And then if it is a setting where one of us testifies, the other one refuses, one of us gets 0, the other one gets 10 years jail. So if I refuse to testify, then I get 10 years jail right away and then B gets 0. 
And then in this case, A gets 0 and B gets 10. Okay? So now we are gonna have a payoff matrix for every player. So now we have this value function, which is a function of the player: for policy A and policy B, it will be the utility for one particular player, because you might be looking at it from the perspective of different players. Okay? So von Neumann's minimax theorem doesn't really apply here, because we don't have a zero-sum game. But you do actually get something a little bit weaker, and that's the idea of Nash equilibrium. So a Nash equilibrium is a set of policies Pi star A and Pi star B such that no player has an incentive to change their strategy. So what does that mean? What that means is, if you look at the value function from the perspective of player A, the value for player A at the Nash equilibrium, at Pi star A and Pi star B, is going to be greater than or equal to the value of any other policy Pi A if you fix Pi B. Okay, and at the same time, the same thing is true for the value of B. So for agent B, the value for B at the Nash equilibrium is gonna be greater than or equal to the value for B at any other Pi B if Pi A is fixed. Okay? So what does that mean in this setting? Do we have a Nash equilibrium here? So let's say I start from here. I start from A equal to minus 10, B equal to 0. Can I make this better, or did I flip them? [NOISE] Okay. Flipped, right? It should be 0, minus 10, er, minus 10, 0. Okay. So let's say I start from here. Can I make this better? I start from this cell: A gets 0 years of jail. That's pretty good. B gets 10 years of jail. That's not that great. So B has an incentive to change that, right? B has an incentive to actually move in this direction, to get 5 years jail instead of 10 years. Similar thing here. What if we start here? A has 1 year of jail, B has 1 year of jail.
A has an incentive to change this now and get 0 years jail. B has an incentive to change this and get 0 years jail. And we end up with this cell, where we don't have any incentive to change our strategy. So we have one Nash equilibrium here, and that one Nash equilibrium is: both of us are testifying and both of us are getting 5 years jail. It's kind of interesting, because there is a socially better choice here, right? If both of us would refuse, we would each get 1 year jail, but that's not gonna be a Nash equilibrium. Okay? All right. So there's a theorem, which is Nash's existence theorem, which basically says: if you have any finite-player game with a finite number of actions, then there exists at least one Nash equilibrium. And this is usually a mixed strategy Nash equilibrium- at least one mixed strategy Nash equilibrium. In this case, it's actually a pure strategy Nash equilibrium. Uh, but in general, there is at least one Nash equilibrium if you have a game of this form. Okay? All right. So let's look at a few other examples. Two-finger Morra. What would be the Nash equilibrium for that? We just actually solved that using von Neumann's minimax theorem, right? It would be playing a mixed strategy of 7 over 12 and 5 over 12. Now, you might modify your two-finger Morra game and make it collaborative. In a collaborative setting, what that means is we both get $2, or we both get $4, or we both lose $3. So a collaborative two-finger Morra game is not a zero-sum game anymore, and you have two Nash equilibria. You would have a setting where A and B both play 1 and the value is 2, or A and B both play 2 and the value is 4. Okay? And then Prisoner's dilemma is the case where both of them testify. We just saw that on the board. All right. Okay.
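The cell-by-cell argument on the board can be written as a brute-force check over pure strategy profiles. A sketch (payoffs written as negative years in jail; the encoding and names are mine):

```python
# Payoffs for the Prisoner's dilemma, as negative years in jail.
# Index 0 = testify, 1 = refuse; rows = A's action, columns = B's action.
payoff_A = [[-5, 0],
            [-10, -1]]
payoff_B = [[-5, -10],
            [0, -1]]

def is_pure_nash(a, b):
    # Neither player can do strictly better by deviating unilaterally.
    a_ok = all(payoff_A[a][b] >= payoff_A[a2][b] for a2 in (0, 1))
    b_ok = all(payoff_B[a][b] >= payoff_B[a][b2] for b2 in (0, 1))
    return a_ok and b_ok

equilibria = [(a, b) for a in (0, 1) for b in (0, 1) if is_pure_nash(a, b)]
print(equilibria)  # [(0, 0)]: both testify, 5 years each
```

Swapping in the collaborative Morra payoffs (both players get the same matrix) would make this checker report the two equilibria mentioned above, (1, 1) and (2, 2).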
So the summary so far is, we have talked about simultaneous zero-sum games. We talked about von Neumann's minimax theorem, where we can have multiple minimax strategies but a single game value, right? We had a single game value because it was zero-sum. But in the case of non-zero-sum games, we have something that's slightly weaker: Nash's existence theorem. We could have multiple Nash equilibria, and we also have multiple game values, depending on whose perspective you are looking at. So this was just a brief introduction to game theory and econ. There's a huge literature around different types of games in game theory and economics. If you're interested in that, take classes. And yeah, there are other types of games, like security games and resource allocation games, that have some characteristics that are similar to things we've talked about. If you're interested in any of them, maybe you can take a look; it would be useful for projects. And with that, I'll see you guys next time.
Okay. [NOISE] Uh, welcome back everyone. This is the second lecture on machine learning. Um, so just before we get started, a couple of announcements. Um, homework 1 foundations is due tomorrow at 11:00 PM. Note that it's 11:00 PM, not 11:59. Um, and please I would recommend everyone try to do a test submission early, right. Um, it would be unfortunate if, uh, you wait until 10:59 and you realize that your computer, uh, you can't login to the website. Um, if that happens, please don't just bombard me or- or with emails. Just- just wondered- so there is- you can- you can resubmit as much as you want before the deadline? So there's no penalty to just submitting something and checking to make sure it works. Yeah. So just to remind you, you're responsible for any technical issues you encounter, so please do the test submission early. So you have peace of mind, and then you can go back to finishing your, um, your homework. Okay? Uh, homework 2, sentiment is out. This is the homework on machine learning, um, and it will be due next Tuesday. Um, and finally, there's a section this Thursday which will talk about, uh, back propagation and nearest neighbors, and maybe a overview of scikit-learn which might be useful for your projects. So please, uh, come to that. Okay. So let's jump in. I'm gonna spend a few minutes reviewing what we did last time. It's kind of starting at the very abstract level and drilling down into the details. So ab- abstract level, learning is about taking a data-set and outputting a predictor F, which will be able to take inputs x, for example an image, and output a label or output y, for example whether it's a cat or a truck or so on.
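That abstraction, a learner that maps a training set to a predictor, fits in a few lines of code. A toy sketch (the majority-label "learner" and the data are made up for illustration, not the course's algorithm):

```python
def learner(training_data):
    """Toy learner: returns a predictor that always outputs
    the majority label seen in the training set."""
    labels = [y for _, y in training_data]
    majority = max(set(labels), key=labels.count)
    def predictor(x):
        return majority
    return predictor

train = [("img1", "cat"), ("img2", "cat"), ("img3", "truck")]
f = learner(train)
print(f("img4"))  # 'cat'
```

The point is only the shape of the pipeline: `learner` consumes data once and hands back a function `f` that can then be applied to new inputs.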
And if you unpack the learner, we talked about how we want to frame it as an optimization problem, which captures what we want to optimize- what properties a predictor should satisfy. And the other part is the optimization algorithm, which is how we accomplish our objective. So the optimization problem that we talked about last time was minimizing the training loss. Um, and in symbols, the training loss, which depends on a particular weight vector, is the average over all examples in the training set of the loss on that particular example with respect to the weight vector w. Okay. And we want to find the w that minimizes the training loss. So we want to find the single w that makes sure that, on average, all the examples have low loss. Okay. So looking at the loss functions, um, now, this is where it depends on what we're trying to do. If we're doing regression, then the pertinent thing to look at is the residual, which, remember, is the model's prediction minus the true label. So this is kind of how much we overshoot. And the loss is going to be zero if the residual is zero, and increases either quadratically for the square loss, or linearly for the absolute deviation, depending on how much we want to penalize large deviations. Um, for classification, or binary classification more specifically, the pertinent quantity to look at is the margin, which is the score times the label y, which, remember, is plus 1 or minus 1. So the margin is a single number that captures how correct we are. So a large margin is good. In that case we obtain either a 0 or a near-0 loss. And margin less than 0 means that we're making a mistake. So the zero-one loss captures whether we're making a mistake: a loss of 1. Um, but the hinge loss and logistic loss kind of grow linearly, because that allows us to optimize the function better. Question? So I have a question about residuals. Yeah.
Like, I know that I see the regression curve, the squared loss curve there with the residual- what would a residual look like on a graph? Would it be just a point away from the regression curve? Or what would the residual look like on this graph, if you were to put it? Um, so there are multiple graphs here. So remember last time we looked at the residual. If you look at Phi of x versus y, so here's the line. Um, here is a particular point, Phi of x, y. And the residual is basically the difference between the model's prediction and the actual point here. This graph is different. This graph is visualizing things in a different space, right? I'll show you another graph that might make some of these things a bit clearer in a second. So the residuals won't look exactly like that on this loss-versus-residual graph? Correct? Um, well okay, I guess one way to think about the residual is, the residual is a number. So if your residual is 2, then you're kind of here, and this is the loss that you pay, which is 2 in this case. Oh. And if the residual is minus 2, then you also pay 2. So the residual is the x-axis? Okay. Yes. The residual is the x-axis here. Oh, okay. Okay. And the margin is the x-axis over here. All right. Yeah. Okay. Any other questions about this? When would you use the absolute value? [BACKGROUND] Um, yeah. The question is, when would you use the absolute value versus the square loss? Um, there is a slide from the previous lecture, which I skipped over, that talks about when you would want each. Um, most of the time people tend to use the square loss because it's easier to optimize, but you also see the absolute deviation. Um, the square loss will penalize large outliers a lot more, which means that it has kind of mean-like qualities.
Whereas the absolute deviation, um, penalizes less, so it's more like a median, uh, just for kind of intuition. Um, but the general point is that all of these loss functions capture properties of a desired predictor. They basically say, hand me a predictor, and I'll try to assess for you how good this is, right. This is kind of establishing what we want out of it. And, um, you know, also another comment is that, you know, I'm presenting this loss minimization framework because it is so general. Anything basically that you see, um, in machine learning can be viewed as some sort of, you know, loss minimization. If you think about PCA or deep neural networks, um, different, um, types of auto-encoders, they can all be viewed as some sort of a loss function, um, which you're trying to minimize. So, um, tha- that's why, uh, I'm kind of keeping this ge-framework somewhat general. Okay. So let's, uh, go to the opposite direction of generality. Let's look at a particular example, and try to put all the pieces together. Um, so suppose we have a simple regression problem. We have three training examples: 1, 0, the output is 2, 1, 0, the output is 4, and 0, 1 the output is minus 1. Right. Um, so, um, how do we visualize what learning on this, uh, training set looks like? Um, so let's try to form the training loss. The training loss, remember, is the average over the losses on the indivi- individual examples. So let's look at the losses on individual examples. Um, so we're doing linear regression, so and x is two-dimensional, and Phi of x equals x. So, uh, in this example, so, um, we're basically trying to fit two numbers, w_1 and w_2. Um, so if you plug in these values for x and y into this loss function, then you get the following quantities. So the dot product between w and x is just w_1, right. Um, because x- x2 is 0. And you minus 2 and you square it because we're looking at the square loss. Um, the same thing for, uh, this point instead of 2 you have a 4. 
Um, and then for this point, um, uh, w.Phi of x minus y is w_2 now because, um, now the- the x2 is, uh, active, uh, minus minus 1 squared. Okay. So these are the individual loss functions. Each of which tells what I kind of want out of w. So if here I'm looking at this, if w_1 is 2, then that's great, I get a loss of 0. This one says if w_1 is 4, then that's great, and I get a loss of 0. And obviously you can't have both. And the goal of the training loss is trying to look at the average, so that you can pick one w that works for, as kind of on average, is good for all the points. Okay? So now, this is a function in two dimensions. It depends on uh, w_1 and w_2. So let me try to draw this on the board to give you some more intuition what this, uh, looks like. Okay. So I'm gonna draw a w_1, uh, w_2. And so the first, uh, function is, uh, w_1 minus 2. Okay. So, um, so what does this function want to do? It wants w_1 to be close to or, uh, close to 2, and it doesn't care about w_2. Right? So, um, I'm not really sure how to draw this function, but it it really requires something in 3-D. So you can think about a ball-shape kind of coming out of the board, uh, like this, if this direction is meant to be the- the loss. Okay. So I'm gonna try to do, uh, um, well let's- let's try it this way. So it's going to be like I have kind of a bunch of, um, problems that look like this coming out of the board. Okay? Uh- Um, okay. So what about the second one? The second one is, uh, w1 minus 4 squared. So that's going to be basically the same thing [NOISE], but kind of centered, uh, around 4. So around this axis. Okay. So again, there is gonna be some parabolas coming out of the board. Um, and then finally, the other point is, uh, w2 minus, minus 1. So it's going to be, um, happiest when, um, um, w2 is minus 1. Um, so it's going to be kind of a bunch of, uh, parabolas coming out of the board here, okay? So you add all three functions up, and what do you get? 
You get something that is- first of all, where do you think the minimum should be? One of the two intersections of the [NOISE] on the- One of the two intersections. Yeah. Like the first vertical and horizontal or the second vertical and horizontal. Oh, the red lines, I mean. Oh, yeah. There's gonna be some sort of intersection here. So if you look at the w2 axis, it should definitely be minus 1, because this is the only function that cares about w2. So it's gonna be somewhere here, and by symmetry- well, this one wants it to be a 2, this one wants it to be a 4, so the average is somewhere in between. You can work all of this out mathematically; I'm just giving the rough intuition. Uh, and now let me draw the level curves here. The level curves are going to be something like this where, again, if you draw it in 3D, it's like a parabola coming out of the board here, where here's the lowest point. Um, and as you venture away from this point, your loss is going to increase. [NOISE] Right? Okay. Yeah. Can you explain that bit again, that middle point? Uh, how do I get this middle point? Um, [NOISE] so one way is, if you add these two functions up and just plot it, the minimum turns out to be at 3. Intuitively, the square loss, when you average, acts kind of like a mean, so it's gonna be somewhere in between. It's also related to one of the homework problems, so hopefully you'll have a better appreciation for that. Um, okay. So- yeah, question. Once we have the 3, how do we merge it with the negative 1 as well? Do we need to do another addition? Um, so the question is, once we have the 3, how do you merge it with the minus 1? Um, so the 3 is regarding w1 and the minus 1 is regarding w2. So you just add them together.
They kind of don't- in this particular example, they don't interact. In general, they will. I'm still a little lost on this example- could you quickly summarize exactly what's going on? Yeah. So this plot shows, for every possible weight vector w1, w2, you have a point, and the amount that the function comes out of the board is the loss, right? And the loss function is defined on- in the slides, right there. And all I'm doing is trying to plot this loss function. Okay. So at each w1, w2 point, the loss is coming out of the board, and you're plotting that. Yes? No? Um, so unfortunately, it's hard to draw it in 3D here. So- Okay. What I'm trying to do here is taking each of the pieces and trying to explain what each piece is trying to do. All right. Yeah. Okay. So, um, in general, for the training loss, you don't have to think about exactly how it composes the individual losses. Um, this is probably as complex an example as we'll try to understand. Um, but this kind of gives you an idea of how you connect these pictures, where you see these parabolas, with the picture of the actual training loss. Okay. But for now, let's assume you have the training loss. It's a function of the parameters. And how do you optimize this function? You do some sort of gradient descent. So last time we talked about how you can do vanilla gradient descent, where you initialize with 0, and then you compute the gradient of the entire training loss, and then you update once. And the problem with that is that computing the gradient requires going through all the training examples, and if you have a million training examples, that's really slow. So instead we looked at stochastic gradient descent, which allows you to pick an individual example and then make a gradient step right away, right?
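Both updates can be run on the three-example dataset from the board; a minimal sketch (step sizes and iteration counts are my choices, not the lecture's). Batch gradient descent averages the gradient over all examples before each step, while stochastic gradient descent updates after every single example; both should land near the minimum w = (3, -1) from the level-curve picture.

```python
import random

# phi(x), y pairs from the regression example on the board.
data = [((1, 0), 2), ((1, 0), 4), ((0, 1), -1)]

def grad(w, x, y):
    # Gradient of (w . phi(x) - y)^2 with respect to w.
    residual = w[0] * x[0] + w[1] * x[1] - y
    return (2 * residual * x[0], 2 * residual * x[1])

# Batch gradient descent: one step per full pass over the data.
w = (0.0, 0.0)
for _ in range(1000):
    g = [0.0, 0.0]
    for x, y in data:
        gx = grad(w, x, y)
        g[0] += gx[0] / len(data)
        g[1] += gx[1] / len(data)
    w = (w[0] - 0.1 * g[0], w[1] - 0.1 * g[1])
print(round(w[0], 3), round(w[1], 3))  # 3.0 -1.0, the minimum from the picture

# Stochastic gradient descent: one step per example.
random.seed(0)
w_sgd = (0.0, 0.0)
for _ in range(500):
    for x, y in random.sample(data, len(data)):
        gx = grad(w_sgd, x, y)
        w_sgd = (w_sgd[0] - 0.05 * gx[0], w_sgd[1] - 0.05 * gx[1])
print(w_sgd)  # hovers near (3, -1) rather than settling exactly
```

The SGD iterate keeps bouncing between the examples that want w1 = 2 and w1 = 4, which is the instability trade-off mentioned next.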
And, um, empirically we saw in code how it can be a lot faster, you know. Of course there are cases where it can also be less stable, so in general there's going to be some trade-off here. But by and large, stochastic gradient descent really dominates machine learning applications today, because it's the only way to really scale to large datasets. Okay. Yeah. Is there any other benefit of stochastic gradient descent over gradient descent? Um, so apart from being able to scale up, is there any advantage of stochastic gradient descent? Um, besides computation, another advantage might be that your data might be coming in an online fashion, like over time, and you want to update kind of on the fly. Um, so there are cases where you don't actually have all the data at once. Okay. So that was a quick overview of the general concepts. Um, now to set the stage for what we're gonna do in this lecture, I wanna ask you guys the following question. Can we obtain decision boundaries- remember, a decision boundary is the line or the curve that separates the region of the space which is classified positively versus negatively- can we obtain decision boundaries which are circles by using linear classifiers? Okay. So, um, does that make sense? So we want to get something like this, where now we're going into Phi1 of x, Phi2 of x, and we want decision boundaries that look like this, where you classify maybe these as positive and these as negative. Okay. Is that possible? Yeah. If you map- if you take a square of those inputs, then you get something to be linear. You [inaudible]. [OVERLAPPING] Yeah, yeah. Okay. [LAUGHTER] So you're saying yes? Yeah. Okay. [inaudible]. Okay. Uh, okay. Well, there's a punchline there.
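The student's answer (square the inputs) can be made concrete: with features phi(x) = [1, x1^2, x2^2], a classifier that is linear in the features carves out a circle in the original input space. A tiny sketch with hand-picked weights, not learned ones:

```python
def phi(x1, x2):
    # Quadratic feature map: the classifier is still linear in phi.
    return [1.0, x1 ** 2, x2 ** 2]

# Hand-picked weights: score = 1 - x1^2 - x2^2,
# positive inside the unit circle, negative outside it.
w = [1.0, -1.0, -1.0]

def predict(x1, x2):
    score = sum(wi * fi for wi, fi in zip(w, phi(x1, x2)))
    return 1 if score >= 0 else -1

print(predict(0.5, 0.5))  # 1 (inside the circle)
print(predict(2.0, 0.0))  # -1 (outside)
```

The decision boundary x1^2 + x2^2 = 1 is a circle in x-space, even though it is a plane in phi-space.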
Um, so it turns out that you can actually do this, which maybe on the surface seems kind of surprising, right? Because we're talking about linear classifiers. But as we'll see, it really depends on what you mean by linear classifiers, and hopefully that will become clear soon. Okay. So we're gonna start by talking about features, which will let us answer this question. Then we're gonna shift gears a little bit and talk about neural networks, which are, in some sense, an automatic way to learn features. And we're gonna show you how to train neural networks using backpropagation, hopefully without tears. And then talk about nearest neighbors, which is another way to get really expressive models, and which is gonna be much simpler in a way. Okay. So recall that we have the score. The score is a dot product between the weight vector and the feature vector, and the score drives prediction. So if you're doing regression, you just output the score as the number. If you're doing classification- binary classification- then you output the sign of the score. And so far we've focused on learning, which is how you choose the weight vector based on a bunch of data and how you optimize for that. So now what we're gonna do is focus on Phi of x, and talk about how you choose these features in the first place. And feature extraction is actually a really critical, important part of a machine learning pipeline which often gets neglected, because when you take a class you're told, okay, well, there's some feature vector, and then let's focus on all of these algorithms. But whenever you go and apply machine learning in the world, feature extraction turns out to be the main bottleneck. And neural nets can mitigate this to some extent, but they don't completely make feature extraction obsolete.
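The score-drives-prediction setup just described can be sketched as follows (an illustrative toy, not the course's official code; the feature names and weights are made up):

```python
# Sketch: linear classifier with score = w . phi(x), features stored
# sparsely as a dict (only nonzero features appear).
def extract_features(x: str) -> dict:
    # Hypothetical feature extractor for strings.
    phi = {"endsWith=" + x[-4:]: 1}
    if len(x) > 10:
        phi["length>10"] = 1
    if "@" in x:
        phi["contains_@"] = 1
    return phi

def score(w: dict, phi: dict) -> float:
    # Dot product over the sparse feature map.
    return sum(w.get(f, 0) * v for f, v in phi.items())

def predict(w: dict, x: str) -> int:
    # Binary classification: output the sign of the score.
    return 1 if score(w, extract_features(x)) >= 0 else -1

w = {"endsWith=.com": 2.0, "contains_@": -0.5}
print(predict(w, "abc@gmail.com"))  # 1
```

For regression you would output the score itself; for binary classification, its sign, as above.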
So recall that a feature extractor takes an input, such as this string, and outputs a set of properties which are useful for prediction. So in this case, it's a set of named feature values, okay. And last time, we didn't really say much about this. We just kind of waved our hands and said, okay, here are some features. So in general, how do you approach this problem- what features do you include? Do you just start making them up, and how many features do you have? We need maybe a better organizational principle here. And in general, feature engineering is gonna be somewhat of an art. So I'm not gonna give you a recipe, but at least some framework for thinking about features. So the first notion is a feature template, and a feature template is, informally, just a group of features that are all computed in the same way. This is a somewhat pedantic terminology point, but one I want you all to be aware of. So a feature template is basically a feature name with holes. For example, length greater than blank. So remember, the concrete feature was length greater than 10. Now, we're gonna say length greater than blank, where blank can be replaced with 10, 9, 8, or any number. It's a template that gives rise to multiple features. Last three characters equals blank, contains character blank- these are all examples of feature templates. So when you go into your project, or wherever, and you describe your features, think about grouping these features in terms of these blanks. Another example is pixel intensity of position. So even if you have what you consider to be, like, a raw input, like an image, right?
There's still implicitly some sort of way to think about it as a feature template, which corresponds to the pixel intensity of position blank comma blank- a feature template that gives rise to a number of features equal to the number of pixels in the image. And this is useful because maybe your input isn't just an image. Maybe it's an image plus some metadata. Then having this kind of language for describing all the features in a unified way is really important for clarity. Okay. So as I alluded to, each feature template maps to a set of features. So by writing last three characters equals blank, I'm implicitly saying, well, I'm going to define a feature for each value of blank, and that feature is gonna be associated with a value which is just the natural evaluation of that feature on the input. Okay. So all of these are 0, except ends with .com is 1. Okay. So in general, each feature template might give rise to many, many features, right? The number of possible three-character endings is the number of characters cubed, which is a large number. So one question is, how do we represent this, right? Sparse vector. Yes, sparse vector. [LAUGHTER]. Yeah, good answer. So mathematically, it's really useful to just think about this as a d-dimensional vector- just d numbers laid out- because that's mathematically convenient. But when you go to actually implement this stuff, you might not represent things that way. In particular, what are the ways you can represent a vector? Well, you can say, I'm going to represent it as an array, which is just this list of numbers that you have. But this is inefficient if you have a huge number of features.
But in the cases where you have sparse features, which means that only very few of the feature values are non-zero, you're better off representing it as a map- in Python, a dictionary- where you specify the feature name as the key and the value is the value of that feature, right? And homework two will basically work in this sparse feature framework. Um, and just a note: especially in NLP, where we have discrete objects, traditionally it's been common to use these sparse feature maps. One thing that has happened with the rise of neural networks is that often you take your inputs and embed them into some sort of fixed-dimensional vector space, and dense feature representations have become more dominant. But sparse features, if you wanna use linear classifiers, are still a good way to go. So it's important to understand this. Okay. So now, instead of storing possibly a lot of features, you just store the key and the value. All right. So this was feature templates. The overall point is that it's an organizational principle. Okay, so now let's switch gears a little bit. So which features or feature templates should you actually write down? To get at that, I wanna introduce another notion which is pretty important, especially if you think about the theory of machine learning, and that's the notion of a hypothesis class. Okay. So remember we have this predictor. For a particular weight vector, that defines a function that maps inputs into some sort of score or prediction. And the hypothesis class is just the set of all predictors that you can get if you vary the weight vector. Okay. So let me give you- we're gonna come back to this slide- let me give you an example here. So suppose you're doing regression- linear regression in particular. So you're in one dimension.
Here is x and here is, I guess, y. So if your feature map is just the identity- it maps x to x- then this notation just means the set of all linear functions like this. The set of functions you get, you can visualize as this, right? So you have one function here, and for every possible value of w1, you have a line with that slope. You also have 0. They all go through the origin. So you have- these are your functions, right? So your hypothesis class F1 here is essentially all lines that go through the origin. Okay. So just think about it: when you write down a feature vector, you're implicitly committing yourself to saying, hey, I want to think about all possible predictors defined by this feature map. Okay. So here's another example. Suppose I define the feature map to be x comma x squared. Okay. So now what are the possible functions I'm gonna get? Does anyone wanna read off this slide what it is? [LAUGHTER] It's gonna be all quadratic functions, right? Okay. So in particular, because we don't have a bias term, it's gonna be all quadratic functions that go through the origin. So let me actually draw another one. [NOISE] So it's gonna be all quadratic functions that go through the origin, which look like this, but could be upside down, and so on- I'm not gonna draw all of them. In particular, it also includes the linear functions, right? Because I can always set w_2 equal to 0 and vary w_1, which means that I also get all the linear functions too, right? So this means that F_2, if you think about the set of functions, is a larger set than F_1- it's more expressive. That's what we mean by expressive: it can represent more things. Okay. So for every feature vector, you should also think about the set of functions that you can get with that feature vector. Okay. So let's- is there a question? Yeah.
When we search for the best set of w's, are- are the more expressive sets harder to optimize over? The question is, are the more expressive sets harder to optimize over? The short answer is: not necessarily. Sure, you have more features, so it's more expensive- yeah, at that level. But the difficulty of optimization depends on a number of different factors. And sometimes adding more features can make it easier to optimize, because it's easier to fit the training data. Okay. So now let's go back to this picture. So on the board are concrete examples of hypothesis classes. Now, let's think about this big blob as the set of all predictors. Any predictor in your wildest dreams is in this set. Okay? And whenever you go and define a feature map, that's going to carve out a much smaller set of functions, right? And then what is learning doing? Learning is choosing a particular element of that function family based on the data. Okay. So this picture shows you the full pipeline of how you're doing machine learning: you first declare structurally a set of functions that you're interested in, and then you say, [NOISE] okay, now, based on data, let me go and search through that set and find the one that is best for me. Okay. So now there are two places where things can go wrong. For feature extraction, maybe you didn't have enough features, so your purple set is too small. Then, no matter how much learning you do, you're just not going to get good accuracy. Right? And conversely, even if you define a nice hypothesis class, if you don't optimize properly, you're not gonna find the element of that hypothesis class that fulfills your goals. Question?
The feature function Phi is extracted from the input, and it's itself a function- how can you assume that your weights will be able to compute that function also? So the question is- so you're defining a function Phi, right? This is fixed. And then learning sets the weights, and together, jointly, they specify a particular function or predictor. So you're saying that if you don't choose Phi appropriately, you're limiting the space of things you'll be able to predict. Yeah. But so I'm wondering- my intuition tells me that the whole point of learning is that, regardless of the Phi that you choose, the actual model should be able to, you know, learn the function Phi that you would have picked. Ah, I see. So the question is, doesn't learning kind of compensate and just figure out the Phi that you would have picked? So the short answer is no. The Phi really is a bottleneck here. For example, if you define Phi to be x, so that's the linear function- linear functions are all you're going to get. Right? So if your data moves around in a sinusoidal way, you're just gonna fit a line through it and you'll get horrible accuracy. And no amount of learning can fix that. The only way to fix that is by changing your feature representation. So does that assume that w is- is a linear model, though? So yes, all of this assumes we're talking about linear predictors here. But of course, the same general idea applies to any sort of function family, including neural nets. The equivalent there would be not just the feature map, but also the neural network architecture. It's a constraint on what kind of things you can express.
So if you have only a two-layer neural network of a fixed size, then there are just some things you can't express. Yeah. Another question. Just to follow on to that, an alternative interpretation of the question: why bother with feature engineering rather than feeding in the raw data and having, like, a neural net? It's still built on linear functions, but it has enough complexity that it can capture non-linear behavior. Yeah. So the question is, why bother doing feature engineering? Haven't neural nets basically solved that? So to some extent, the amount of feature engineering you have to do today is much less. One thing I think is still important in feature engineering is to think about what sources of information you want to use to predict. For example, suppose you want to predict some property of a movie review. Part of the first-order bits is what even goes into that. Does the text go into that? Do you have metadata? Do you have other star ratings? Those are [NOISE] choices you make- I guess there's no such thing as truly raw input, because there's always some code that takes the world and distills it down into something that fits in memory. So you can think about that as feature extraction. Thank you. Yeah. Okay. One last question and then I'll go on. What is the problem with too many features- don't you want your hypothesis class to be big? Is it, like, an overfitting thing? Yeah. So the question is, why don't you just make Phi as large as possible and throw in all the features? And overfitting is one of the main concerns there, which we'll come back to in the next lecture. Okay, great questions.
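The F1 versus F2 picture from a moment ago can be made concrete in a short sketch (illustrative code with made-up weights, not anything from the course materials):

```python
# Sketch: two hypothesis classes. F1 uses phi(x) = [x] (lines through
# the origin); F2 uses phi(x) = [x, x^2] (quadratics through the
# origin). Every member of F1 is also in F2: just set w2 = 0.
def predictor(w, phi):
    return lambda x: sum(wi * p for wi, p in zip(w, phi(x)))

phi1 = lambda x: [x]
phi2 = lambda x: [x, x * x]

f_lin  = predictor([3.0], phi1)        # f(x) = 3x, in F1
f_quad = predictor([3.0, -1.0], phi2)  # f(x) = 3x - x^2, in F2 only
f_same = predictor([3.0, 0.0], phi2)   # F2 reproduces f_lin exactly

print(f_lin(2.0), f_quad(2.0), f_same(2.0))  # 6.0 2.0 6.0
```

This is the sense in which F2 is strictly more expressive than F1: it contains every line through the origin and more.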
Um, so let's actually skip over this. There's another type of feature function you can define, but in the interest of time, I'm going to skip over that. Okay, so now let's come back to this question- I keep on saying linear predictors. So what is linear, right? Remember, the prediction is driven by the score. So here's a question: is this score linear in w? Yes, right? Because a linear function is basically some kind of weighted combination of your inputs. Okay, so is it linear in Phi of x? By symmetry, it should be, because it's just a dot product. So is it linear in x? No- in fact, this question doesn't even make sense, because think about x. X, remember, was a string. Right? It's not even a number. And that's when you know the answer should be no, because there's a type error. Okay, so here's the cool thing now: these predictors can be expressive nonlinear functions with nonlinear decision boundaries in x, in the case where x actually is a real vector. But the score is a linear function of w, okay? So this is cool because there are two perspectives, right? From the point of view of actually doing prediction, you're thinking about how this function operates on x, and you can get all sorts of, you know, crazy functions coming out. We just looked at quadratic functions, which are clearly non-linear, but you can do all sorts of crazy things. But from the point of view of learning, it doesn't care about x. All it sees is Phi of x. In particular, learning asks the question: how does this function depend on w? Right? Because it's tuning w.
And from that perspective, it's a linear function of w, and for reasons I'm not gonna go into, these functions permit efficient learning because the loss function becomes convex- that's all I'll say about that. Okay. So one cool way to visualize what's going on here is going back to our circle example. So remember, we have this two-dimensional classification problem where the true decision boundary is, let's say, a circle. So how do we fit that, and what does it mean for a linear thing? Because when you think linear, it should be a line, right? So here's a kind of a cool graphic. So here are these points inside the circle, and they can't be separated by a line. But the point is, when you look at the feature map, it actually lifts these points into a higher-dimensional space- now I have three features, right? And in this higher-dimensional space, things are linear: I can slice it with a kind of a knife. In that high-dimensional space things get cut, and what that induces in the lower-dimensional space is this circle. Okay. Okay, so hopefully that was a nice visualization that shows how you can actually get nonlinear functions out of essentially linear machinery, right? So the next time someone says, well, you know, linear classifiers are really limited and you really need neural nets- that's technically false, because you can actually get really expressive models out of, er, you know, neural networks- sorry, out of linear models.
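The lifting picture can be sketched numerically (a minimal illustration; the weights here are chosen by hand, not learned, and the feature map is one common choice for this example):

```python
# Sketch: with phi(x) = [x1, x2, x1^2 + x2^2], a circular decision
# boundary in 2D becomes a planar cut in the lifted 3D space.
def phi(x1, x2):
    return [x1, x2, x1 * x1 + x2 * x2]

# score = 1 - (x1^2 + x2^2), which is >= 0 exactly inside (or on)
# the unit circle.
w = [0.0, 0.0, -1.0]
bias = 1.0

def classify(x1, x2):
    score = bias + sum(wi * p for wi, p in zip(w, phi(x1, x2)))
    return 1 if score >= 0 else -1

print(classify(0.2, 0.3))  # 1  (inside the circle)
print(classify(2.0, 0.0))  # -1 (outside the circle)
```

The classifier is linear in the lifted features, yet its decision boundary in the original (x1, x2) plane is a circle.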
The point with neural networks is not necessarily that they're more expressive. They can be more expressive, but the fact is that they have other advantages- for example, the inductive bias that comes with the architectures, and the fact that they are more efficient when you go to more expressive models, and so on. Okay, so to wrap up all these things, I want to do a simple exercise. So here's a task. Imagine you're doing a final project and you want to predict whether, for two consecutive messages in some forum or a chat, the second one is a response to the first. So it's binary classification: the input is two messages, and you're asked to predict whether the second is a response to the first. Okay, so we're gonna go through this exercise of coming up with features, or feature templates, that pick out properties of x that might be useful. And we're gonna assume that we're dealing with linear predictors. Okay. So what are some features that might be useful? Let's start with a few. Okay. So how about time elapsed between the two messages- is that a useful feature or not? How many of you say yes? Okay, so this information is definitely good. One subtle point is that this time elapsed is a single number, and this number is going to go into the score in a linear fashion, okay? So what does that mean? That means, if I double the time, then the contribution to the score is going to multiply by 2, right? So it's kind of like saying that as I increase the time, it becomes linearly more likely that it's, let's say, not a response, or a response.
So this is maybe not what you want, because from that perspective, if the time elapsed is like a year, then that really dominates the score function- it swings the prediction way more than if it were like one minute- which is kind of not what you want. Yeah, question? Can you normalize it? Yeah, so the question is, can you normalize it? So you have to be careful with normalization. If you normalize, let's say, over a span of one year, then there's no difference between, you know, five seconds and one minute, because everything gets squashed down to 0, right? So one way to approach that is to discretize the features. One trick that people often do, if you have a numerical value which you really want to treat in a sensitive way, is to break it up into pieces. So the feature template would look something like: time elapsed is between blah and blah. So you can ask, is it between zero seconds and five seconds, is it between five seconds and a minute, between a minute and an hour, an hour and a year, or something? And after that, it doesn't matter. That lets you put in more domain knowledge about what things to look out for. The difference between, let's say, a year and a year plus two seconds really doesn't matter, right? Whereas the difference between one second and five seconds might be significant. So this is all a long way of saying: if you're using linear classifiers, or even if you're using neural networks, I think it's really important to think about how your raw features are entering the system, and to ask, if I change this feature by scaling it up, does the prediction change in a way that I expect? Yeah, you got a question?
So if we approve that second feature right there, what prevents us from having, let's say, 30 to 35 seconds, 35 to 40 seconds, and so on- what prevents us from needing buckets over the entire range of times? [NOISE] Yeah, so the question is, if you have every possible range, isn't that an infinite number of features? So there are two answers to that. One is that even if you did that, you might still be okay. Think about it as discretizing the space of time elapsed: you're basically saying, for every bucket, I'm going to have a feature. It is true that you have an infinite number of possible features, but at some point you might just cut it off. And if you didn't cut it off and used a sparse feature representation, you wouldn't have to have a pre-set maximum, because remember, most of these features are gonna be zero- the chances of some data point being, like, 10 years are essentially nil. Another answer is that, in general, when you have features that span multiple timescales, you want to space the buckets logarithmically- one to two, two to four, four to eight- so that you have sensitivity at the low end but also cover a large range of magnitudes. Yeah, in the back. Is it possible to learn how to discretize the features- which ranges are most important? The question is, is it possible [NOISE] to learn how to discretize the features? There are definitely more automatic things you can do besides just specifying the spans by hand. At some level, though, you have to input the value in some form- if you input x versus, let's say, log of x, those choices often can make a big difference.
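The bucketing trick being discussed might be sketched like this (an illustration only; the bucket boundaries mirror the spans mentioned earlier and are otherwise arbitrary):

```python
# Sketch: discretizing a numeric feature (time elapsed, in seconds)
# into buckets that span multiple timescales, emitting one sparse
# indicator feature per input.
BOUNDARIES = [0, 5, 60, 3600, 86400]  # 5 s, 1 min, 1 hr, 1 day

def time_bucket_features(seconds: float) -> dict:
    for lo, hi in zip(BOUNDARIES, BOUNDARIES[1:]):
        if lo <= seconds < hi:
            return {f"timeElapsedIn[{lo},{hi})": 1}
    # Catch-all bucket: beyond a day, the exact value stops mattering.
    return {"timeElapsed>=86400": 1}

print(time_bucket_features(30))      # {'timeElapsedIn[5,60)': 1}
print(time_bucket_features(10 ** 9)) # {'timeElapsed>=86400': 1}
```

Each input activates exactly one indicator, so a year versus a year plus two seconds land in the same bucket, while one second versus thirty seconds land in different ones.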
But if you use more expressive models like neural networks, you can mitigate some of this. Yeah. I see the value in changing time elapsed from a number to, like, a Boolean of whether it falls in a range. Why would you wanna retain a numerical value? When would you not wanna discretize it? Yeah, good question. So when would you actually want to not discretize? [NOISE] Essentially, when you expect the scale of that feature to really matter in some sense. So certainly when you think that something behaves linearly, you want to preserve the linearity. Or if you think that it behaves quadratically, then you want to keep the feature but also add a squared term. Okay. I wanna move on- these are all good questions, happy to discuss more offline. So some other features might include: the first message contains blank, where blank is a string. Right. So maybe things like question marks are more indicative of the second message being a response. Second message contains certain words. Two messages both contain a particular word- there are cases where it's not the presence or absence of particular words in the individual messages, but the fact that they both share a common word, that might be useful. And here's another feature: the two messages have some number of common words. This feature is kind of interesting because- when I say feature here, I actually mean feature template- for the template of both messages containing a particular word, there are many, many features, one for possibly any word.
And this again leads to cases where you might have a lot of sparsity, and you might not have enough data to fit all the features. Whereas this one is very compact: it says I just have to look at the number of overlapping words. So the two messages might contain a word that I've never seen before, but I know it's the same word, and I can recognize that pattern. So there's quite a bit you can do to play around with features that capture your intuitions about what might be relevant to your task. Question. Yeah. We have a lot of these sparse features, like the word features here. Is that when we want to do, like, dimensionality reduction, to knock out some of those many, many features? So the question is, when you have a lot of sparse features, do you wanna do dimensionality reduction? Not necessarily. In terms of computation, having sparse features doesn't necessarily mean that it's gonna be really slow, because there are efficient ways of representing sparse features. In terms of expressivity, in a lot of NLP applications you actually do want a lot of features- you can have a lot more features than you might think you can handle- because the first-order bit is just to be expressive enough to even fit the data. Yeah. Okay. Let me move on, since I'm running short on time. Okay. So, summary so far: we're looking at features. We can define these feature templates, which organize features in a meaningful way. And then we talked about hypothesis classes, which are defined by features, and this defines what is possible from learning. And all of this is in the context of linear classifiers, which incidentally can actually produce these nice non-linear decision boundaries.
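The contrast between the sparse per-word template and the compact overlap-count feature can be sketched like this (illustrative code; whitespace tokenization and the feature names are simplifying assumptions):

```python
# Sketch: two feature templates for a pair of messages.
# Template 1: "both messages contain word ___" (one feature per word).
def both_contain_features(m1: str, m2: str) -> dict:
    common = set(m1.split()) & set(m2.split())
    return {"bothContain=" + word: 1 for word in common}

# Template 2: a single compact count feature.
def num_common_words_feature(m1: str, m2: str) -> dict:
    common = set(m1.split()) & set(m2.split())
    return {"numCommonWords": len(common)}

m1, m2 = "is the demo ready", "the demo is broken"
print(both_contain_features(m1, m2))     # three indicator features
print(num_common_words_feature(m1, m2))  # {'numCommonWords': 3}
```

The first template can recognize which specific shared words matter but needs data for each one; the second generalizes to never-before-seen words with a single weight.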
[NOISE] So at this point you actually have enough tools to do a lot. But in the next section, I wanna talk about neural networks, because these are even more expressive models which can be more powerful. One thing I often recommend is that when you're given a problem, always try the simplest thing first. I would always try a linear classifier and just see where it gets you, because sometimes you'd be surprised at how far you can get with linear classifiers. And then go and increase the complexity as you need it. I know there's sometimes this temptation to bring out the fancy new hammer, but sometimes keeping it simple is really, really good. Okay. So, neural nets. There are a couple of ways of motivating this. One motivation comes from the brain. I'm going to use a slightly different motivation, which comes from this idea of decomposing a problem into parts. So this is a somewhat contrived example, but hopefully it'll allow us to build up the intuitions for what's going on in a neural network. Okay. So suppose I'm building some sort of system to detect whether two cars are gonna collide. The way it works is: I have this car at position x_1 and it's driving this way, and then I have another car at position x_2 and it's driving this way. And I want to determine whether it's safe, which is positive, or if it's gonna collide. And let's suppose for simplicity that the true function is as follows. Okay. So it's just measuring whether the distance is at least 1 apart.
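In code, the assumed true function is just a one-liner (a sketch; the label convention of +1 for safe follows the lecture):

```python
# Sketch: the "true" collision function. Safe (+1) iff the two car
# positions are at least 1 apart.
def true_label(x1, x2):
    return 1 if abs(x1 - x2) >= 1 else -1

print(true_label(1, 3))    # 1  (safe)
print(true_label(1, 0.5))  # -1 (too close)
```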
Now, this is a little bit like what we did in the last lecture, where we supposed there was a true function and then saw if learning could recover it- where in practice, obviously, we don't know the true function, but this is for pedagogical purposes. Okay. So just to make sure we understand what function we're talking about: if x_1 is 1 and x_2 is 3, kind of like that on the board, then it's plus 1. So this is like driving in the US. This is like driving in the UK, and that's fine too. But if you're too close together, then that's bad news. Okay? All right. So let's think about decomposing the problem, right? Because if you look at this, it could be a kind of complicated function, but let's try to break it down into linear functions. Because at the end of the day, neural networks are just a bunch of linear functions which are stitched together with some nonlinearities- there are linear components that are critical to neural nets. Okay. So one subproblem is detecting if car 1 is to the far right of car 2: x_1 minus x_2 is greater than or equal to 1. Another subproblem is testing whether car 2 is to the far right of car 1. And then you can put these together by saying: if at least one of them is 1, then I'm going to predict safe; otherwise, I will predict not safe. Okay. So here are concrete examples. For 1, 3: car 2 is far right of car 1, so that's a 1. You add these up, take the sign, that's plus 1. In the opposite direction, it's still fine. And in this case, both h_1 and h_2 are 0, so that's bad news. Okay. So this is just taking this expression, which is the true function, and writing it in a more modular way, where you have different pieces corresponding to different computations. Okay.
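The decomposition just described, written out directly (a minimal sketch of the lecture's subproblems):

```python
# Sketch: decomposing the collision check into two subproblems and a
# combination step.
def h1(x1, x2):
    # Subproblem 1: car 1 is far right of car 2.
    return 1 if x1 - x2 >= 1 else 0

def h2(x1, x2):
    # Subproblem 2: car 2 is far right of car 1.
    return 1 if x2 - x1 >= 1 else 0

def predict(x1, x2):
    # Safe (+1) if at least one subproblem fires.
    return 1 if h1(x1, x2) + h2(x1, x2) >= 1 else -1

print(predict(1, 3))    # 1  (h2 fires: "driving in the US")
print(predict(3, 1))    # 1  (h1 fires: "driving in the UK")
print(predict(1, 0.5))  # -1 (neither fires: too close)
```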
So now, we could just write this down, obviously, to solve this problem, but that's because we already knew what the right answer was. Suppose we didn't know what the true function is and we just had data. So we don't actually know what these functions are. Can we learn these functions automatically? So what I'm gonna do is define a feature vector, phi of x, which is gonna be [1, x_1, x_2]. Okay. And then I'm going to rewrite this intermediate subproblem as follows. So x_1 minus x_2 greater than or equal to 1 is going to be represented as the indicator of the dot product v_1 dot phi of x being greater than or equal to 0, where v_1 is [-1, +1, -1]. You can pause for a second and verify that this is x_1 minus x_2 greater than or equal to 1. Okay. So this is just another way of writing what we wanted in terms of this dot product, and you can see how this is maybe moving more towards something that looks more general. Yeah. Why is that 1 there? So the question is, why is there this 1 here? This 1 is typically known as a bias term, which allows you to threshold not just on 0, but on any arbitrary number. In the linear classifiers that I've talked about, I've kind of swept this under the rug. Generally, you always have a bias term that allows you to modulate how likely you are to predict 1 versus minus 1. Okay. You can also do this for h_2. It's the same thing, but switching the roles of x_1 and x_2. And now the final sign prediction, you can write as follows, where these are just weights on h_1 and h_2. Okay? So now, here is the punchline: for a neural network, we're just going to leave v_1, v_2, and w as unknown quantities that we're going to try to fit through training. Right.
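The dot-product encoding just described can be checked in a few lines. Here `v2` is my guess at the symmetric weight vector for the second subproblem; the rest follows the board.

```python
def phi(x1, x2):
    return [1, x1, x2]          # the leading 1 is the bias feature

v1 = [-1, +1, -1]               # encodes x1 - x2 >= 1
v2 = [-1, -1, +1]               # symmetric: encodes x2 - x1 >= 1

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def h(v, x1, x2):
    # indicator of v . phi(x) >= 0
    return 1 if dot(v, phi(x1, x2)) >= 0 else 0
```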
We motivated this problem by saying, okay, in this case, there is some choice of v_1, v_2, w that works. But now we're kind of generalizing. If we didn't know these quantities, we can just leave them as variables and we can actually still fit them- fit these parameters. Okay. So, um, before we were just tuning w, and now we're tuning both V and w. V specifies the choice of the hidden problems that we're interested in and w governs how do we take the results of the hidden problems and, uh, come to a final prediction. Okay. So there's one problem here, which is that if you look at the gradient of h1 with respect to v1, um, it happens to be 0, okay? So if you look at, um, the, uh, horizontal axis is v1 dot Phi of x and the vertical axis is h1, um, that function, um, is- looks like the step function, right? Because indicator function of some quantity greater than or equal to 0. It's 1 over here, 0 over here. Um, and remember, we don't like 0 gradients because SGD doesn't work. So the solution, um, here is to, um, take some sandpaper, um, and you, you know, sand out this function to smooth it out and, uh, then you get something that is, um, you know, differentiable. So, uh, the logistic function is this function which is, um, a smoothed out version of this, which, um, rises. So it doesn't hit 1 or 0 ever, but it becomes extremely close. But it kind of, um, goes up in the, in the middle. And you could think about this as, um, a differentiable, um, or I, I guess a smooth version of, uh, the step function, okay? So it kinda behaves and looks like the step function. It serves kind of the same intuition that you're trying to test whether some quantity is greater than 0, but it doesn't have 0 gradients anywhere, okay? And you can double-check. If you take the derivative, then this is actually- has this kind of really interesting nice form, which is the value of the function times 1 minus the value of the function. 
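The logistic function and the derivative identity above can be sketched and checked numerically. This is standard math; the test point and epsilon are arbitrary choices of mine.

```python
import math

def logistic(z):
    return 1 / (1 + math.exp(-z))

def logistic_grad(z):
    # the closed form from the lecture: sigma(z) * (1 - sigma(z))
    s = logistic(z)
    return s * (1 - s)

# central-difference check of the derivative at an arbitrary point
z, eps = 0.7, 1e-6
numeric = (logistic(z + eps) - logistic(z - eps)) / (2 * eps)
```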
And the value of the function never hits 0, so this quantity never hits 0, okay? So, so now we can define, uh, neural nets in contrast to linear functions. So remember, linear functions, um, we can visualize it as, um, inputs go in, um, and each of the inputs gets, um, weighted by some, uh, w and you get the score, okay? [NOISE] So this is what a linear- what a linear function looks like. Now, neural networks with one hidden layer and two hidden units, 1, 2, looks something like this where you have, um, these intermediate hidden units, which are the sigmoid function, um, applied or logi- logistic function in this case in, uh, to be concrete, um, applied to, um, this wave vector Vj times Phi of x. So h1 is, uh, going to be taking the input and multiplying it by a vector of- and you get some number here, and then you send it through this, um, logistic function to get some number. And then finally you take the output of h1 and h2 and you, uh, take the dot product with respect to w, and then you get the final score, okay? So again, the intuition is that neural nets are trying to break down the problem into a set of, you know, subproblems where you- the subproblems are the kind of the, the result of these intermediate computations. [NOISE] And you can think about these as like, you know, h1 is really kind of the output of a mini linear classifier. h2 is the output of a mini linear classifier. And then you're taking those outputs and then you're, you know, sticking them through another linear classifier and getting the score. So this is what I mean by, you know, at the end of the day, it's kind of linear classifiers packaged up and strung together. And their expressive power comes from, from the kind of the composition. Um, yeah, question. Phi h sub j when there's like multiple Phi, like, how do you combine them? Uh, the question, how do you get h sub j when there's multiple Phis? There's only one Phi of x. Oh, so this is, this is the first component of Phi of x. 
So this vector, this, this is a three-dimensional vector, which is Phi of x. And it has three components. Yeah. Yeah. [inaudible] uh, isn't that effectively features, kind of? Yeah. Then they're like, they're like the- like, I mean, some kind of function of the original features that you've put in and they make the new features that are better than the ones before? Yeah. [NOISE] Yeah. So that's my- kind of my next point, which is that, um, one way you can think about it is that the hjs are actually just, you know, features which are learned automatically from data as opposed to having, a fixed, uh, set of your features Phi, right? Because at this layer, w always sees these, you know, hs which are coming through which look like, you know, uh, features. Um, and for deeper neural networks, you kind of just keep on stacking this. So, you know, this output of one set of classifiers becomes the features to the next layer and then the output of that class sort of becomes the features to the next layer, and so on. Um, and the intuition for, you know, deeper networks, um, is that, you know, as you proceed you can, uh, derive more abstract, you know, features. For example, images. You start with pixels and then you find kind of the edges, and then you define kind of object parts, and then now you define kind of, uh, things which are closer to the actual classification problem. Yeah. [NOISE] What if you wanted h2 to develop the exact same value, like, do you have to have a bias to start with? Ah, yeah. That's a good question. So why don't h1 and h2 do, uh, basically end up in the same place because, you know, because of symmetry? Um, if you're not careful that will happen. So if you initialize all your weights to 0 and, uh, or initialize these weights the same way then, um, they will be kinda moving in locks- lockstep. Um, so what is typically done is you randomly initialize. So they're, kinda, you break the symmetry. 
And then what the network is going to do is it's trying to, um, use- learn auto- it kind of automatically learns these subproblems to, uh, be kind of complementary because you're doing this joint learning. [NOISE] Yeah. Final question then. How do you choose the Sigma function? [NOISE] Uh, how do I choose the Sigma function? Um, so this is- so in general, sigmoid functions are these or activation functions are these nonlinear functions. So the important thing, uh, it's, it's a nonlinear function. Um, I chose this particular logistic function because it's kind of the classic, um, neural net and it looks like the step function, which is kind of, uh, takes the score and outputs, uh, a classification result. I should, you know, responsibly note that, um, these are, um, maybe, uh, less in style than they used to be. And the, the cool thing to do now is to use, uh, what is called a ReLU or a rectified linear, which looks like this. Um, and you might ask, like, why this one? Um, well, there's no one reason, but, um, this, um, this function has less of a kind of this, um, gradient going to zero problem. It's also simpler because it doesn't require exponentials. Um, but there's, um, um, I'm gonna just leave it at that. [NOISE] [BACKGROUND] What- the benefit of this function is, uh, pedagogical reasons and it's a little bit of a throwback too. [NOISE] Um, okay. [NOISE] Yeah, if you read the notes in the lecture slides, there's more details on, like, why you would like change, choose one versus another. Okay, so now we're kind of ready to do neural net learning, right? 
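The one-hidden-layer network described above, score = w . sigma(V phi(x)), can be sketched as a forward pass. `predict_score` is my name for it, and the logistic could be swapped for a ReLU as discussed.

```python
import math

def logistic(z):
    return 1 / (1 + math.exp(-z))

def predict_score(V, w, phi_x):
    # hidden units: h_j = logistic(v_j . phi(x))
    h = [logistic(sum(v_k * p_k for v_k, p_k in zip(v_j, phi_x)))
         for v_j in V]
    # output: dot product of the last-layer weights with the activations
    return sum(w_j * h_j for w_j, h_j in zip(w, h))
```

With all-zero hidden weights every activation is logistic(0) = 0.5, so with w = [1, 1] the score is 1.0.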
So- okay, remember we have this optimization problem, it's, the training loss now depends on both V and w, and a training loss remember, is averaged over the losses of individual examples, uh, the loss of the individual example, let's say we're doing regression, is the square difference between y and the function value, and remember the function value is the summation over the- the weights at the last layer, times the activations of the hidden layer, and- and that's the basic idea, okay? And now all I have to do is compute this gradient. Um, so you look at this and you say okay, well, if you get- have, enough scratch paper, you can probably like, work it out. Um, I'm gonna show you, a different way to do this, um, without grinding through the chain rule. Um, so this is going to be based on the computation graph, which will give you, um, insight- more additional insight into the kind of the structure of computations, and visualize what it means, what does a gradient kind of mean in some sense? And it also happens that these computation graphs, is really at the foundation of all of these modern deep learning frameworks like TensorFlow and PyTorch. So, um, this is a real thing. Um, it turns out that we've taught this it, many people still kinda prefer, uh, to grind out the amount. I can't really tell why, except for maybe you're more familiar with that, and so I would encourage everyone to kind of at least try to, um, think about the computation graph as a way to understand your gradients, even though initially it might not be faster. And it's not to say that you always have to draw a graph, um, to compute gradients, but doing a few times might give you additional insight that you wouldn't otherwise get. Okay, so here we go. Um, so functions, we can think about them as just boxes, right? The boxes you have some inputs going in, and then you get some output. That's all a function is, okay? 
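The squared-loss objective just described can be sketched for the top-layer weights. I only show the gradient with respect to w here; the gradient for V follows by the same chain rule, which is exactly what the computation-graph view makes mechanical.

```python
import math

def logistic(z):
    return 1 / (1 + math.exp(-z))

def loss_and_grad_w(V, w, phi_x, y):
    # forward pass: hidden activations, then the score
    h = [logistic(sum(v_k * p_k for v_k, p_k in zip(v_j, phi_x))) for v_j in V]
    score = sum(w_j * h_j for w_j, h_j in zip(w, h))
    loss = (score - y) ** 2
    # chain rule for the top layer: d(loss)/d(w_j) = 2 * (score - y) * h_j
    grad_w = [2 * (score - y) * h_j for h_j in h]
    return loss, grad_w
```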
And partial derivatives, or gradients, ask the following question: how much does the output change if the input changes a little bit? Okay? So for example, if we have this function that just computes 2 times in1 plus in2 times in3, you take input one and add a little epsilon, like 0.001, and you read out the output and ask, well, what happens to the output? Well, in this case the output changed by 2 epsilon, additively. Okay? So then you conclude that the gradient of this function with respect to in1 is what? [NOISE]. 2. 2, right? Because the gradient is kind of the amplification: if I put epsilon in, I get 2 epsilon out, so the gradient, or partial derivative, is 2. Okay, let's do this one. If I add epsilon to in2, then simple algebra shows the output changes by in3 times epsilon, so what's the partial with respect to in2? In3, right? Okay, good. So you could have done basic calculus and gotten that, but I really want to stress this interpretation of perturbing inputs and witnessing the output, because I think that's a useful interpretation. Okay, so now, not all functions are made out of building blocks, but most of the functions that we're interested in, in this class, are going to be made out of these five pieces, okay? And each of these pieces is a function; it has inputs a and b, and you pump these things in and you get some output. There's plus, minus, times, max, and the logistic function. Okay, so on these edges, I'm going to write down in green the partial derivative with respect to the input that's going into that function. Okay? So let's do this. If I have the function a plus b, the partial derivative with respect to a is 1, and the partial derivative with respect to b is 1, okay?
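The perturbation interpretation above can be checked numerically; the epsilon and the test point are arbitrary choices of mine.

```python
def f(in1, in2, in3):
    return 2 * in1 + in2 * in3

eps = 1e-6
base = f(1.0, 2.0, 3.0)
# perturb in1: the output moves by about 2*eps, so the partial is 2
d_in1 = (f(1.0 + eps, 2.0, 3.0) - base) / eps
# perturb in2: the output moves by about in3*eps, so the partial is in3 = 3
d_in2 = (f(1.0, 2.0 + eps, 3.0) - base) / eps
```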
And if you have minus, then it's 1 and minus 1, um, if you have times, then the partial is b and a, okay? Everyone follow so far, okay? Okay so max, uh, what is this? This is maybe a little bit, you know, trickier. Um, so remember we kind of experienced the max last time. So when the max, um, example you have, uh, a formula, just refresh. Uh, uh, so- so remember our last time we had the- we saw the max in the context of, um, uh, the- the hinge loss, right? So you have the max of these two functions, which is this, which means that, um, you know, let's say one is- one is a and the other is b. Um, so if a is greater than b, then the, um, then we need to take the derivative with, uh- sorry, then, uh- Okay, let me do it this way. Okay, um, ig- ignore that thing on the board. So I just have max a of b, okay? So suppose a is, uh, 7 and b is, uh, 3. Uh, okay, so max, uh, a and b and let's say this is 7 and this is 3, so that means a is greater than b. So now, if I change, um, a by a little bit, then that change is going to be reflected by an output of a max function, right? Because this, uh- this region is small and it doesn't matter. And, um, in this case, if I change b by a little bit, then does the output change? No, because like, you know, 3.1, 2.9 is all, the output doesn't change, so the gradient is going to be 0 there. So the max function is partial derivatives, look like this. So if a is greater than b, then this is going to be a 1, if a is less than b, this is going to be a 0 and you know, conversely over here, if b is greater than a, then this is going to be a 1, if b is less than a, then this is going to be a 0. Okay? So the partial of maximum, there's always 1 or 0 depending on this particular co- you know, condition. Okay, and then the logistic function, um, this is just a fact you can derive it in your, you know, free time but I had on a previous slide. It's just like the sigmoid, um, uh, logistic function, times 1, minus the logistic function. 
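The five building blocks and their local partials can be tabulated as small functions. This encoding is mine: each two-input block returns its output together with the partials with respect to its inputs.

```python
import math

def logistic(z):
    return 1 / (1 + math.exp(-z))

# each two-input block returns (output, d_out/d_a, d_out/d_b)
def add(a, b):
    return a + b, 1.0, 1.0

def sub(a, b):
    return a - b, 1.0, -1.0

def mul(a, b):
    return a * b, b, a

def max2(a, b):
    # partials are indicators of which argument achieves the max
    return (a, 1.0, 0.0) if a > b else (b, 0.0, 1.0)

# the one-input logistic block returns (output, d_out/d_a)
def sigma(a):
    s = logistic(a)
    return s, s * (1 - s)
```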
Okay so now you have these building blocks, now you can compose and you can build castles out of them. It turns out like all- basically all functions that you see in, you know, you know, deep learning are just basically bail- built- built out of these blocks. Um, and how do you compose things? Um, there's this nice, uh, thing called, the chain rule, which says that, "If you think about input going to one function and that output going into input in a new function, then the partial derivative with respect to the input of the output is just the product of the partial derivative." This is just the chain rule, right? And you can think about as like- you know, think about amplification. So this function amplifies by two times, and this amplifi- this function amplifies by 5, then total amplification is going to be 2 times 5, okay? All right, so now let's take an example, we're going to do, uh, binary classification with the hinge loss, um, just as a warm-up, um, and I'm going to draw this computation graph, and then compute the partial derivative with respect to w. Okay, so what is this graph? Um, so I have w times Phi of X, that's a score, times y, that's a margin, 1 minus margin, um, max of 1 minus margin 0 is a loss, okay? So now for every edge I can draw the partial, uh, derivative, okay? So here remember the partial derivative here is, uh, left-hand-side greater than d or the right- the right branch. So 1 minus margin greater than 0. Um, for minus, this is a minus 1. For a times, this is going to be whatever is over here. Uh, for this times, it's going to be whatever is over here. And by the chain rule, if you multiply what's on all the edges, then you get the gradient of the loss with respect to w. Okay. So this is kind of a graphical way of doing what you, you know, probably wha- what I did last time, which is, um, if the margin is, um, uh, less than- greater than 1, then it's- everything is 0. 
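Multiplying the edge partials along the hinge-loss graph gives the gradient in closed form. A sketch; `hinge_grad` is my name for it.

```python
def hinge_grad(w, phi_x, y):
    # margin = (w . phi(x)) * y; loss = max(1 - margin, 0)
    margin = sum(w_i * p_i for w_i, p_i in zip(w, phi_x)) * y
    if 1 - margin > 0:
        # product of the edge partials: 1[1 - margin > 0] * (-1) * y * phi(x)
        return [-y * p_i for p_i in phi_x]
    return [0.0 for _ in phi_x]
```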
And if the margin's less than 1 then I'd perform this, uh, particular update. Okay? So in the interest of time, um, I'm not going to do it for the simple neural network. Uh, I will do this in section. But, you know, at a high level, you basically do the same thing. You multiply all the, you know, blue edge, uh, the edges and you get the- the, uh, partial derivatives. Okay. So- so now, you know, we've kind of done everything kind of manually. I wanted to kind of systematized this and talk about an algorithm called back-propagation, uh, that, um, allows you to compute gradients for arbitrary computation graph. That means, any kind of, uh, function that you can build out of these building blocks, you can actually just get the derivatives. So, you know, one nice thing about these packages like PyTorch or TensorFlow is that, you actually don't have to compute the derivatives on your own. It used to be the case that, you know, uh, before these, people would have to crank- implement this derivatives by- by, um, hand, which is really tedious and error prone. And part of why it's been so easy to kind of develop new models is that all that's done for you automatically. Okay. So back-propagation is gonna compute two types of values; a forward value and a backward value. So f_i for every, um, node I is the simply the value of that expression tree. And, um, the backward value, g_i,is going to be the partial derivative with respect to output of, uh, that- the value at that node. Okay? So for example, f_i here is gonna be, um, w_1 times, uh, um, Sigma v_1 times, uh, phi of x. And g of that node is going to be, uh, the, basically the product of all these edges. Basically, how much does this node change the output at the fin- uh, at- at the very top. Okay. So the algorithm itself is- is, you know, quite straight forward. There is a forward pass which computes all the f_i's, and then there's a backward pass that computes all the g_i's. 
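The forward and backward values can be sketched as a tiny expression graph with reverse-mode accumulation. This is a toy version of what frameworks like PyTorch and TensorFlow do, restricted to add and multiply nodes; the recursive backward walks every path, which is fine for small graphs.

```python
# each node stores its forward value f and, after the backward pass,
# g = d(output)/d(node)
class Node:
    def __init__(self, value, parents=()):
        self.f = value          # forward value f_i
        self.g = 0.0            # backward value g_i (filled in by backward)
        self.parents = parents  # pairs of (input_node, local_partial)

def const(x):
    return Node(x)

def add(a, b):
    return Node(a.f + b.f, [(a, 1.0), (b, 1.0)])

def mul(a, b):
    return Node(a.f * b.f, [(a, b.f), (b, a.f)])

def backward(node, g=1.0):
    # accumulate d(output)/d(node) along every path (chain rule)
    node.g += g
    for parent, local in node.parents:
        backward(parent, g * local)

# example: out = x*y + x, so d(out)/dx = y + 1 and d(out)/dy = x
x, y = const(3.0), const(4.0)
out = add(mul(x, y), x)
backward(out)
```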
So in the forward pass, you start from the leaves and you go to the root, and you compute each of these values recursively, where the computation depends on the sub-expressions. And in the backward pass, you similarly have a recurrence: the g_i of a particular node is equal to the g_i of its parent times whatever is on this edge. Okay? So you take a forward pass, you fill in all the f_i's, and then you take a backward pass, and you fill in all the g_i's that you care about. Okay? All right. So section will go through this in detail. I realize this might have been a little bit quick. One quick note about optimization: now you have all the tools, so you can run SGD, which doesn't really care what the function is. It's just a function. You have it, you can compute the gradient, that's all you need. But one important thing to note is that just because you can compute a gradient doesn't mean you can optimize the function. For a linear function, it turns out that if you define these loss functions on top, you get convex functions. Convex functions are these functions that you could hold in your hand, and they have one global minimum. And so if you think about SGD, it's going downhill. You converge to the global minimum and you solve the problem. Whereas for neural nets, it turns out that the loss functions are non-convex, which means that if you try to go downhill, you might get stuck in local optima. And in general, optimization of neural nets is hard. In practice, people somehow manage to do it anyway and it works. There's a gap between theory and practice which is an active area of research. Okay. So in one minute, I have to do nearest neighbors.
[BACKGROUND] Um, it will actually be fine because nearest neighbors is really simple, so you can do it in one minute. So here it goes. Um, so let's throw away everything we knew about linear classifiers in neural nets. Here's the algorithm. You're training as you store your training examples. That's it. And then, the predictor of a particular example that you get is you're gonna go through all the training examples and find the one which is closest- has input which is closest to your- uh, your, um, input x prime. And then you're just gonna train- you're gonna return, um, y. Okay? So, um, and the intuition here is that similar examples- it's similar inputs should get similar outputs. Okay? So here's an, uh, pictorial example. So suppose we're in two dimensions and you're doing classification and [NOISE] you have, a plus over here. Um, let's do this plus and you have, um, you know, [NOISE] a minus here. Okay? So if you are asking what is the pro- uh, label assigned to that point, it should be plus because this is closer. Um, this should be minus. This region should be minus. This should be plus. And, [NOISE] you know, one kind of cool thing is that is, where's the decision boundary? So if you look at the point that is equidistant from these, and draw perpendicular, um, that's the decision boundary there, um, same thing over here, um, and, uh, so you have basically carved out this region where this [NOISE] is minus and, [NOISE] um, everything here is [NOISE], you know, plus. Okay? [NOISE] Um, in general, this is, um, what I've drawn is an instance of a Voronoi diagram which if you're given a bunch of points, um, the defined regions of points which are closest to that point. And everything in a particular region like this yellow region is assigned the same label as, um, this point here. And this- this is, um, what is called a non-parametric model which means that, the number- it doesn't mean that there's no parameters. 
It means that the number of parameters is not fixed. The more points you have, the more parameters; each point kind of has its own parameter. So you can actually fit really expressive models using this. It's very simple, but it's computationally expensive because you have to store your entire set of training examples. Okay. So we looked at three different models, and, you know, there's a saying that in school there are three things, study, sleep, and party, and you can only pick two of them. Well, for learning, it's kind of the same. A method can be fast to predict, like linear models and neural nets. It can be easy to learn, like linear models and nearest neighbors. Or it can be powerful, like neural networks and nearest neighbors. But there's always some sort of compromise, and exactly what method you choose will depend on what you care about. Okay. See you next time.
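The nearest-neighbors algorithm from the end of the lecture fits in a few lines. A sketch of 1-nearest-neighbor with squared Euclidean distance; the function names are mine.

```python
def train(examples):
    # "training" just stores the examples
    return list(examples)

def predict(stored, x_new):
    # return the label of the stored input closest to x_new
    def dist2(a, b):
        return sum((a_i - b_i) ** 2 for a_i, b_i in zip(a, b))
    x_near, y_near = min(stored, key=lambda ex: dist2(ex[0], x_new))
    return y_near

data = train([((0.0, 0.0), -1), ((2.0, 2.0), +1)])
```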
Stanford CS221: Artificial Intelligence, Principles and Techniques (Autumn 2019). Logic 2: First-order Logic.

Okay, let's get started. So before I get into logic, a few announcements. The exam is tomorrow. Remember that. Next week is Thanksgiving break, so we won't have any classes. There are no more sections. And after you come back from Thanksgiving break, on the Monday there's going to be a poster session from 2:30 to 5:30. There are more details on the website and we'll post more details on Piazza as well. And then finally, the day after, there is a logic homework due. So that's pretty much it, aside from the final report, of things that you should keep track of in this class. Um, I want to take a few minutes to talk about CodaLab worksheets. This is a platform that we've been developing in our group to help people do research in a more efficient and reproducible way. And the thing that's relevant for 221 is that you'll get an opportunity to earn extra credit by using CodaLab worksheets, and it also provides additional compute if you're running low on that. I want to give a quick demo to give you an idea of how this works. So if you go over to worksheets.codalab.org, you can register for an account. I'm going to demo a newer interface than what you'll actually see on the website, just because that's what's going to be rolled out soon. So let's create a worksheet, cs221-demo. A worksheet is like a Jupyter notebook, if you're familiar with that. And you can do things like write up text. So I'm going to run some sentiment classification. Let me try to at least spell this correctly. Let's suppose the title is CS221 Final Project. Okay, and then you can upload code or data. So I'm going to go ahead and upload the sentiment dataset. Hopefully this sounds familiar to some of you.
Um, and then I'm also gonna upload this textclass.py which is a source code. So each of these, uh, resources, data, or code is called a bundle in CodaLab and you can look at the contents of this bond or you can download it and- and so on. Um, it has a unique ID which specifies forever the precise version of this asset. Um, and now the- the interesting thing you can do with this right now is you can run commands. So CodaLab is pretty flexible. You can run basically any command you want. Um, you specify the dependencies, um, that this command will dep- need to rely on. And then you can type in whatever textclass.py, train polarity.train, test polarity.test. Um, and then you can confirm. You can also see over here you can specify how much resources you want whether you want GPUs or you need to access the network, um, and so on. So this goes and creates, um, a Docker image that's actually running, or a Docker container that's actually running this command and you can, uh, visualize the standard output in kind of real time as the command is- is running. Um, and you can see the files that are generated. Um, so for example one of these files is just a JSON file that has the test error in it. So suppose you wanted to, um, visualize your experiments a little bit better because this is kind of just the default information and how much, how big the bundle is and so on. You can, um, go- this is a little bit more advanced but I want to show you how this works. You can define custom, um, schemas. So if you define a schema which is called run, you add, um, just some fields, you can actually specify test error as a custom field. And you say go to stats.json, read it out. And- and then now you use that table to, um, the schema to define this table. Um, you can see this is the test error. Let me make this a little bit nicer and format it to three decimal places. Okay and then you can go and, um, you can modify this command. 
Um, and as you say rerun maybe I wanted to try some other parameters. This eta is, um, the step size. Let's try some more. You can rerun this 0.2 and so on. So you can fire a bunch of jobs, um, and you can kind of monitor them. So this one's running, this one's created and you can monitor kind of various statistics that you want. So this is generally a good way to, um, just launch jobs and kind of you know forget about it and keep- keep track of all these things. Um, so then you can say, um, larger step sizes are- are hurt accuracy or something. So the idea behind a worksheet like in Jupyter is that you document your experiments as you go along. And so every asset, data, code and bundles a- and the experiments are all kind of treated the same way so that you can go in here and six months later and you know exactly what command you ran to get this result and the exact dependencies, so there's kind of no question. So you should think about this as kind of a Git for, um, experiments. And if you go to the main side, uh, you can actually fire up some jobs with GPUs in them and then there are depending on how many people are using or there might be a queue, or might not. Um, so if you want some extra compute that's a good way to go as well. Question. How much memory can you typically get? How much memory can you typically get. So there's um- so one thing that if you want to, um, find out, uh, so it varies depending on what kind of- of resources are available. But if you type, uh, any sort of command like free you can actually see the exact environment that your job is running. Um, so I think, um, you can get like maybe let's say 10 or 16 gigs of- of memory. Okay. Yeah. Thank you. Any other questions about this? So there's, um, documentation here. 
And if there's any issues that you run into, file a GitHub request or email me or something, Piazza, um, won't have the highest of you know, you can post on Piazza too but, um, it'll probably faster if you, um, um, submit a GitHub issue because I'll go directly to the team that's working on this. Yeah. Does this only work with, uh, Python? Ah, does this work only with Python. You can run any command you want. So you can C++, Java. Um, it's- it's- [inaudible] Yeah you can run it on Julia. So the thing when you do a run, um, you specify that Docker image which is, basically contains your environment. So if you have, uh Julia probably has Docker images available. We have a default one that has, um, I don't- I'm not sure if it has Julia but it, um, but it has kind of the standard Python TensorFlow PyTorch libraries. Yeah. [inaudible] Yeah. So if you want to install some dependencies, um, there's two things you can do. You can build your own Docker image which takes a little bit of work but it's not too hard or you can, um, if you want to be lazy you can just do pip install here in the command. And for that you have to make sure you turn on network access so you can actually download from PyPy. [inaudible]. Yeah. Yeah you can have the requirements file. Yeah. Does this support pop up windows? For example if you want to [inaudible] . Does this support pop-up windows? No. This is more like a batch run. So the way there's, um, there's several ways you can do this, there's, uh, you can actually expose, um, like a port so you can connect if you're using TensorPort or something you can actually connect to your job on the fly, or you can- actually, there's a way to mount the contents of your [inaudible] running to your local disk and you can run whatever scripts you want. Maybe I'll hold off further questions, you come talk to me afterwards if you're- if you're interested and want to know more. Okay. 
Just wanted to make that clear that that thing is available, uh, go check it out, um. Okay. So back to, uh, the topic that we've been discussing. Uh, so on- last Wednesday we introduced logic, uh, and remember there's three ingredients of a logic, uh, there is the syntax which defines a set of valid formulas, for example in propositional logic, it's rain and wet as a particular formula. So syntax is- formulas are just, uh, symbols, uh. They have no intrinsic meaning to themselves. The way you define meaning is by specifying the semantics. So we talked about the interpretation function which takes a formula and a model which represents state of the world and returns either true, or false. And the way you should think more generally about a formula is that it carves out a set of models which are configurations of the world where the formula is true. So in this case, there are four possible models, uh, and rain and wet corresponds to this set of models, which are in red here where it's raining and wet. And finally, we talked about inference rules where, if you have a knowledge base which is a set of formulas, what new formulas can be, you know, derived? So one important thing to remember is that these formulas are not meant to kind of replace the knowledge base. These are things which are derived which could be very simple things as, you know, you might- you have a lot of knowledge about the world but you might want on any given context you might know that it's- it's raining which is. So F is much- generally much smaller than the knowledge base in terms of complexity. So for rain and wet, you can derive rain. Okay. So, uh, in general, we run inference. What does it mean to do logical inference? You have a knowledge base and then you have a set of inference rules that you keep on turning and turning and then you see if you produce W, oh, oh sorry, ah, F. 
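The interpretation function described above can be sketched concretely: formulas as nested tuples, models as dictionaries of truth values. This encoding is mine, not the course's.

```python
def interpret(formula, model):
    # a propositional symbol is looked up in the model; connectives recurse
    if isinstance(formula, str):
        return model[formula]
    op, *args = formula
    if op == 'and':
        return all(interpret(a, model) for a in args)
    if op == 'or':
        return any(interpret(a, model) for a in args)
    if op == 'not':
        return not interpret(args[0], model)
    if op == 'implies':
        return (not interpret(args[0], model)) or interpret(args[1], model)
    raise ValueError('unknown connective: %s' % op)

# "rain and wet" carves out 1 of the 4 possible models
f = ('and', 'rain', 'wet')
models = [{'rain': r, 'wet': w} for r in (False, True) for w in (False, True)]
satisfying = [m for m in models if interpret(f, m)]
```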
As an example, last time we saw modus ponens, which says: if you have Wet and Weekday, and Wet and Weekday implies Traffic, then you can derive Traffic. The things on top are called the premises, and the thing on the bottom is the conclusion. More generally, you have the modus ponens inference rule. So now the question is, what does this inference rule have to do with semantics? Because this is just symbol manipulation: you see some symbols, you produce some other symbols. To anchor this in semantics, we talked about soundness and completeness. Entailment is a relationship between a knowledge base and a formula, defined in terms of models: the models of F have to be a superset of the models of KB. That's the definition of entailment. Separately, we have the notion of derivation, which is symbol manipulation: you can derive F from KB given a set of inference rules. Soundness means that the formulas you derive are always entailed, and completeness means that you can derive all entailed formulas. Remember the water glass analogy, where the set of things in the glass are the true, entailed formulas: you want to stay within the glass, but you don't want to spill over. So far we've looked at propositional logic, which allows any legal combination of propositional symbols and connectives. We also looked at a subset of it, propositional logic with Horn clauses, where all the formulas look like this: an and of a bunch of propositional symbols implies some other propositional symbol. And there's a trade-off here. We saw that if you use modus ponens in full propositional logic, you're going to be sound, but you're not going to be complete: there are certain formulas which you won't be able to derive.
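The definition of entailment here can be checked directly by enumerating models: KB entails F exactly when there is no model that satisfies KB but falsifies F. A minimal sketch, with formulas encoded as plain Python predicates over a model dict (a hypothetical encoding for illustration):

```python
from itertools import product

def entails(kb, f, symbols):
    """KB entails f iff every model of KB is also a model of f."""
    for values in product([False, True], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if all(g(model) for g in kb) and not f(model):
            return False          # found a model of KB where f fails
    return True

kb = [lambda m: m["Rain"] and m["Wet"]]               # knowledge base: Rain and Wet
print(entails(kb, lambda m: m["Rain"], ["Rain", "Wet"]))            # True
print(entails(kb, lambda m: m["Snow"], ["Rain", "Wet", "Snow"]))    # False
```

This brute-force check is exponential in the number of symbols, which is part of why inference rules that manipulate symbols directly are interesting.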
So we could either restrict propositional logic to only Horn clauses, which we showed last time makes modus ponens complete, or we could say we really want propositional logic, the full expressive power, and instead use this thing called resolution, which we're going to talk about in this lecture. So this lecture has two parts: resolution for propositional logic, and then first-order logic. Yeah. [inaudible] If resolution is complete, does that mean anything we can represent in propositional logic with resolution, we can still represent with Horn clauses? So you're asking about the last statement, or... The last two together, are they effectively equivalent? Is there anything I could do with the last one that I can't do with the second to last one? It depends on what you mean by "do." These are different statements about expressive power and inference rules. Propositional logic subsumes propositional logic with only Horn clauses, so you could just say, I only care about propositional logic. But it turns out this one is going to be exponential time and this one is going to be linear time, so there's a trade-off. Yeah. May I quickly ask: what kind of level of completeness are we talking about? So, what is completeness? I'm using it in a very precise way to talk about the completeness of a logical system. A set of inference rules is complete if anything entailed by the semantics of propositional logic is derivable via that set of rules. The particular set of rules here is modus ponens for this case, and resolution for this case. So completeness is really a property of the inference rule with respect to a particular logic. Any other questions? Okay.
So let's dive into resolution now. Let's revisit Horn clauses and try to grow them a little bit. To do that, we're going to take this example Horn clause, A implies C, and write it with disjunction, for reasons that will become clear in a second. I'm going to write some of these identities on the board; I also wrote this last time. You can check with the two-by-two truth table that this is true. I want to say "definition," but it's not really a definition, because the definition is the interpretation function. Intuitively, P implies Q is the same as saying either P is false or Q is true: if P is false, then the hypothesis is false, so it's irrelevant what Q is, and if Q is true, then the whole statement is true. Okay. So what about A and B implies C? I can write it as not A or not B or C. This invokes another identity, which is that not of (P and Q) is the same as not P or not Q. And there's also another version: (P or Q) negated is the same as not P and not Q. So intuitively, what I'm doing is pushing the negation past the connective into the propositional symbols. When I push a negation past an and, it flips to an or, and when I push it past an or, it flips to an and. Okay? Hopefully you're comfortable with this, because when you're programming and writing if statements, you should know about this. Yeah? [inaudible] Yeah, good question: what is the order of operations? It's (A and B), in parentheses, implies C. So if you apply the second identity on the board here, not of (A and B) becomes not A or not B.
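The board identities can be verified exhaustively over the two-by-two truth table, as the lecture suggests. A quick sketch:

```python
from itertools import product

# Check the three board identities over all four truth assignments.
for p, q in product([False, True], repeat=2):
    p_implies_q = not (p and not q)                    # truth table of P -> Q
    assert p_implies_q == ((not p) or q)               # P -> Q  ==  not P or Q
    assert (not (p and q)) == ((not p) or (not q))     # De Morgan: not(P and Q)
    assert (not (p or q)) == ((not p) and (not q))     # De Morgan: not(P or Q)
print("all three identities hold")
```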
And then you apply the first identity, and that thing or C gives you the same thing over there. Okay. So now I'm going to introduce some terminology. First is a literal: either a propositional symbol or its negation. There's the notion of a clause, which is just a disjunction of literals; disjunction means or. So these things are all clauses. And finally, there is a particular type of clause called a Horn clause, which I introduced last time, but here I'm defining it in a different light: clauses that have at most one positive literal. In these clauses there is indeed only one positive literal, so these are Horn clauses. If you remember from last time, if you have Snow or Traffic appearing on the right-hand side, then you have two positive literals, which means it's not a Horn clause. So now I can write modus ponens the following way: A, and A implies C, which can be written as a disjunction, allows me to derive C. And here is another intuition: I'm effectively canceling out A and not A, and taking the resulting things and putting them on the bottom. All right. So now let's introduce the resolution rule. General clauses can have any number of literals. So this is not a Horn clause, but it is a clause. The resolution rule for this particular example looks like this: if you have Rain or Snow, and you have not Snow or Traffic, that allows you to derive Rain or Traffic. This is not a Horn clause, right? Because I have two positive literals. So how do we intuitively understand what's going on? You could say: it's either raining or snowing. If it's snowing, then snow implies traffic, so I get traffic. And suppose it's not snowing.
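The cancellation intuition for a single resolution step can be written down directly, encoding a clause as a frozenset of literal strings with "~" marking negation (my own shorthand, not the course's notation):

```python
def negate(lit):
    """Flip a literal: "Snow" <-> "~Snow"."""
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2, lit):
    """Cancel lit in c1 against its negation in c2; union the leftovers."""
    assert lit in c1 and negate(lit) in c2
    return (c1 - {lit}) | (c2 - {negate(lit)})

c1 = frozenset({"Rain", "Snow"})        # Rain or Snow
c2 = frozenset({"~Snow", "Traffic"})    # not Snow or Traffic
print(sorted(resolve(c1, c2, "Snow")))  # ['Rain', 'Traffic']
```

Modus ponens is the special case where one clause is the single literal A and the other is A implies C written as not A or C.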
I still have rain, so I can conclude it's either rain or traffic. In general, the resolution rule looks like this: you have one clause up here with some propositional symbol P, and a second clause with not P. What you can do is cancel out P and not P, take everything else, and hook it up into one big clause. Okay, so that's the rule. I've sketchily argued that it's a reasonable thing to do, but to really formally verify that, you have to check soundness. And how do you check soundness? You go back to the semantics of propositional logic, and you verify that it's consistent with what resolution is trying to do. In this rule, you have Rain or Snow; the set of models of Rain or Snow is everything that's not white here. The set of models of not Snow or Traffic is everything that's not white over here. When you intersect them, you get the dark red, and that represents where you think the state of the world is if you only have the premises. If you look at the models of the conclusion, Rain or Traffic, it's this green area. And you just have to check that what you derived is a superset of what you know. Again, this might be a little counterintuitive, but you should think about knowledge as restriction: knowledge means you have pinpointed the state of the world to a smaller set. The fewer colored boxes you have, the more knowledge you have. Okay? So this is sound. Completeness is another, much harder thing to check. Yeah, question? You mentioned that we want a superset at the end, but not a subset; the two topmost squares allow for snow, and they're not there. Is that because we've eliminated snow? So, why are these there?
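The soundness check described here, that every model of both premises is also a model of the conclusion, can be done mechanically by enumerating the eight assignments. The "~"-prefixed clause encoding is my own shorthand:

```python
from itertools import product

def lit_true(lit, model):
    """A literal holds if its symbol has the right truth value."""
    return not model[lit[1:]] if lit.startswith("~") else model[lit]

def clause_true(clause, model):
    """A clause is a disjunction: true if any literal holds."""
    return any(lit_true(l, model) for l in clause)

premises   = [{"Rain", "Snow"}, {"~Snow", "Traffic"}]
conclusion = {"Rain", "Traffic"}
symbols    = ["Rain", "Snow", "Traffic"]

# Every model satisfying both premises must satisfy the conclusion.
for values in product([False, True], repeat=len(symbols)):
    model = dict(zip(symbols, values))
    if all(clause_true(c, model) for c in premises):
        assert clause_true(conclusion, model)
print("this resolution step is sound")
```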
So this square is only true in Rain or Snow, and this one is only true in not Snow or Traffic. But remember, the way to think about a knowledge base is that its semantics is the intersection of the models of all the formulas. So when I intersect the models of everything up here, I'm only left with the dark red. There's just one square in the final green set that's not part of the intersection? Well, there are two, these two. Are we allowing those because our conclusion is Rain or Traffic? I'm just wondering, when you mention superset versus subset, why are the other two squares in the first row not included? Let's see. Why are the ones up here not included? Because they're not part of the intersection. Let me clarify: if you only look at the premises up here, the set of models is this square, this square, this square, and this square. Then you look at the conclusion and its models, independently of the premises, and you get these six squares. I see, so those six squares are not related to the two we had beforehand? Yeah, the green is just derived from the green here. All right. Okay. Good. So, it turns out that resolution is also complete, and this is kind of the big result from the '60s that demonstrated that a single rule can rule all of propositional logic. But you might say, wait a minute: there are clearly things that this resolution rule doesn't work on, because it only works on clauses. What if you have formulas that aren't clauses at all?
So there's a trick we're going to use: we're going to reduce all formulas to clauses. Another important definition here is CNF, which stands for conjunctive normal form. A CNF formula is just a conjunction of clauses. Here's an example of a CNF formula: here's a clause, here's a clause, and you conjoin them. So just to recap: a CNF formula is a conjunction of clauses, each clause is a disjunction of literals, and each literal is either a propositional symbol or its negation. So or is on the inside, and and is on the outside. One way to remember that: a knowledge base is a set of formulas, but it really represents the conjunction of all those formulas, because you know all the facts in your knowledge base. So you can think of a CNF formula as just a knowledge base where each formula is a clause. Okay. So we can take any formula in propositional logic and convert it into an equivalent CNF formula, which I'll show on the next slide. Once we've done that, we can use resolution, and life is good. The conversion is a six-step procedure; it's a little bit grungy, but I want to highlight the general intuition. We have this formula, which is not a CNF formula, but we're going to make it one. The first thing we want to do is remove all the symbols that aren't ands, ors, or negations, because those definitely don't show up in a clause or a CNF formula. We use the first identity on the board to convert implication into a not and an or; you do that for the inner guy here, and now you only have the symbols you're supposed to have.
The second thing: remember, the order of these connectives matters for CNF. Negation is on the very inside; negation is only allowed to touch a propositional symbol. Then you have or, disjunction, and then you have and. So we want to change the order so that's true. First we push the negation all the way inside, using De Morgan's laws, the second and third identities on the board, so that now all the negation is on the inside. We can also remove double negation; it's very easy to check that that's valid. And finally, this is not a CNF formula, though it might look like one; if you turn your head upside down, it actually looks like a CNF formula. The problem is that and is on the inside, but it really should be on the outside. To fix that, you distribute or over and, which lets you say (Summer or Bizarre) and (not Snow or Bizarre). Now this is a CNF formula, and you're done. This is the general set of rules, just to recap: you eliminate bidirectional implication and implication to get the right symbol inventory, then you move negation all the way to the inside, eliminating any spurious negation you don't need, and then you move any or from outside an and to inside it. So long story short: take any propositional logic formula, and you can make it a CNF formula. So without loss of generality, we're just going to assume we have CNF formulas. Okay? Another place you might have seen CNF formulas come up is in theoretical computer science, when you're talking about 3SAT.
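The conversion steps for the and/or/not fragment can be sketched as three recursive passes over the same nested-tuple encoding used earlier (my own encoding; the biconditional-elimination step is omitted since the example doesn't need it):

```python
def elim_implies(f):
    """Step 1: rewrite P -> Q as (not P) or Q, recursively."""
    if isinstance(f, str):
        return f
    if f[0] == "->":
        return ("or", ("not", elim_implies(f[1])), elim_implies(f[2]))
    return (f[0],) + tuple(elim_implies(g) for g in f[1:])

def push_not(f, negated=False):
    """Steps 2-3: push negation inward (De Morgan), drop double negation."""
    if isinstance(f, str):
        return ("not", f) if negated else f
    if f[0] == "not":
        return push_not(f[1], not negated)       # flipping twice cancels
    op = {"and": "or", "or": "and"}[f[0]] if negated else f[0]
    return (op, push_not(f[1], negated), push_not(f[2], negated))

def distribute(f):
    """Step 4: distribute or over and, so and ends up outermost."""
    if isinstance(f, str) or f[0] == "not":
        return f
    a, b = distribute(f[1]), distribute(f[2])
    if f[0] == "or" and isinstance(a, tuple) and a[0] == "and":
        return ("and", distribute(("or", a[1], b)), distribute(("or", a[2], b)))
    if f[0] == "or" and isinstance(b, tuple) and b[0] == "and":
        return ("and", distribute(("or", a, b[1])), distribute(("or", a, b[2])))
    return (f[0], a, b)

# The lecture's example: (Summer -> Snow) -> Bizarre
f = ("->", ("->", "Summer", "Snow"), "Bizarre")
cnf = distribute(push_not(elim_implies(f)))
print(cnf)
# ('and', ('or', 'Summer', 'Bizarre'), ('or', ('not', 'Snow'), 'Bizarre'))
```

The result matches the (Summer or Bizarre) and (not Snow or Bizarre) formula derived on the slide.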
3SAT is a problem where you're given a CNF formula in which every clause has three literals, and you're trying to determine whether it's satisfiable; we know that to be a very hard problem. Okay. So now let's talk about the resolution algorithm. Remember, there is a relationship between entailment and contradiction: the knowledge base entails F is the same as saying the knowledge base is incompatible with not F. F really must hold; it's impossible that not F holds. So suppose we want to prove that F is derivable from the knowledge base. We're going to use a proof-by-contradiction strategy: insert not F into the knowledge base and see if we can derive a contradiction. You add not F to the knowledge base, convert all the formulas into CNF, keep reapplying the resolution rule, and return entailment if you can derive false. Okay? So here's an example of what this looks like. Here's the knowledge base, and here's a particular formula, and now we want to know whether KB entails F or not. You add not F to the knowledge base, which is not C, and convert everything into CNF; that only affects the first formula here. Then I repeatedly apply the resolution rule: I can take this clause, and resolution allows me to cancel not A with A, so I get B or C; then I take B and not B, cancel them out, and get C; and when you see C and not C, that's clearly a contradiction, and you can derive false. Which means the knowledge base entails F, in this particular example. Okay.
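The full proof-by-contradiction loop can be sketched by saturating the clause set with resolvents until either the empty clause (false) appears or nothing new can be derived. Again clauses are frozensets with "~" for negation, and the knowledge base below is my reconstruction of the slide's example; this is an illustrative sketch, not an efficient implementation:

```python
def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolvents(c1, c2):
    """All clauses obtainable by canceling one complementary pair."""
    for lit in c1:
        if negate(lit) in c2:
            yield frozenset((c1 - {lit}) | (c2 - {negate(lit)}))

def resolution_entails(kb_clauses, query_lit):
    """Add the negated query; KB entails query iff false is derivable."""
    clauses = set(kb_clauses) | {frozenset({negate(query_lit)})}
    while True:
        new = {r for a in clauses for b in clauses for r in resolvents(a, b)}
        if frozenset() in new:
            return True        # derived the empty clause: contradiction
        if new <= clauses:
            return False       # saturated without a contradiction
        clauses |= new

kb = {frozenset({"~A", "B", "C"}),     # A -> (B or C), in clause form
      frozenset({"A"}),
      frozenset({"~B"})}
print(resolution_entails(kb, "C"))     # True: cancel A, then B, reach false
print(resolution_entails(kb, "B"))     # False: saturates, no contradiction
```

The loop terminates because only finitely many clauses can be built from a finite set of symbols, which is also where the worst-case exponential blow-up mentioned next comes from.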
This also maybe gives you a little intuition for the mystery of defining the goal in Horn clauses as deriving something that implies false: you can add the thing you're trying to prove and use modus ponens to see if you can derive false, and if you do derive false, then it's a contradiction. All right. So as I alluded to before, there is a time-complexity difference between modus ponens and resolution. With modus ponens, each rule application adds only a clause with one propositional symbol. So imagine you have n propositional symbols: you can really only apply modus ponens n times, which is a linear number of applications. The thing with resolution is that each rule application can add a clause with many propositional symbols, and in the worst case any subset of the propositional symbols can get added, which results in an exponential-time algorithm. This should not be surprising, because we know that 3SAT is NP-complete, so unless there's some magic here, there's no way to circumvent that. Yeah. [inaudible] preferred? So the question is, why is resolution preferred? You could just convert everything to CNF and do backtracking search or whatever on the CNF. But resolution, it turns out, has a generalization to first-order logic, which model checking doesn't. Remember, there are two routes you can go: you can basically reduce things to CSPs and solve them, or you can try to use inference rules. As far as I know, people don't really use resolution in propositional logic in practice, but in first-order logic you kind of have no choice. So I'm thinking that, when you see modus ponens inference rules, everything still comes down to and relationships. Yeah.
Like how NAND and NOR are the universal gates, and so I'm thinking that resolution is like a more general production... Of the two, would you prefer one to the other? So the question is whether resolution looks kind of like NAND. There's quite a bit of difference there; maybe we can talk about it offline. Okay, so to summarize, there are two routes here. You can say, I'm going to use propositional logic with Horn clauses and modus ponens: this is fast, but less expressive. Or I can embrace the full complexity of propositional logic and use resolution: this is exponential time, slow, but more expressive. Yeah. [inaudible] Right, what do I mean by expressive? I mean the latter: there are simply some things you can't write down with Horn clauses. For example, you can't write down Rain or Snow at all; any sort of branching or disjunction you can't do with Horn clauses. In some applications, Horn clauses actually turn out to be quite enough. These kinds of Horn clauses show up in programming languages, where you have some premises and you're trying to derive some other quantity, so in program analysis this is actually quite useful and efficient. Okay, so let's move to first-order logic. What's wrong with propositional logic? I mean, it's already exponential time, so it had better be pretty good. Remember, the point of logic, in general, from an AI perspective, is to be able to represent and reason with knowledge about the world. There are a lot of things we want to represent that might be awkward in propositional logic. Here are some examples. "Alice and Bob both know arithmetic." How would you do this in propositional logic? Well, propositional logic is about propositions.
So this has two propositions, statements which are either true or false: AliceKnowsArithmetic and BobKnowsArithmetic. Okay, fine. So what about "all students know arithmetic"? How would you represent that? Well, you'd probably do something like this: if Alice is a student then AliceKnowsArithmetic, if Bob is a student then BobKnowsArithmetic, and so on, because all propositional logic can do is reason about statements. So what about this: Goldbach's conjecture. Every even integer greater than two is the sum of two primes. Good luck with that. You might have to write down all the integers, and there are a lot of them. So propositional logic is clunky at best and not expressive at worst. What's missing? When we have knowledge about the world, it's often more natural to think of there being objects and predicates on those objects, rather than just opaque propositions. AliceKnowsArithmetic actually has more internal structure: it's not just a single proposition that has nothing to do with anything else; it has the notions of Alice, knows, and arithmetic in it. And finally, once you can decompose a proposition into parts, you can do fancy things with them: you can use quantifiers and variables. For example, "all" is a quantifier that applies to each person, and we want to do that inference without enumerating over all the people, or all of the integers. Okay, so I'm going to talk about first-order logic, going through our plan of first the syntax, then the semantics, and then inference rules. I want to warm up with some examples. I'm not going to do as rigorous a treatment of first-order logic as propositional logic, because it gets more complicated; I just want to give you an idea of how it works. So "Alice and Bob both know arithmetic" is going to be represented as Knows(alice, arithmetic) and Knows(bob, arithmetic). Okay?
So there are some familiar symbols here like and, and now the propositional symbols have been replaced with these more structured objects. And "all students know arithmetic" gets mapped to this, where we now have a quantifier: for all x, Student(x) implies Knows(x, arithmetic). Okay, so a bit more formally, there are a bunch of definitions I'm going to talk about. In first-order logic, there are two types of things: terms and formulas. In propositional logic, there are only formulas. Terms are expressions that refer to objects. A term can be a constant symbol, a variable, or a function applied to other terms. For example, arithmetic is just a constant; think of it as a name. There are variables like x, which I'll explain later, and there are functions of terms: 3 plus x would be represented as Sum(3, x). Remember, these are just symbols. And formulas refer to truth values. There are atomic formulas, or atoms: an atomic formula is a predicate applied to terms. So x is a term, arithmetic is a term, Knows is a predicate, and therefore Knows(x, arithmetic) is an atomic formula. Atoms are supposed to be indivisible, but here there's substructure, so maybe you can think of these as subatomic particles, if that's useful. There are connectives, as before. So what we're doing right now is taking these atomic formulas, which behave like propositional symbols; given that atoms are generalizations of propositional symbols, we can string them together using any number of connectives, as we've done in propositional logic. And then finally, we have quantifiers applied to formulas.
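The term/formula grammar can be mirrored with the same nested-tuple shorthand used earlier (a hypothetical encoding for illustration): terms are strings or function applications, atomic formulas are predicate applications, and connectives and quantifiers wrap formulas.

```python
# Terms: constants/variables are strings; Sum(3, x) is a function application.
term = ("Sum", "3", "x")

# Atomic formula: a predicate applied to terms.
atom = ("Knows", "x", "arithmetic")

# A quantified formula: forall x, Student(x) -> Knows(x, arithmetic).
formula = ("forall", "x", ("->", ("Student", "x"), atom))

def atoms(f):
    """Collect the atomic formulas inside a formula (sketch)."""
    if f[0] in ("forall", "exists"):
        return atoms(f[2])                 # skip the bound variable
    if f[0] in ("and", "or", "->"):
        return atoms(f[1]) | atoms(f[2])
    if f[0] == "not":
        return atoms(f[1])
    return {f}                             # a predicate application

print(sorted(atoms(formula)))
# [('Knows', 'x', 'arithmetic'), ('Student', 'x')]
```

This makes the "subatomic" point concrete: the atoms behave like propositional symbols to the connectives, but they have internal structure of their own.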
That means if you have a formula with a variable in it, we can stick a quantifier over that variable to specify how the variable is meant to be interpreted. Okay, so there are connectives and quantifiers. All right, let's talk about quantifiers. Quantifiers are in some sense the heart of why first-order logic is useful, and there are two types: universal quantifiers and existential quantifiers. Universal quantifiers you should think of as glorified conjunction: when I say for all x, P(x), that's really like saying P(A) and P(B) and P(C), for all the constant symbols. And existential quantifiers are a glorified disjunction: when I say there exists x such that P(x) holds, that's like saying P(A) or P(B), and so on. I'm cheating a little bit here, because I'm still talking about the syntax of first-order logic, but I can't resist giving you a little intuition about what the syntax means. I'm not formally defining the interpretation function; I'm just trying to give you an idea of what the symbols correspond to. So here are some properties. If I push a negation through a universal quantifier, the negation goes on the inside, and the for-all becomes an exists. Does this sound familiar? What's the name for this kind of thing? Yeah, it's just De Morgan's law, applied to first-order logic instead of propositional logic. And it's really important to remember that the order of quantifiers matters: for all, exists is very different from exists, for all. Okay. One more comment about quantifiers. It will be useful to be able to convert natural language sentences into first-order logic, and on the assignment you're going to do a bunch of this. There's an important distinction I want to make here.
So quantifiers in natural language are words like "every," or "some," or "a." How do these get represented in formal logic? "Every student knows arithmetic." "Every" generally refers to for all, so you might write something like this, but this is wrong. So what's wrong about this? [inaudible] Sorry, say again? Not every... Yeah, the problem is, what does this say? This one says everyone is a student: for all x, x is a student, and for all x, x knows arithmetic. So it's basically saying everyone's a student and everyone knows arithmetic, which is different. What it really should be is an implication. For anyone who's not a student, I don't care, in terms of assessing the validity of this formula; only if someone is a student do I check whether that student knows arithmetic. Okay. So what about existential quantification? "Some student knows arithmetic" is Student(x) and Knows(x, arithmetic). So those are different connectives. And a general rule of thumb is that whenever you have universal quantification, it should be implication, and whenever you have existential quantification, it should be an and. Of course there are exceptions, but this is a general rule. Okay. So let me give you a few examples just to get you used to thinking about quantifiers. Imagine you want to say "there is some course that every student has taken." How does that go? There is some course, so there should be exists y, where y is a course, that every student has taken. "Every" is a for all x, and here I want Student(x) implies Takes(x, y). Remember, exists usually goes with and, and for all goes with implies. Okay. What about Goldbach's conjecture: every even integer greater than 2 is the sum of two primes? This is: for every even integer greater than 2, it implies... what?
It is a sum of two primes. So notice that there are maybe no explicit hints that you need to use an existential, but the fact that these two primes are underspecified means there should be an exists: there exist y and z such that both of them are prime and the sum of y and z is x. And finally, here's a statement: if a student takes a course and the course covers a concept, then the student knows that concept. Whether that's true or not is a different matter, but it is a valid formula, and it can be represented as follows. One other piece of advice: if you see the word "if," that generally suggests a bunch of universal quantifications, because "if" is kind of like stating a general rule, and universal quantification says that in general something happens. So this is for all x, all y, all z: if you have a student who takes some course, and that course covers some concept z, then that student knows that concept. I guess technically there should also be a Concept(z) in there, but let's not get complicated. Okay. Any questions about first-order logic, what the syntax is, and any of these intuitions? Yeah. [inaudible] why you don't use the equal sign instead? So the question is, why don't we just use the equal sign? I'm being a little bit cautious and following the strict syntax where you have functions, because it shows you the structure of the logical expressions more clearly. In certain cases you can use syntactic sugar and write equals if you want. But remember, the point of logic is not for you to write these things down manually and reason with them, but to have a very primitively built system of formulas with general rules like resolution that can operate on them. Okay.
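Written out, the translations discussed above look like the following (the predicate names are my paraphrase of the slides):

```latex
% every student knows arithmetic: universal goes with implication
\forall x\,\big(\mathrm{Student}(x) \to \mathrm{Knows}(x,\mathrm{arithmetic})\big)

% some student knows arithmetic: existential goes with conjunction
\exists x\,\big(\mathrm{Student}(x) \land \mathrm{Knows}(x,\mathrm{arithmetic})\big)

% there is some course that every student has taken
\exists y\,\big(\mathrm{Course}(y) \land \forall x\,(\mathrm{Student}(x) \to \mathrm{Takes}(x,y))\big)

% Goldbach: every even integer greater than 2 is the sum of two primes
\forall x\,\big((\mathrm{EvenInt}(x) \land \mathrm{Larger}(x,2)) \to
  \exists y\,\exists z\,(\mathrm{Prime}(y) \land \mathrm{Prime}(z) \land \mathrm{Sum}(y,z,x))\big)
```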
So let's talk about the semantics of first-order logic. In propositional logic, a model was something that maps propositional symbols to truth values; in other words, a complete assignment of truth values to propositional symbols. So what is this in first-order logic? We're still going to maintain the intuition that a model is supposed to represent a possible situation in the world, and I'm going to give you some graphical intuition. Imagine you only have unary and binary predicates, that is, predicates that take one or two arguments. Then we can think of a model as being represented as a graph. Imagine you have three nodes; these represent the objects in the world. Objects are kind of first-class citizens in first-order logic. The nodes are labeled with constant symbols: you have Alice, you have Bob and Robert, and you have arithmetic here. And then the directed edges represent binary predicates, and these are labeled with predicate symbols. So here I have a Knows predicate that applies to (o1, o3), another Knows predicate that applies to (o2, o3), and a unary predicate here that applies only to o1. Okay. So more formally, a model in first-order logic is a mapping that takes every constant symbol to an object (Alice goes to o1, Bob goes to o2, arithmetic goes to o3) and maps predicate symbols to sets of tuples of objects: Knows is a set of pairs such that the first element of the pair knows the second element of the pair. I'm skipping function symbols just for simplicity, but you would define them analogously. Okay. So that is our model. It's a little bit more complicated than propositional logic, because you have to define something for both the constant symbols and the predicate symbols. So now, to make our lives a little easier, I'm going to introduce a restriction on models, motivated by the following example.
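The graph picture can be written down as plain data, with the model mapping constants to objects and predicates to sets of object tuples, exactly as in the formal definition (the dictionary layout here is my own sketch):

```python
# A first-order model: constants -> objects, predicates -> tuples of objects.
model = {
    "constants": {"alice": "o1", "bob": "o2", "arithmetic": "o3"},
    "Knows":   {("o1", "o3"), ("o2", "o3")},   # edges in the graph
    "Student": {("o1",)},                      # a unary predicate on o1
}

def holds(pred, args, model):
    """Is the atomic formula pred(args...) true in this model?"""
    objs = tuple(model["constants"][a] for a in args)
    return objs in model[pred]

print(holds("Knows", ["alice", "arithmetic"], model))   # True
print(holds("Student", ["bob"], model))                 # False
```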
So if I say John and Bob are students, then in your head you might imagine, well, there's two people, John and Bob, and they're both students. But there could technically be only one person whose name is both John and Bob, or someone who's anonymous and doesn't have a name. And there's two simplifications that'll rule out W2 and W3. So the unique names assumption says that each object has at most one constant symbol. And domain closure says that each object has at least one constant symbol. So the point of this restriction is that constant symbols and objects are in a one-to-one relationship. And once you do that, then we can do something called propositionalization. And in this case, first-order logic is actually just syntactic sugar for propositional logic. So if you have this knowledge base in first-order logic -- Student(Alice), Student(Bob), all students are people, and there's some creative student -- then you can actually convert it very simply into propositional logic by kind of unrolling; it's like unrolling your loops, in some sense. So we just have Student(Alice) implies Person(Alice), and Student(Bob) implies Person(Bob). And because there is a finite set of constant symbols, it's not going to be an infinite set of formulas. There might be a lot of formulas, but it's not going to be an infinite set. Okay. So the point of doing this is that now you can use any inference algorithm for propositional logic on first-order logic. Okay. So if you're willing to make this restriction, unique names and domain closure, that means you have direct access to all the objects in the world via your constant symbols, in which case you just have propositional logic. Okay. So why might you want to do this? So first-order logic as syntactic sugar still might be convenient.
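The "unrolling your loops" picture can be made literal with a small sketch (my own illustrative code; formulas are just strings here):

```python
# Under unique names + domain closure, a universally quantified rule is just
# a template instantiated once per tuple of constant symbols.
from itertools import product

constants = ['alice', 'bob']

def propositionalize(template, n_vars):
    """Instantiate a rule template over all tuples of constant symbols."""
    return [template.format(*combo)
            for combo in product(constants, repeat=n_vars)]

# forall x: Student(x) -> Person(x) unrolls into one formula per constant:
formulas = propositionalize('Student({0}) -> Person({0})', 1)
print(formulas)
# ['Student(alice) -> Person(alice)', 'Student(bob) -> Person(bob)']
```

With c constants and a rule over k variables, this produces c^k instances: finite, though possibly large.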
You might still want to write down your expressions in first-order logic and have the benefits of propositional logic, where the inference procedures are in some sense much more developed. But later we'll see that there are some cases where you won't be able to do this. Okay. So that's all I'm going to say about the semantics of first-order logic. So now let's talk about inference rules. Okay. So I'm going to start with first-order logic restricted to Horn clauses, and we're going to use a generalization of modus ponens. And then we're going to move to full first-order logic and talk about the generalization of resolution. Okay. So let's begin by defining definite clauses for first-order logic. So remember, a definite clause in propositional logic was a conjunction of propositional symbols implying some other propositional symbol. And now the propositional symbols become these atoms, atomic formulas. And furthermore, we might have variables, so we're going to have universal quantifiers on the outside. So intuitively, you should think about this as a single template that, if you were to propositionalize, would become a whole set of definite formulas in propositional logic. Another way to think about this is that this single statement is a very compact way of writing down what would be very cumbersome in propositional logic, because you would have to instantiate all of the possible symbols. Okay. So here's a formal definition. A definite clause has the following form: you start with a set of variables, which are all universally quantified, and then you have atomic formulas, which are all conjoined, implying another atomic formula. And these atomic formulas can contain any of these variables. Okay. So now let's do modus ponens. So here is a straightforward generalization of modus ponens.
You have some atomic formulas a_1 through a_k that you pick up, and then you have a_1 through a_k implies b, and then you use that to derive b. Okay. So this is a first attempt, and you might catch on to the fact that this actually won't work. So why doesn't it work? So imagine you have P(Alice), and then you have for all x, P(x) implies Q(x). So the problem is that you can't actually infer Q(Alice) at all, because P(x) here and P(Alice) just don't match. This is supposed to be a1, this is supposed to be a1, and P(x) and P(Alice) are not the same a1. So this is a kind of important lesson, because remember, these inference rules don't know anything. They have no kind of intrinsic semantics. It's just pattern matching, right? So if you don't write your patterns right, then it's just not going to work. But we can fix this. And the solution involves two ideas: substitution and unification. So substitution is taking a formula and applying find-and-replace to generate another formula. So if I want to replace x with Alice, applied to P(x), I get P(Alice). I can also do two find-and-replaces at once: x with Alice and y with z. And so in general, a substitution Theta is some mapping from variables to terms, and the substitution Theta applied to f returns the result of just performing that substitution on f. So it generates another formula with these variables replaced with these terms. So a pretty simple idea. Okay. Unification takes two formulas and tries to make them the same. And to make them the same you have to do some substitution, so it returns what substitution it needed to do that. Okay. So here's an example: Knows(Alice, arithmetic) and Knows(x, arithmetic). These expressions are not syntactically identical. But if I replace x with Alice, then they are identical. So that's what unification does. So what about this example? How do I make these two identical? I replace x with Alice and y with z.
And what about this one? I can't do anything, because remember, substitution can only replace variables with other things. It can't replace constant symbols. So it can't replace Alice with Bob, so that just fails. And then things can get a little bit more complicated when you have function symbols. So here, to make these the same, I need to replace x with Alice and then y with F(x); but x has already been replaced with Alice, so I need to make this y go to F(Alice). Okay. So to summarize: unification takes two formulas f and g and returns a substitution, which maps variables to terms, and this is the most general unifier. Which means that if I unify x and x, I could also replace x with Alice and that'd be fine, but that's not the most general thing. I want to substitute as little as possible to make two things equal. So Unify returns a substitution Theta such that -- and here's the important property -- if I apply that substitution to f, I get identically the same expression as if I apply Theta to g. And if I can't do it, then I just fail. Okay. So now -- yeah, question? Can we say that F of x, like what should we say to F of x, is it a variable or is it a formula? So the question is: F of x, is this a variable or a formula? So in F(x), F is a function symbol. It takes a term and returns a term. So technically, F(x) is a term, which represents an object in the world. And you can check that Knows is a predicate, so it needs to take terms, and F(x) is a term. Okay. So now, with substitution and unification, we can revise our modus ponens to make it work. So I'm going to have a1 prime through ak prime, which are syntactically distinct from a1 through ak. And what we're going to do is try to unify the primes with the non-primes into some substitution, and once I have the substitution, I can apply it to b and derive b prime, and that's what I'm going to write down.
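Substitution and unification can be sketched in a few lines of Python. This is my own illustrative implementation, not course code, and it adopts a toy convention: variables are lowercase strings, constants are capitalized strings, and compound terms and atoms are tuples. It also omits the occurs check that a full unifier needs.

```python
def is_var(t):
    # Convention (an assumption of this sketch): lowercase strings are variables.
    return isinstance(t, str) and t[0].islower()

def substitute(theta, t):
    """Apply substitution theta (variable -> term) to a term or formula t."""
    if isinstance(t, tuple):
        return tuple(substitute(theta, p) for p in t)
    return substitute(theta, theta[t]) if t in theta else t

def unify(f, g, theta=None):
    """Return the most general unifier of f and g as a dict, or None on failure."""
    theta = {} if theta is None else theta
    f, g = substitute(theta, f), substitute(theta, g)
    if f == g:
        return theta
    if is_var(f):
        return {**theta, f: g}
    if is_var(g):
        return {**theta, g: f}
    if isinstance(f, tuple) and isinstance(g, tuple) and len(f) == len(g):
        for a, b in zip(f, g):
            theta = unify(a, b, theta)
            if theta is None:
                return None
        return theta
    return None  # e.g. two different constants: Alice vs. Bob

print(unify(('Knows', 'Alice', 'Arithmetic'), ('Knows', 'x', 'Arithmetic')))
# {'x': 'Alice'}
print(unify(('Knows', 'Alice', 'Bob'), ('Knows', 'Bob', 'x')))
# None
print(unify(('Knows', 'x', ('F', 'x')), ('Knows', 'Alice', 'y')))
# {'x': 'Alice', 'y': ('F', 'Alice')}
```

Note how the third example reproduces the chaining in the lecture: once x is bound to Alice, y is bound to F(Alice), not F(x).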
Okay. So let me go through this example now. So suppose Alice has taken 221, and 221 covers MDPs, and I have this general rule that says if a student takes a course and the course covers a topic, then that student knows that topic. So I need to unify this Takes(Alice, 221) and Covers(221, mdp) with the abstract version. And when I unify, I get the substitution: x needs to be replaced with Alice, y with 221, and z with mdp. And then I take this Theta and apply the substitution to Knows(x, z), and I get Knows(Alice, mdp). So intuitively, you can think of a1 prime through ak prime as concrete knowledge you have about the world, while this is a general rule. So what the substitution does is specify how the general variables here are to be grounded in the concrete things you're dealing with. And then the final substitution grounds this part out into the concrete symbols, in this case Alice, 221, mdp. Okay. So what's the complexity of this? So each application of modus ponens produces one atomic formula -- just one, not multiple ones. That's the good news. And if you don't have any function symbols, the number of atomic formulas is at most the number of constant symbols raised to the maximum predicate arity. So in this case, if you have, say, 100 possible values of x, 100 possible values of y, and 100 possible values of z, the number of possible formulas you might produce is 100 to the 3rd. So you could imagine this being a very, very large number -- it's exponential in the arity -- but if the arity is, say, 2, then this is not too bad; it's not exponential in the number of symbols. So that's the good news. The bad news, from a complexity point of view, is that if there are function symbols, then actually that's infinite.
I guess it's not just exponential time -- it's infinite, because the number of possible formulas that you could produce is unbounded. And when might you have something like this? Well, remember, one of the function symbols could be Sum. So you could have Sum(1), Sum(Sum(1)), and so on. So you can essentially encode arithmetic using first-order logic. Okay. So here's what we know. Modus ponens is complete for first-order logic with only Horn clauses. Right. So what does completeness mean? It means that anything that's actually entailed has a derivation -- a way of applying modus ponens to get there. But the bad news is that it's semi-decidable. So first-order logic, even when you restrict it to Horn clauses, is semi-decidable. What does this mean? If f is entailed, forward inference using a complete set of inference rules -- in this case modus ponens -- will eventually derive f in finite time, because it's complete, so eventually you'll get it. But if it's not entailed, we don't know. We don't know when to stop, because it could just keep going on and on, and actually no algorithm can show this in finite time. So there's a computability result that says it's not just exponential time: there's actually no algorithm at all. If you're familiar with the halting problem, this is very related to that. Okay. So that's a bummer. But it's not the end of the world, because you can still just run inference and get a partial result. So you might succeed, in which case you know for sure -- because the rules are sound -- that f is entailed. Or after a while you just run out of CPU time and you stop, and then you say: I don't know. Okay. So now let's talk about resolution.
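Putting substitution and unification together, the Takes/Covers/Knows derivation described above might be sketched like this (again my own toy representation: variables lowercase, constants capitalized, so the topic is written MDP rather than mdp):

```python
# Generalized modus ponens as a sketch: unify the concrete facts with the
# premises of a rule, then apply the resulting substitution to the conclusion.

def is_var(t):
    return isinstance(t, str) and t[0].islower()

def substitute(theta, t):
    if isinstance(t, tuple):
        return tuple(substitute(theta, p) for p in t)
    return substitute(theta, theta[t]) if t in theta else t

def unify(f, g, theta):
    f, g = substitute(theta, f), substitute(theta, g)
    if f == g:
        return theta
    if is_var(f):
        return {**theta, f: g}
    if is_var(g):
        return {**theta, g: f}
    if isinstance(f, tuple) and isinstance(g, tuple) and len(f) == len(g):
        for a, b in zip(f, g):
            theta = unify(a, b, theta)
            if theta is None:
                return None
        return theta
    return None

def modus_ponens(facts, premises, conclusion):
    """Unify each fact with the matching premise, then ground the conclusion."""
    theta = {}
    for fact, prem in zip(facts, premises):
        theta = unify(fact, prem, theta)
        if theta is None:
            return None
    return substitute(theta, conclusion)

facts = [('Takes', 'Alice', '221'), ('Covers', '221', 'MDP')]
premises = [('Takes', 'x', 'y'), ('Covers', 'y', 'z')]
print(modus_ponens(facts, premises, ('Knows', 'x', 'z')))
# ('Knows', 'Alice', 'MDP')
```

The substitution found is exactly the one from the lecture (x to Alice, y to 221, z to the topic), and applying it to the rule's conclusion grounds it out.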
So we've finished talking about first-order logic restricted to Horn clauses, and we saw that modus ponens is complete. There's a small wrinkle that you can't actually compute everything you hope for, but that's life. And now we're going to go to resolution. So remember that full first-order logic includes a lot more clauses. Here's an example: all students know something. And notice the exists here -- remember, existential quantification is like glorified disjunction. So this is like our example of snow or traffic. So what do we do with this? We're going to follow the same strategy as what we did for propositional logic: we're going to convert everything to CNF, and then we're going to repeatedly apply the resolution rule. And the main thing that's going to be different is that now we have to handle variables and quantifiers and use substitution and unification, but the structure is going to be the same. So the conversion to CNF is a bit messy and gross and slightly non-intuitive, but I just want to present it so you know what it looks like. So here is an example of a non-CNF formula. What does this say? Just for practice: anyone who loves all animals is loved by someone. Okay? And what we want to produce as the final output is this CNF formula, which -- again, CNF means a conjunction of disjunctions, and each disjunct is an atomic formula or an atomic formula that's been negated. And here we see some functions that have emerged, called Skolem functions, which I'll explain later. And that's basically it. So we have variables, and we're going to have to handle them somehow. And the way we do this is -- remember, no quantifiers show up in the final result; by default, everything is going to be universally quantified.
Which means that the existential quantifiers have to go away, and the existential quantifiers get converted into these functions. Okay. All right. So, part one. So there's again a six-to-eight-step procedure. We start with this input. What is the first thing we want to do? We want to remove all the symbols that shouldn't show up -- get our symbol inventory correct. So we eliminate implication. This is the same as before: here is this thing implies this thing, and we replace that with not the first thing, or the second thing. So now the expressions are more gross, but it's really the same identity that we were invoking before, and we do the same for the inner expression. We push the negation inwards, and when it touches the atomic formulas it eliminates double negation. So this is all old news. And something new here is that we're going to standardize the variables. This step is technically not necessary. By standardizing variables, I just mean that this y and this y are actually different -- it's like having two local variables in two different functions; they have nothing to do with each other. Because we're going to remove quantification later, I'm just going to make them separate, so this y gets replaced with a z. Okay. So now I have this. Next, I'm going to replace existentially quantified variables with Skolem functions. Okay, so this requires a little bit of explanation. So I have exists z, Loves(z, x). And this existential is on the inside of this universal quantifier. So in a way, z depends on x: for every x I might have a different z. To capture this dependency, I can't just drop the exists z. What I'm going to do is capture the dependency by turning z into a function, and the same thing happens over here: I have exists y, and I replace this lowercase y with a big Y that depends on the variables that are universally quantified outside the scope here. Yeah?
[inaudible] So "loves all animals" is in, I guess, the first part. So everyone who loves all animals is loved by someone. So this is the "someone" part. [inaudible] Because here I push the negation inside. Yeah. So remember, when I push a negation past a for-all, it becomes an exists. Okay. So now I can distribute or over and, to change the order of these connectives, because in CNF I want a conjunction of disjunctions, not a disjunction of conjunctions. And finally, I just ditch all the universal quantifiers. Okay. So I don't expect you to follow all of that in complete detail, but this is just giving you the basic idea. Okay. So now we're ready to state the resolution rule. And this should look very familiar: it's the same resolution rule as before, but now all of these things are not propositional symbols but atomic formulas. And now this is not p and not p, but p and not q, because these in general might be different, and I need to unify them. And then I will take the substitution returned by unification and apply it to the result, the same way we did for modus ponens. So here's an example: here I have Animal or Loves, and over here I have not Loves or Feeds. And what do I do? I try to unify this Loves with this not Loves, and I get this substitution: u has to be replaced with Z(x), and v with x. And that allows me to cancel these now -- now I've made them equal. And then I take the remaining parts and apply the substitution, so this Feeds(u, v) becomes Feeds(Z(x), x). Okay. So there's a bit more intuition I could provide, but this does become a little bit abstract, and you just kind of have to trust that resolution is doing its job. I personally find it kind of difficult to look at intermediate stages of logical inference and really get any intuition about the individual pieces. But that's why you define the principles: to prove that they're right, and then you trust that logical inference does the right thing. Okay.
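That single resolution step can be sketched concretely (my own toy clause representation, reusing the earlier unification convention: lowercase variables, tuples for atoms, ('not', atom) for negated literals; the argument of the Animal literal is my reconstruction of the board):

```python
# One first-order resolution step: unify the complementary pair of literals,
# cancel them, and apply the unifier to the remaining literals.

def is_var(t):
    return isinstance(t, str) and t[0].islower()

def substitute(theta, t):
    if isinstance(t, tuple):
        return tuple(substitute(theta, p) for p in t)
    return substitute(theta, theta[t]) if t in theta else t

def unify(f, g, theta):
    f, g = substitute(theta, f), substitute(theta, g)
    if f == g:
        return theta
    if is_var(f):
        return {**theta, f: g}
    if is_var(g):
        return {**theta, g: f}
    if isinstance(f, tuple) and isinstance(g, tuple) and len(f) == len(g):
        for a, b in zip(f, g):
            theta = unify(a, b, theta)
            if theta is None:
                return None
        return theta
    return None

def resolve(c1, pos, c2, neg):
    """Resolve c1 (containing positive literal pos) against c2 (containing
    the negated literal ('not', neg)); returns the resolvent, or None."""
    theta = unify(neg, pos, {})            # match the complementary pair
    if theta is None:
        return None
    rest = ([l for l in c1 if l != pos] +
            [l for l in c2 if l != ('not', neg)])
    return [substitute(theta, l) for l in rest]

c1 = [('Animal', ('Y', 'x')), ('Loves', ('Z', 'x'), 'x')]
c2 = [('not', ('Loves', 'u', 'v')), ('Feeds', 'u', 'v')]
print(resolve(c1, ('Loves', ('Z', 'x'), 'x'), c2, ('Loves', 'u', 'v')))
# [('Animal', ('Y', 'x')), ('Feeds', ('Z', 'x'), 'x')]
```

The unifier is exactly the one from the lecture, u to Z(x) and v to x, and Feeds(u, v) becomes Feeds(Z(x), x) in the resolvent.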
To summarize, we've talked about propositional logic and first-order logic. So for inference in propositional logic, you can just do model checking, which means you convert it to a CSP and solve it. In first-order logic, there's no way to enumerate all the possible infinite models, so you can't do that. But in certain cases you can propositionalize and reduce first-order logic to propositional logic. Or you can stick with inference rules. And if you stick with inference rules, you can use modus ponens on Horn clauses, or, if you don't want to restrict to Horn clauses, you can use resolution. And the only thing that's different about first-order logic here is the "plus plus," which means that you have to use unification and substitution. Okay. Final takeaway: there's a lot of symbol manipulation in the details here, but I wanted to stress the importance of logic as an expressive language to represent knowledge and reason with it. And the key idea in first-order logic is the use of variables. These are not the same notion of variables as in CSPs; in propositional logic, we just had propositional symbols, which are like the simplest thing in logic. So in first-order logic, we've gone up a layer in the expressive hierarchy, and variables here allow you to give compact representations to very rich things. So again, if you don't remember anything else, just remember the takeaway that logic allows you to express very complicated and big things using small formulas. Okay. So that's it. On Wednesday I'll be giving a lecture on deep learning, and then we have the poster session after Thanksgiving, and then the final lecture, which will sum everything up. So, okay, I will see you at the poster session, and good luck on the exam.
MIT_8323_Relativistic_Quantum_Field_Theory_I_Spring_2023 | Lecture_15_Classical_Solutions_of_Dirac_Equations.txt | [SQUEAKING] [RUSTLING] [CLICKING] PROFESSOR: So I hope you all have had a great spring break. So before the break, we talked about the Dirac equation. We introduced the equation and showed that it is Lorentz covariant. And then we started talking about the classical solutions of the Dirac equation, because finding the complete set of solutions of the Dirac equation is the necessary step for quantizing it. So let us consider the classical solutions of the Dirac equation. And recall the Dirac equation is given by the following. These gamma matrices are 4x4 matrices, and psi is a four-component complex vector. And we already know, just from setting up the Dirac equation and the properties of the gamma matrices, that this should have solutions proportional to a plane wave with k squared equal to minus m squared. So our goal is to find the pre-factors. So here we separate into two kinds of solutions. This just reminds you -- we discussed this at the end of the last lecture -- we call u(k) the solution corresponding to the positive sign; we always take k to have a positive time component, and so this is the positive-energy solution. Then we also have the so-called negative-energy solution. These are just names; it does not really mean that they have positive energy or negative energy. It just refers to the sign here. So now, if you plug this into the Dirac equation, then you get an algebraic equation for u. You find (i k-slash minus m) u equal to 0, and (i k-slash plus m) v equal to 0. So u and v satisfy those equations. And I suppose you're already familiar with the notation -- this k-slash. So those equations, in principle, you can just solve; these are just linear equations. But still, there are tricks for how we solve them so that it's most transparent.
So there are two methods of finding u and v. One method is to first go to the rest frame -- say we take k mu to have zero spatial momentum, and then k mu will just be (m, 0). In this case, those equations become particularly simple. But before we solve them, we should also specify the gamma matrices we are going to use. The gamma matrices we will use are given by the following -- again, each entry is a 2x2 block. As we discussed before, there are many different choices of gamma matrices, and some of them are convenient for certain purposes. As we will see, this one actually makes the non-relativistic limit more manifest; it reduces conveniently to the non-relativistic limit. And so in the rest frame, those equations simplify -- essentially, they become eigenvalue equations for gamma 0. So, denoting the rest frame by 0, the equation for u just becomes this in the rest frame, and for v, this. Essentially, u0 and v0 just correspond to eigenvectors of gamma 0, because gamma 0 times u gives you a constant times u; you can move that constant to the right-hand side, and then it just looks like an eigenvalue equation. And so you can plug this in and easily solve them, which we already discussed last time. You find that there are two independent solutions for each. So gamma 0 is a 4x4 matrix, and it has two eigenvalues, so each of u and v has two independent solutions. For example, one solution u1 is just given by (1, 0, 0, 0), and similarly for u2. This we already discussed at the end of the last lecture -- I'm just trying to remind you. So in this rest frame, the solutions are particularly simple, and essentially you have two independent solutions.
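The transcript does not show the particular 2x2-block gamma matrices written on the board, so as an illustration here is a numerical check of the defining Clifford algebra in the standard Dirac basis with a mostly-minus metric. This is an assumed convention for the sketch; the lecture's own signature (where k^2 = -m^2 on shell) differs, but the block structure and the rest-frame eigenvalue picture are the same.

```python
import numpy as np

# Standard Dirac basis with metric eta = diag(+1, -1, -1, -1) (an assumption
# of this sketch, not necessarily the lecture's convention).
I2, Z2 = np.eye(2), np.zeros((2, 2), dtype=complex)
sig = [np.array([[0, 1], [1, 0]], dtype=complex),      # Pauli matrices
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]], dtype=complex)]

g0 = np.block([[I2, Z2], [Z2, -I2]])
gam = [g0] + [np.block([[Z2, s], [-s, Z2]]) for s in sig]
eta = np.diag([1.0, -1.0, -1.0, -1.0])

# Defining relations: {gamma^mu, gamma^nu} = 2 eta^{mu nu} * identity.
for mu in range(4):
    for nu in range(4):
        anti = gam[mu] @ gam[nu] + gam[nu] @ gam[mu]
        assert np.allclose(anti, 2 * eta[mu, nu] * np.eye(4))

# In the rest frame the equations become eigenvalue equations for gamma^0,
# and since gamma^0 is diagonal here, (1,0,0,0) etc. are eigenvectors.
assert np.allclose(g0 @ np.array([1, 0, 0, 0]), np.array([1, 0, 0, 0]))
print("Clifford algebra verified")
```

In this basis the two +1 eigenvectors of gamma^0 give the two u solutions and the two -1 eigenvectors give the two v solutions, matching the counting in the lecture.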
And then you can take an arbitrary linear superposition of them, because we are solving linear equations -- similarly for v. So you can take arbitrary linear superpositions of these, and then you have solved them. And then for general u(k) and v(k), you can just perform a Lorentz transformation: u(k) is obtained as S(lambda) u(0), and v(k) is given by S(lambda) v(0). So remember our notation: S(lambda) is the Lorentz transformation acting on the spinors. And the lambda is just determined by the Lorentz transformation that takes this k0 to k. So we take k0, we make the Lorentz transformation to k, and then the corresponding u(k) is given by that. But in practice, if I give you an arbitrary k, you first have to work out the lambda, and then you have to work out this S(lambda). So working out S(lambda) is possible, but again, it takes some effort. But there's a shortcut: we can actually find the solutions directly. So this is one way to do it, and there's an alternative way. So before I talk about this alternative way, do you have any questions? This is more or less where we stopped before the break. Any questions? OK, good. So alternatively, we can find a simpler way. The simpler way is by observing an identity: if you look at k-slash squared -- you have done something similar in your pset -- this quantity, by definition, is k mu gamma mu k nu gamma nu. And then you use the properties of the gamma matrices, and you can easily convince yourself that this just gives you k squared. And then, because we are looking at on-shell solutions, this is just equal to minus m squared.
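Written out (assuming the anticommutator normalization that matches the transcript's statement that k-slash squared equals k squared, in a signature where k^2 = -m^2 on shell), the identity is:

```latex
\slashed{k}^2 \;=\; k_\mu k_\nu\, \gamma^\mu \gamma^\nu
          \;=\; \tfrac{1}{2}\, k_\mu k_\nu\, \{\gamma^\mu, \gamma^\nu\}
          \;=\; k_\mu k_\nu\, \eta^{\mu\nu}
          \;=\; k^2 \;=\; -m^2 \quad \text{(on shell)}.
```

The middle step uses the fact that k_mu k_nu is symmetric in mu and nu, so only the anticommutator of the gamma matrices contributes.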
Then, this means that if I take (i k-slash plus m) -- remember, up here the equations for u and v have minus and plus m -- and multiply it by (i k-slash minus m), then essentially I have minus k-slash squared minus m squared, and this is actually equal to 0 because of that identity. So now we can immediately observe: to solve that equation, you can just write u as some normalization constant N times (i k-slash plus m) acting on some arbitrary spinor u tilde, and then that equation is automatically satisfied. And similarly for v: if I take some normalization constant times (minus i k-slash plus m) acting on some v tilde, then the v equation is again automatically satisfied because of this identity. And now, we know that there are only two independent solutions for both u and v, so we can choose a basis for u just by taking this u tilde to be, say, u_s(0); then this would be a complete basis of solutions, with s going from 1 to 2, because for each of them we have two independent solutions. And the reason we can choose u_s here: you can check that when you take k to be k0, this essentially just reduces to u_s(0). So this is consistent -- in that limit, you just reduce to this basis. And N is a normalization constant. So the u_s and v_s then provide a basis of solutions. And we can be sure this is a complete basis, because we already know the solutions in the rest frame, and we have the right number of independent solutions. Yes? STUDENT: How do we know the normalization constant is the same for u and v?
PROFESSOR: Oh, yeah, here we don't know. But you can guess why it should be the same: the equations look pretty symmetric, and the normalization constant doesn't depend on the sign of m, essentially. In principle, I should write N1 and N2, but in the end they are the same. Good. Any other questions? So you can also take the conjugate, and then you get the u dagger, or u bar, and the v dagger, v bar. For example, you just take the conjugate, and then you find u_s bar -- I hope you still remember the notation: u_s bar is u_s dagger times gamma 0. So the conjugate solution will be the same N (we can take N to be real) times u_s bar(0) times (i k-slash plus m). You just use the properties of the gamma matrices. So now you can easily check, again using this identity, that u and v are actually orthogonal in the following sense: u_r bar v_s is equal to v_r bar u_s is equal to 0 for arbitrary r, s equal to 1 and 2. So u and v are orthogonal. And then we need to fix the normalization of u and v. It's convenient to fix the normalization as follows. We will use the convention u_r bar(k) u_s(k) equal to 2m delta_rs, and similarly v_r bar(k) v_s(k) equal to minus 2m delta_rs. So the 2m is just a convention, for convenience. And you can check yourself that for different values of r and s, they are also orthogonal to each other. You can see this already in the rest frame -- you can easily check it using those expressions -- but you can also check it again using this identity. And when r and s are the same, we normalize the constant to be 2m.
And from here, you can deduce that N is equal to 1 over the square root of E plus m. So from now on, I will also use this notation for E: the energy E is omega_k. Good. So this gives you a complete basis of solutions to the Dirac equation. Now let me just write them down, to give you a little bit of intuition from the explicit expressions. With those gamma matrices, we can just plug in here, and we know u_s(0), v_s(0), and N, so in principle we can write them down explicitly. You find that in that basis, u_s(k) is given by the following: square root of E plus m times xi_s in the upper components, and sigma dot k divided by square root of E plus m times xi_s in the lower components. And v_s(k) is given by sigma dot k divided by square root of E plus m times xi_s in the upper components, and square root of E plus m times xi_s in the lower components. And xi_s is a two-component vector, with s from 1 to 2: xi_1 is given by (1, 0), and xi_2 is given by (0, 1). So you can see that in the rest frame, k equal to 0, for u you're just left with the upper half, because the lower part vanishes, and then we have the (1, 0) and the (0, 1). And when k equals 0, for v you're just left with the lower half, and again we have the (1, 0) and the (0, 1). So this is consistent with the rest-frame result we obtained earlier. Good. Any questions on this? Yes? STUDENT: What's sigma? PROFESSOR: Oh, sigma is just the sigma matrices: sigma 1, sigma 2, sigma 3, the Pauli matrices. Good? So let me just mention a few remarks on the solutions. Being familiar with the properties of these solutions will actually be very useful later when you do Feynman diagram calculations, because when you involve, say, electrons -- fermions -- it's no longer just a plane wave; these u and v will enter your Feynman diagram calculations. And if you are familiar with their properties, it will help you to do such calculations.
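As a concrete check, again in the standard mostly-minus Dirac basis (an assumed convention that differs from the lecture's signature but yields the same explicit two-block spinors), one can verify numerically that these u_s(k) solve the Dirac equation and obey u-bar u = 2m:

```python
import numpy as np

# Dirac basis, metric diag(+,-,-,-) -- an assumption of this sketch.
I2, Z2 = np.eye(2), np.zeros((2, 2), dtype=complex)
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]], dtype=complex)]
g0 = np.block([[I2, Z2], [Z2, -I2]])
gam = [g0] + [np.block([[Z2, s], [-s, Z2]]) for s in sig]

m, kvec = 1.0, np.array([0.3, -0.4, 0.5])      # arbitrary mass and momentum
E = np.sqrt(m**2 + kvec @ kvec)                # on-shell energy
sk = sum(kvec[i] * sig[i] for i in range(3))   # sigma . k

def u(s):
    """u_s(k) = ( sqrt(E+m) xi_s , (sigma.k)/sqrt(E+m) xi_s )."""
    xi = np.eye(2)[:, s]                       # xi_1 = (1,0), xi_2 = (0,1)
    return np.concatenate([np.sqrt(E + m) * xi,
                           (sk @ xi) / np.sqrt(E + m)])

# k-slash = k_mu gamma^mu = E gamma^0 - k . gamma in this signature.
kslash = E * gam[0] - sum(kvec[i] * gam[i + 1] for i in range(3))

for s in (0, 1):
    us = u(s)
    assert np.allclose(kslash @ us, m * us)    # Dirac equation: (kslash - m) u = 0
    ubar = us.conj() @ g0                      # u-bar = u-dagger gamma^0
    assert np.isclose(ubar @ us, 2 * m)        # normalization: u-bar u = 2m
print("Dirac equation and normalization verified")
```

The same construction with the upper and lower blocks swapped gives the v_s(k), with v-bar v = -2m.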
So let me first mention some properties -- make some remarks. The first is the non-relativistic limit. That corresponds to taking the magnitude of k divided by m to 0, and in this limit E is approximately equal to m. Then you find the leading order when you expand in 1 over m. So you can expand this in 1 over m, and at leading order you can just work out these two. This term you can certainly ignore: it is of order k divided by square root of m, which is higher order than this one, which is of order square root of m. So to leading order, you have square root of 2m times (xi_s, 0). Similarly, v_s(k) is, at leading order, square root of 2m times (0, xi_s). So essentially, what you see in the non-relativistic limit -- for example, if we look at the positive-energy solutions -- is that u(k) just reduces to an arbitrary two-component complex vector. You can do linear superpositions, and with this basis, it just becomes an arbitrary two-component complex vector times a constant. So this should remind you of how we describe a spin-half particle in the non-relativistic limit. In non-relativistic quantum mechanics, the way we describe a spin-half particle is precisely by a two-component vector. So this is exactly the wave-function description of a spin-half particle in non-relativistic quantum mechanics. I will not go through it here, but you can show that in the non-relativistic limit, the solutions actually decouple: in the end, when you solve the equation, only one half of the solution remains, and you just get the positive-energy solution.
So in that limit, you essentially only reduce to two components. So this is the first remark. So this is the first indication we see that this should likely describe a spin-half particle. The Dirac theory likely describes a spin-half particle. So the second limit we can consider is considered the ultra relativistic limit. So this is a limit which E is much greater than-- energy is much greater than M. And E then will be approximate, say, to k-- to the magnitude of k. And now let's also define the direction of k by just k divided by its magnitude. So we also introduced the k hat is the unit vector along the direction of the k. So in this case, you can do a small m expansion-- just do a small expansion-- this expression for u and v. And then you find the leading order-- us k given by square root of E. And then you have xi s, and then you have sigma k hat xi s. This is for the us. And for the vs, you find-- so that's the-- so you find that solution simplify, and so that's the result you get. So it's not manifest from this expression, but actually, there's something very special in this simplified expression in the ultra relativistic limit. So if you look at-- so the expression for u and v-- these are linearly independent, in the sense that no matter how you do linear superposition of u, you will not be able to get v. And also how you do linear superposition of v, you won't get u. But in this limit, even though it's not clear-- not manifest in this form, but in fact, these two are linearly related to each other. So this can be seen as follows. So if you use the identity that's sigma dot k square-- so this is the analog of this equation for the sigma matrices-- for the Pauli matrices. And this, you can easily-- again, using the property of Pauli matrices, you can convince yourself that this is just giving you k hat squared. And this is actually-- because this is unit vector, this is just 1. So now, if you use this property, this is equal to 1. 
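The two facts just used-- that (sigma dot k hat) squared is 1 for a unit vector, and the resulting collapse of the v's into the span of the u's in the ultra-relativistic limit-- can be verified directly with the limiting spinors written above. A numerical sketch (variable names are mine):

```python
import numpy as np

sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]], dtype=complex)]

def sigma_dot(k):
    return sum(ki * si for ki, si in zip(k, sig))

# (sigma . khat)^2 = 1 for a unit vector khat
khat = np.array([2.0, -1.0, 2.0]) / 3.0          # |khat| = 1
sq = sigma_dot(khat) @ sigma_dot(khat)

# ultra-relativistic spinors (overall sqrt(E) dropped):
#   u ~ (xi, sigma.khat xi),  v ~ (sigma.khat xi, xi)
xi1 = np.array([1, 0], dtype=complex)
xi2 = np.array([0, 1], dtype=complex)
u1 = np.concatenate([xi1, sigma_dot(khat) @ xi1])
u2 = np.concatenate([xi2, sigma_dot(khat) @ xi2])
v1 = np.concatenate([sigma_dot(khat) @ xi1, xi1])

# v built from eta = sigma.khat xi has the u-form for eta,
# because (sigma.khat)^2 eta-step gives back xi
eta = sigma_dot(khat) @ xi1
v1_from_u = np.concatenate([eta, sigma_dot(khat) @ eta])
```

Since the u-form is linear in the two-component vector, v1 is exactly the superposition of u1 and u2 with coefficients given by the components of eta.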
And then we find-- let eta s equal to sigma dot k hat xi s. And then you find that the vs k can be written in the exact same form as us k. You just replace xi s by eta s. And then you can write it in the following form-- square root E, eta s, sigma dot k hat eta s. So obviously, this is equivalent to that once you plug in the eta s equal to sigma dot k hat xi s, because this just gives you that. And when you plug it in here, you get the sigma dot k hat squared, which is 1, and again, you reduce to that. But now, this has exactly the same form as this one, except now eta s is a linear superposition of the xi s. Then that means that the vs are linear superpositions of the us. So they are no longer independent-- they are not linearly independent. So actually, if you remember one of the things we discussed before, this may not appear a surprise to you, because previously, we already discussed this when we derived the Dirac equation. So the Dirac equation-- in order to satisfy all the properties for non-zero m, we need four-dimensional matrices. So that means psi is a four-component vector. But we also mentioned that if you set m equal to 0-- in the massless case, actually, you can use two-by-two gamma matrices. So that means that if you solve the massless Dirac equation, you only have two independent solutions. And this ultrarelativistic limit is precisely like the massless limit, because the energy is much greater than the mass, and then you can neglect the effect of the mass. So this is consistent with the fact that the massless Dirac equation only requires two components rather than four components. Yes? STUDENT: So in the ultrarelativistic limit, they're linearly related, but that relationship v bar times u equals 0 still holds, right? PROFESSOR: Yeah, there you have to be a little bit careful.
STUDENT: Does that mean that's not to be thought of as an inner product on the spinor space? PROFESSOR: Yeah, so that's a very good question. So you have to be a little bit careful when you take the limit in deriving that relation. Yeah, you just have to be a little more careful in understanding it. Yeah, I forget exactly what happens to that equation, but I remember there are some subtleties. Yeah, you have to be careful. Good. Other questions? So also there are some other relations which will be useful later. So let me also list some useful relations. So one of them is that if you look at ur dagger k us k-- again, this is orthogonal between r and s. And then it's given by 2E delta rs. So before it was bar. So the difference here is the bar. When you have a bar, you have an additional gamma 0. And there it was 2m, and here it becomes 2E. And there, you have an additional i because you have a gamma 0-- remember, gamma 0 squared is minus 1. So gamma 0 actually has pure imaginary eigenvalues, so that's why there's a difference of an i there. And similarly, the vr dagger k vs k is equal to 2E delta rs. So those expressions will be very useful later when you actually do calculations. Yeah, I forgot to mention-- the reason that limit is subtle is because, remember, the normalization we do here involves m, and so you have to change your normalization when you go to the massless case. Yeah, so there are various things you have to be careful about. So you have that, and then you also have the relation for ur dagger k-- so because everything is on-shell, in principle, I only need to specify the spatial momentum. So for convenience, let me just specify the spatial momentum, which is easier in notation. So now it turns out ur and the vs-- so now with a dagger. When you have bar, they're orthogonal. We have the expression there.
But when you have dagger, it turns out it's orthogonal to minus k. You have to reverse your spatial momentum: ur dagger k vs minus k is equal to 0, and similarly vr dagger k us minus k. So this is equal to 0, but ur dagger k vs k is actually not equal to 0. So they're not orthogonal. Only the bar relation is orthogonal, and then this reversed-momentum relation is orthogonal. And similarly with the conjugate of this. So those relations-- you can just check them using the explicit form of those expressions, with our conventions here. You can just check that those expressions are true, and so I will not go into that. But actually, more remarkably-- so when we derived these expressions, we actually used the explicit form of the gamma matrices. So those expressions depend on the explicit form of these gamma matrices. So if you have a different set of gamma matrices, then you get different forms. But of course, physically, they're all equivalent. But it turns out that those relations are actually independent of the choice of gamma matrices. So let me just label them-- let me call this one star, and this one star star. So star and star star are actually independent of the choice of representation of the gamma matrices. So they're actually universal for any choice of gamma matrices. So this is very useful, because when you do calculations, you don't have to worry about which choice of gamma matrices you make, and those expressions are true. So you can actually derive those expressions without using explicit gamma matrices-- just abstractly, only using the properties of the Dirac equation itself and the general properties of the gamma matrices, without using explicit representations. Yes? STUDENT: So is the other normalization one dependent on representation? PROFESSOR: Which one? STUDENT: The one with ur bar v. PROFESSOR: No, that also does not depend on the representation. Yeah, this normalization does not depend on the representation.
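These dagger relations are also easy to verify with the explicit spinors. A numerical sketch in the Dirac basis (helper names mine): ur dagger(k) vs(-k) vanishes because the two sigma-dot-k cross terms cancel with opposite signs, while ur dagger(k) vs(k) does not vanish.

```python
import numpy as np

sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]], dtype=complex)]

def sigma_dot(k):
    return sum(ki * si for ki, si in zip(k, sig))

def u_spinor(k, xi, m=1.0):
    E = np.sqrt(k @ k + m * m)
    return np.concatenate([np.sqrt(E + m) * xi,
                           sigma_dot(k) @ xi / np.sqrt(E + m)])

def v_spinor(k, xi, m=1.0):
    E = np.sqrt(k @ k + m * m)
    return np.concatenate([sigma_dot(k) @ xi / np.sqrt(E + m),
                           np.sqrt(E + m) * xi])

k = np.array([0.4, -0.2, 0.9])
xi1 = np.array([1, 0], dtype=complex)

# np.vdot conjugates its first argument, i.e. this is u^dagger v
same_k     = np.vdot(u_spinor(k, xi1), v_spinor(k, xi1))    # generically nonzero
reversed_k = np.vdot(u_spinor(k, xi1), v_spinor(-k, xi1))   # exactly zero
```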
But this expression somehow is-- yeah, that's just our definition. We just define them this way. So this just follows from those general expressions-- just from this equation. But this equation is less obvious, because without gamma 0, it's less directly related to the Dirac equation itself. But it turns out this expression is also independent of the choice of gamma matrices. Any other questions? I can give you a derivation of this without using gamma matrices, but that takes a few minutes, so I will skip it. You can read it in Peskin-- Peskin has a derivation of this. Good? But you should remember this yourself-- just remember it. So the next remark is that often we actually need to use projections-- so you can also construct projectors onto the positive and the negative solution spaces. So consider the projector onto the space spanned by the us, which can be defined as follows. So let me call it p plus. So there's a 1 over 2im-- that's coming from the normalization. Then, just sum s equal to 1 and 2, us k and us bar k. So you can easily check yourself, because of the orthogonality relations-- because u and v are orthogonal-- that when you act with this on a general solution, it will just project onto solutions that only involve linear superpositions of u. So you can check this yourself explicitly-- just use the properties of this u. I will not do it here. And you can check that this is indeed a projector: if you look at p squared, it gives back itself. And with a little bit more effort, you can further show that the p plus can be written in the following form: ik slash plus m divided by 2m. So this requires a little bit more effort, so I will leave it as an exercise for yourself, or you can get it from the book. And similarly, you can define the projector onto the space of v.
So the p minus-- you can also define p minus with a minus 2im, because of the minus sign here. So 1 over minus 2im, sum over s, vs k vs bar k. And you can show that this also can be written as something like this: minus ik slash plus m divided by 2m. So again, this I will leave as an exercise for yourself. So this is the projector onto the space of v's. And then from these two expressions for p plus and p minus, you easily see that p plus plus p minus is equal to 1. And from the expression I just erased-- taking the product-- p plus p minus is equal to p minus p plus equal to 0. So indeed, they are just projectors-- orthogonal projectors onto the different spaces. So this concludes our discussion of the classical solutions of the Dirac equation and their properties. So do you have other questions? Yes? STUDENT: So we've seen hints that this corresponds to spin 1/2, but wouldn't it make more sense to just define the angular momentum operator and see whether it has eigenvalues or not? PROFESSOR: Yeah, for that you would need to quantize the theory, but we haven't quantized the theory. Yeah, that would be our next step. So once we have found the full set of classical solutions, now we can quantize it. And once we quantize it, then we can actually really find this angular momentum. Other questions? And indeed, actually, I think that would be one of your pset problems-- to show yourself that this is spin-half. Yeah. Other questions? OK, good. So now we conclude the discussion. Now we can quantize the theory. Now we can go to the quantum system. So remember, the action for the Dirac field is the integral d4x of psi bar, gamma mu partial mu minus m, psi. So this is the action which gives us the Dirac equation. So in order to quantize this, remember our procedure to quantize a theory.
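The projector algebra just stated (p squared gives back p, the two projectors are orthogonal, and they sum to 1) can be checked numerically. Note that the lecture's conventions (mostly-plus metric, gamma 0 squared equal to minus 1) carry the factors of i above; the sketch below instead uses the more common mostly-minus, Peskin-type Dirac basis, in which the same projectors read (plus-or-minus p slash + m)/2m -- the algebra is identical.

```python
import numpy as np

I2, Z2 = np.eye(2), np.zeros((2, 2))
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]], dtype=complex)]

# Dirac basis, mostly-minus metric: (gamma0)^2 = +1, (gamma_i)^2 = -1
g0 = np.block([[I2, Z2], [Z2, -I2]])
gam = [np.block([[Z2, s], [-s, Z2]]) for s in sig]

m = 1.0
k = np.array([0.3, -0.4, 1.2])
E = np.sqrt(k @ k + m * m)                      # on-shell energy
pslash = E * g0 - sum(ki * gi for ki, gi in zip(k, gam))

P_plus  = ( pslash + m * np.eye(4)) / (2 * m)   # projects onto u-space
P_minus = (-pslash + m * np.eye(4)) / (2 * m)   # projects onto v-space
```

The checks below all rely on the on-shell condition p squared = m squared; with an off-shell momentum, P squared would no longer reproduce P.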
So we need to first find the canonical momentum, and we also need to impose the canonical commutation relation, and then we expand the operators in terms of a complete set of classical solutions-- and then that corresponds to the solutions of the operator equations. So let's try to find out what the canonical momentum for psi is. So as in the complex scalar case, we can treat psi and psi dagger as independent variables. So then the canonical momentum for psi-- if you look at this expression, you look at the partial 0 psi term. So the Lagrangian density-- if I write it out a bit-- psi bar is psi dagger gamma 0, so the time-derivative part has the form minus i psi dagger gamma 0 gamma 0 partial 0 psi, plus the rest of the terms. So in order to find the canonical momentum, we are only interested in the part of the Lagrangian density related to the time derivative of psi. And that's the only term involving a time derivative of either psi or psi bar or psi dagger. Since gamma 0 squared is equal to minus 1, this is just i psi dagger partial 0 psi. So now, we can immediately find that the conjugate momentum for psi is given by i psi dagger, because when you take a derivative with respect to partial 0 psi, you just get i psi dagger. But the conjugate momentum for psi dagger is actually equal to what? So what is this? STUDENT: 0. PROFESSOR: This is equal to 0, because there is no time derivative on psi dagger. So the fact that this is equal to 0 tells you that this Dirac system is actually a constrained system. So this is considered to be a constraint, because you can no longer proceed normally-- so remember, from the Lagrangian, you find the canonical momentum, and then you invert the canonical momentum to find the Hamiltonian. But when you have this equal to 0, you cannot invert it.
You cannot express your, say, psi or psi dagger in terms of pi or pi dagger. And so this is a constrained. So that means you actually have a constrained system. It means you have a constrained system. And frankly, constrained systems are very annoying. So they have involved lots of-- you have a lot of formalism in order to, say, treat constrained system-- to quantize them, et cetera. But fortunately, in this case, there's actually a simple trick to go around it. And so now, I will not use this full-fledged constrained quantization to do the job. I will just use a simple trick which will reach the same answer. Yes? STUDENT: Is it like the canonical momenta not well-defined, because I could integrate by parts to move around the time derivative because I can move the time derivative from psi to psi dagger, right? PROFESSOR: Yeah, so there's ambiguity. So that's part of the reflection that this is a constrained system. So this is actually typical to have these kind of constraints. It's typical whenever you have a first-order system. So here, it's different from the Klein-Gordon action is because here, we only involving the first derivative-- one derivative rather than two derivatives. And so this is not our standard system. So the way to go around this is the following. The way to go around it is the following. It's now-- so this equation-- if we look at this equation-- so this equation tells you that the canonical momentum conjugate to psi is actually psi dagger. So also here it's a little bit funny, because you remember in the standard story, the canonical momentum is related to the time derivative. And here, there's no time derivative here-- just psi dagger itself. And this is another indication that this is a constraint rather than-- yeah. So anyway, here, it tells you that the dagger is, in fact, the canonical moment of psi. 
And now, in this equation, we can just try to interpret psi dagger as the momentum rather than as a configuration variable. So now, let's rewrite the Lagrangian. So the Lagrangian density has the following form: minus i psi bar, gamma mu partial mu minus m, psi. So we'll rewrite it as follows. Let's first separate the time derivative. So this is i psi dagger partial 0 psi-- just the term involving the time derivative. And for the rest, I have psi bar, gamma i partial i minus m, psi. But these terms no longer involve the time derivative. And now, we just interpret this i psi dagger as the canonical momentum for psi, and then this is just something involving spatial derivatives of psi, and I just interpret this as the Hamiltonian for psi. So this expression just involves psi dagger and psi with some spatial derivatives-- essentially just some function of pi psi and psi, maybe with some spatial derivatives. So now, you see that this Lagrangian density-- if I interpret psi dagger as giving the canonical momentum-- has the form of the Legendre transformation that takes you from the Lagrangian density to the Hamiltonian density. And then I can just interpret this as the Hamiltonian. So we just treat psi dagger and psi as spanning the full phase space. So this is the momentum and this is the coordinate-- so this is the full phase space, rather than, as previously, interpreting them as a configuration space. And then the Hamiltonian density is just equal to i psi bar, gamma i partial i minus m, psi. So this is the Hamiltonian density. So now this just goes back to the standard formalism. Any questions on this? Yes? STUDENT: So can we still interpret psi dagger as a different field than psi? PROFESSOR: It's the canonical momentum. STUDENT: So it's no longer two different fields?
PROFESSOR: Yeah, it's no longer two different fields. For example, if they were two different fields, then psi and psi dagger should commute. But now, since psi dagger becomes the canonical momentum, they no longer commute. Yes? STUDENT: If you take psi bar and then you pull out the gamma 0, isn't this just like the right-hand side of our earlier version of the Dirac equation-- the one with the alpha and beta-- the other Dirac equation we wrote down earlier? PROFESSOR: Yeah, that's right. It's essentially just that Hamiltonian. Yeah, that's right. Yes? STUDENT: Is there not a problem with this treating the time differently? Is it no longer Lorentz invariant? PROFESSOR: Yeah, so when we did the Klein-Gordon equation in the scalar theory-- when you quantize, you always have to treat time differently. And then the Lorentz symmetry comes out in the properties of the states. When you quantize, you always have to treat time separately. Yes? STUDENT: So now the momentum operator is just the complex conjugate of psi? PROFESSOR: Yeah, exactly. STUDENT: So it's still constrained, like before. PROFESSOR: Yeah, that's why we call this a constrained system. But now, I'm just giving you a shortcut so that we don't have to go through the intricacies of constrained quantization. So now, the canonical quantization becomes easy. So now, this is my momentum, and we just have the canonical quantization. I think today, we may have unfortunate timing. So now, psi is the coordinate, and we have pi psi equal to i psi dagger, which is the momentum. So this, by definition, should give you the delta function in the equal-time commutation relation. And here, remember, psi has four components.
So let me just now suppress this subscript psi because there's only one variable here now. So here, there's also coefficient-- so this is also four vector beta, and then here should be delta alpha beta. So alpha beta are spinor indices. So if you impose-- and this is equivalent just to psi alpha t, x psi dagger beta t, x prime equal to i delta alpha beta x x minus prime. And so we can now expand psi x in terms of complete set of solutions. So this is the first step of the quantization. So first is-- this is the commutation relation. So the second step-- remember, we can expand the psi in terms of complete basis of classical solutions. So psi x just equal to-- again, we integrate over all spatial momentum. Again, we use this factor for convenience as in the scalar case. And now, we just need to sum over all possible solutions. So here, we have a sum over s equal to 1 and 2. So we have-- then we have ak s us exponential ik x. And then plus bk-- sorry. k should be here-- sorry. bk s and vs exponential minus ik x. So these are the complete-- because complete set of solution are u and v. And we call the coefficients of a and b, so let me call it b dagger as we did for the complex scalar case before. So we-- so the coefficient, we call them a and b. Actually, it doesn't matter. We call it b. Actually, let me just call it b. Yeah, so you can just-- you have u and v, and then the coefficient will be a and b. And so now, you can just plug this. So if this equation is 1 and this equation is 2, and you take the complex conjugate, psi dagger, you can plug it in here to find what's the commutation relation between a and b. So you find that the-- yeah, so plug 2 into 1. So you find the commutation relation ak s ak prime p, then equal to bk s and bk prime t. Then, equal to 2 pi cubed delta st, delta 3 k minus k prime. So now-- and you can also define the vacuum. And all other commutators are 0-- with all others 0. 
And you can also define the vacuum, as we did for the scalar case, to be annihilated by ak and bk. And now, you can, in principle, build your Hilbert space. Yes? STUDENT: For equation one up there, are you supposed to not have factor pi just because momentum is i psi dagger? PROFESSOR: Yeah. STUDENT: And then-- PROFESSOR: Oh, right. Sorry, there's no-- yeah, thank you. There's no i here. Yeah, because the momentum is i psi dagger. Yeah, there's no i anymore-- just the delta alpha beta. So all look pretty straightforward, but you actually have problems. You actually have problems. So the first problem some of you may be already asking in your head, is that, are we supposed to describe spin-half particles? But if we are suppose to describe spin-half particles, then why do we find the bosons? So these are commutation relations for bosons. It's because they commute. So different k-- sorry, I think I'm missing dagger. Sorry. So different ak, they commute with each other. As we discussed in the scalar case, when the a mu all commute with each other, this is boson. It means the particle you created, you can exchange them and don't-- yeah, you can arbitrarily exchange them. It's symmetric. But fermions, they are supposed to be Pauli principle. So you're supposed to get the minus sign when you exchange them. So this cannot-- this doesn't look to be right. So this is-- the first thing it seems to be we were finding bosons rather than fermions. And the second is that now if you find that the Hamiltonian-- so now, if you try to find the Hamiltonian-- so H is just the integration of this Hamiltonian density. And you can just plug it in-- plug that expression for Hamiltonian density in. Where is my Hamiltonian density? Yeah, plug that in, and then you will find the-- yeah so you find just this just given by i d3 x psi bar gamma i partial i minus m psi. 
So now, if you plug them in, you just plug them in-- plug this mode expansion in-- and then you find actually-- then you find the following expression-- omega k. And you sum over s from 1 to 2. And then you find it's given by ak s dagger ak s minus bk s dagger bk s, and then up to some infinite constants. So now, you see something problematic here. So what's problematic you see? Yes? STUDENT: The energy can be arbitrarily small. PROFESSOR: Yeah, energy is not positive definite because of this minus sign. So this minus sign tells you that if you excite a lot of these b particles, you can make the energy as negative as you want, and then the theory don't make sense. And then this is a theory with energy unbounded from below. It's a theory with energy unbounded from below. So that means this vacuum is unstable. So there must be something wrong with this procedure. So it turns out that there's a very quick fix to this problem, and this quick fix was invented by Jordan. So he recognized that if you don't do the standard commutation relation-- if you do instead of commutator-- here, you change it to anti-commutator. Remember, this bracket-- this curly bracket means anticommutator. And that means that here, you change it to anticommutator, and then still with the same thing-- and now this becomes anti-commutator. So let me see. I just want to make sure I get the-- yeah, if we change anti-commutator also from the convention, you change this to dagger. Now you change this to dagger as we did for the complex scalar case. And then you just get this. And now, if you do that, and then this becomes plus sign. And now, not only this becomes plus sign-- and also all different ak for different s and t-- they anti-commute with each other. They anticommute each other. So anticommute each other means that if you create a particle-- for example, ak dagger-- and you create another particle, ak prime dagger on 0, and now these two anticommute. 
It means when you exchange all, you get the minus sign. That precisely gives you the minus sign in the Pauli principle-- in the statistics for fermions. And so changing this thing to the anti-commutator-- and so solves all your problem. And your energy now is bounded from below, and then you actually now actually describe some particles which obey the Pauli principle. So yeah, I think we still have 1 minute left, but we-- but it's a good time to stop, because otherwise, other things will take more time. So let me just make a couple remark why the commutator-- why here, you need to do something unconventional. So if you look at this Hamiltonian density-- so even without doing this calculation, you can already conclude from here that this H cannot be bounded from below for the following reason. So from the Dirac equation-- let me just erase here. So from the Dirac equation, you have gamma mu partial mu minus m psi equal to 0. So that means that the gamma i partial i minus m psi is equal to minus gamma 0 partial 0 psi. So now, if you plug this equation into here, and then what you get-- you should find that H is equal to i d3 x psi dagger partial 0 psi. So because the psi-- now this becomes a operate equation, and psi satisfies this equation. And now, you find H becomes this form, which is psi dagger times the time derivative of psi. And time derivative of psi-- remember, psi expanded both in terms of the positive energy solutions and the negative energy solutions. So depends on which omega you have. You can have either positive omega negative omega, because this is the first derivative-- a first time derivative of psi. And so this cannot be a positive definite. So this quantity cannot be positive definite. Anyway-- so you have to do something more radical. If these are conventional functions, and then this won't be positive definite. But now, what we did is that we make these into functions which anticommute. 
When they anticommute, it turns out this is a positive definite quantity. As ordinary functions, this cannot be positive definite. But if they anticommute, then it becomes positive definite. So let's stop here.
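The sign flip from anticommutators can be seen in a two-by-two toy model of a single fermionic mode (matrix names mine): with {b, b dagger} = 1, reordering the troublesome term minus omega b b-dagger gives plus omega b-dagger b minus a constant, so the spectrum is bounded below, and b squared = 0 encodes the Pauli principle.

```python
import numpy as np

# single fermionic mode with states |0>, |1>; b annihilates, b^dagger creates
b  = np.array([[0, 1], [0, 0]], dtype=complex)
bd = b.conj().T

anti     = b @ bd + bd @ b     # {b, b^dagger} -- should be the identity
pauli_sq = b @ b               # b^2 -- should vanish (Pauli principle)

# reordering with the ANTIcommutator flips the sign of the b-term:
#   -w b b^dagger = +w b^dagger b - w
w = 1.0
lhs = -w * (b @ bd)
rhs =  w * (bd @ b) - w * np.eye(2)
number_levels = np.linalg.eigvalsh(bd @ b)   # occupation numbers
```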
MIT_8323_Relativistic_Quantum_Field_Theory_I_Spring_2023 | Lecture_24_Elementary_Processes_in_QED_I.txt | PROFESSOR: Let us start. So last time, we started talking about the following process. So you consider e plus e minus-- so the electron and the positron scatter, and then they will annihilate, and that creates some other particles. OK? So b and the b bar-- the final particles can be different kinds of things. So if we draw the Feynman diagram with the time going up, then the process will be like the following. So the initial particles-- this is the e minus and this is the e plus. And then we have b and the b bar, OK? So we can-- Yeah. So such a process is a very important discovery machine for new particles, OK? Very important for discovering new particles. So let me just mention two examples here. So in 1974 to 1976 at SLAC-- so SLAC is the Stanford Linear Accelerator Center. So they have an electron positron collider. And so there they first discovered the tau particle. And this tau particle-- its mass is about 1.8 GeV. So when your total energy is greater than twice 1.8 GeV, then you can, in principle, pair create this tau-- tau plus and tau minus. And another example is in 1974, again, at SLAC. And the e plus, e minus goes to c, c bar. So c is the charm quark here. OK, it's the charm quark here. So what they observed is the bound state of the c, c bar-- call it J/psi. And the mass of the J/psi is about 3.1 GeV. OK. 3.1 GeV. So this J/psi-- so this particle was discovered at the same time at Brookhaven through the collision of protons, by our own colleague Samuel Ting. And then they both got the Nobel Prize, in 1976. OK. So there was actually a funny story regarding the discovery of this particle. So at Brookhaven, they collide the protons on a fixed target.
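The threshold condition mentioned here-- total center-of-mass energy above twice the particle mass-- is just the statement sqrt(s) > 2m, with s the Mandelstam invariant of the two beams. A quick sketch (the collider numbers are hypothetical, the masses are the round values from the lecture; mostly-minus signature is used, so s comes out positive, whereas the lecture's mostly-plus convention writes s = -(p1+p2)^2):

```python
import numpy as np

def s_invariant(p1, p2):
    """Mandelstam s = (E1+E2)^2 - |k1+k2|^2 for four-vectors (E, kx, ky, kz)."""
    p = np.array(p1, dtype=float) + np.array(p2, dtype=float)
    return p[0]**2 - p[1:] @ p[1:]

m_e   = 0.000511   # GeV
m_tau = 1.8        # GeV, the round value quoted in the lecture

# hypothetical symmetric e+ e- collider with 2 GeV per beam
E_beam = 2.0
kz = np.sqrt(E_beam**2 - m_e**2)
p_electron = (E_beam, 0.0, 0.0,  kz)
p_positron = (E_beam, 0.0, 0.0, -kz)

sqrt_s = np.sqrt(s_invariant(p_electron, p_positron))
can_pair_produce_taus = sqrt_s > 2 * m_tau
```

For head-on symmetric beams the spatial momenta cancel, so sqrt(s) is simply twice the beam energy-- which is why the tau threshold is quoted directly in terms of the total energy.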
So that is a much messier machine, because protons contain lots of quarks and gluons and so it's with strong interactions. So if you collide high enough energy, you can create any particles. But since it's involving strong interactions, and strong interactions have many, many particles. So the so the collider at Brookhaven is very, very messy. OK. So in order to find the new particles, it's very, very difficult. OK. But they managed to find this new particle at Brookhaven. But Samuel Ting was famously extraordinarily meticulous. OK, so he wanted to be absolutely right, so he was checking and checking. So during one of the-- Yeah, so this is a rumor. So the rumor said that during one of the checking process-- and somehow the rumor spread that he discovered a new particle. And so the person was doing the experiment at SLAC and called him up. Even the rumor-- the rumor even-- Yeah, so the news leaked. It even leaked the energy scale of the particle they found. And then the person was doing the register at the SLAC then called the Samuel Ting. He said, oh, I heard you have found a new particle. Samuel Ting said, no, I didn't. Yeah. He didn't want to have the-- Yeah, actually it's not very good, because I'm recorded. I'm recorded. So I think this maybe it's not a good spread rumors. If it's not recorded, actually, I can tell the story. Yeah. Anyway so accordingly, then the register said then, oh, you didn't find the particle. Then I can discover it. Because they already knew the energy scale. They already had the energy scale. And this is a very, very clean machine. So electron, positron, when they collide only energy created. So it don't create a lot of junks. And if you add the right energy, you can immediately find the new particle. You don't need to do much work. And so they put the machine at that energy, immediately, the new particle was discovered. And so they were-- Yeah. So they were awarded the Nobel Prize jointly. Anyway. 
So this was an important discovery, because before that, people didn't believe quarks exist, OK? People didn't quite believe the quark existed, even though there are many models, et cetera. And there are lots of other evidences. But this was the first-- yeah, because C-quark is very heavy. And so c and c bar, they bound not very strongly. And so this is the first direct evidence actually quark existed. And so this actually shocked the community. So this was discovered in October. So they call it the October Revolution. So yeah, it played a very important role for people to accept the existence of quarks. Anyway. So in particular, by looking at the more general process-- so this goes to hadrons. OK, you just collide them. And then you look at it. It's going to hadrons. You can actually show that the quarks have three colors. OK. You can show quark have three colors. And yeah, so we will see that, OK? So before doing that, let's first have to do a calculation, OK? So we have to calculate this explicitly. Let me see. I get my page a little bit. Right. OK. Yeah, so first we have to do a calculation. We have to calculate this process, OK? So after we have calculated this, then it's obvious how this can tell us why the quark have three colors. OK. Yes. AUDIENCE: So since you only see the bound state of the quarks, how can you know for sure that they are a bound state? PROFESSOR: Yeah, so indeed. So that's why, for many years, if you look at the proton, neutron, and the pion, they're very tightly bound. And so even though there were models about the quarks inside there-- so it's not quite-- yeah, but in the c case, c are very heavy. And so they form a weakly bound state. And so in the sense that you're looking at the-- yeah, so here, it's-- in a sense, you can look at the C-quark, because it's a very weakly bound state of the c and C-quark. Other questions? Yes. AUDIENCE: Does weakly bound imply it's usually larger? PROFESSOR: Yeah. 
Weakly bound just means you can probe the internal structure much more easily. AUDIENCE: If an electron collider is much cleaner than a hadron collider, then why is the most important machine right now a hadron collider? PROFESSOR: Yeah, that's a good question. It's because an electron collider is more difficult: you have to build it as a straight line, because electrons moving in a circle have much stronger synchrotron radiation, and so lose a lot of energy. So an electron collider should be built as a straight line. But in a straight line you cannot reach very high energy without a very long distance. That's why people built proton colliders instead-- it's easier to get to higher energy. But for the next generation of accelerators after the LHC, people are talking about building an electron collider again. Good. So now let's try to calculate this. Essentially we need to calculate this diagram, and this diagram is very easy. We need to calculate the cross-section corresponding to it. So let me put labels. This is e minus, this is e plus, this would be b, and this will be b bar. And let me label the momenta. Here, let's take p1, r1; and then this would be p2, r2 bar-- the bar to tell you this is an antiparticle. And this one is k1, s1, and this one is k2, s2 bar. And so the amplitude for this will be M. So now, suppose we consider the unpolarized process. As we discussed last time, the cross-section d sigma / d omega should involve the spins as follows: we should sum over the polarizations of the final states, because we don't measure individual polarizations, but we should average over the polarizations of the initial states.
So the result, as we discussed at the end of last lecture, is that you take one quarter, coming from averaging over the spins of the two initial particles, and then just sum over the spins of all the particles. So let's first compute M. So M is given by-- we just follow our rules. You follow the fermionic line. Here there's a vertex, so here there's a gamma; each vertex gives you minus i e gamma mu. So if you follow this line, we get v bar (r2, p2), minus i e gamma mu, u (r1, p1). So this comes from this fermionic line. And for the other fermionic line, we start from here, so this would be the u bar. Also, we have the propagator; let me just write down the propagator for the photon, taking the xi equal to 1 (Feynman) gauge. And then, following that line, there's a vertex gamma nu, and we get u bar (s1, k1), minus i e gamma nu, v (s2, k2). So this is the full amplitude. And this is a number-- this is a row vector, this is a matrix, and this is a column vector, so this is a number; and similarly the other factor is a number. And then the mu nu indices are contracted with the propagator of the photon. Actually, if I want to be precise, there's a factor of i here, but this factor of i is not important. And q is the momentum of the photon here, so q should be the sum of the initial momenta: q equals p1 plus p2. This also means that q squared is equal to minus s-- remember, s is defined to be minus (p1 plus p2) squared. So now we can write this amplitude using shorthand notation. If I suppress all the momentum dependence, this is equal to minus i e squared divided by s-- after cancelling all the i's-- times v bar (r2) gamma mu u (r1), times u bar (s1) gamma mu v (s2), with the two mu indices contracted.
And now what we need to do is square it and sum over all the spins, and then plug the resulting expression into the formula for the cross-section we derived last time. The whole calculation is a little bit tedious, but I think we should go through at least one such calculation, because this is the prime example of a QED calculation. Each of us should see it at least once in our life. So we will do it explicitly here. AUDIENCE: The final line needs a metric. PROFESSOR: You are right, but I can take care of that by just putting a gamma mu with a lower index here. AUDIENCE: Do we need to keep the i epsilon? PROFESSOR: Oh, OK. Here, because q squared is never equal to 0, we don't need to care about the i epsilon. Good. So now let's do the square. To take the modulus squared, we need to multiply this by its own complex conjugate. Squaring the prefactor, we get e to the power 4 divided by s squared. The complex conjugate of the first factor is u bar (r1) gamma nu v (r2), and the complex conjugate of the second is v bar (s2) gamma nu u (s1). So altogether we have v bar (r2) gamma mu u (r1), u bar (s1) gamma mu v (s2), times u bar (r1) gamma nu v (r2), v bar (s2) gamma nu u (s1). So this is the original copy, with its mu contracted with this mu, and this is the complex conjugate, with its nu contracted with this nu. Good? So this looks like a big mess. But it turns out there's a series of beautiful tricks we can use to calculate this quantity.
So naively, you say, oh, to calculate this thing, I have to plug in the individual wave functions for the v's and u's we worked out before. But fortunately, we don't need to do that; we can go through a series of tricks instead. The first trick: this is a number, and since it's a number, I can put a trace around it, because the trace of a number is just the number. And you remember the trace has the cyclic property, so I can move this spinor to the other end so that the two spinors come together. So now I can write it as e to the 4 over s squared, times the trace of v (r2) v bar (r2) gamma mu u (r1) u bar (r1) gamma nu, times-- similarly, moving u (s1) to the other side-- the trace of u (s1) u bar (s1) gamma mu v (s2) v bar (s2) gamma nu. The reason I do that: in this form, a column times a row is a matrix. So each trace is a matrix times a matrix times a matrix times a matrix, and then you take the trace and you get a number. And then we need to sum over all the spins-- remember, summing over all the spins means summing over all possible r2, r1, s1, and s2. So now we can use a trick from when we discussed the Dirac spinors. Recall, for on-shell spinors-- meaning they satisfy the equation of motion; these are all the eigenspinors-- they satisfy the following: if you sum over r of u (r, p) u bar (r, p), that gives you (i p slash plus m).
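These completeness relations are easy to check numerically. A minimal sketch, assuming the common mostly-minus metric diag(+,-,-,-), in which the spin sums read sum_r u u-bar = p-slash + m and sum_r v v-bar = p-slash - m (the lecture's factors of i reflect a different metric convention, so no i's appear here):

```python
import numpy as np

# Dirac representation, metric eta = diag(+,-,-,-); hbar = c = 1
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]
g0 = np.block([[I2, Z2], [Z2, -I2]])
gk = [np.block([[Z2, s], [-s, Z2]]) for s in sig]

def slash(p):
    # p-slash = gamma^mu p_mu = gamma^0 p^0 - gamma . p
    return p[0] * g0 - sum(p[i + 1] * gk[i] for i in range(3))

m = 0.7                                   # illustrative mass
pvec = np.array([0.3, -0.2, 0.5])         # illustrative 3-momentum
E = np.sqrt(m**2 + pvec @ pvec)
p = np.array([E, *pvec])
sp = sum(pvec[i] * sig[i] for i in range(3))   # sigma . p

def u(xi):
    # positive-energy eigenspinor, normalized so that ubar u = 2m
    return np.concatenate([np.sqrt(E + m) * xi, (sp @ xi) / np.sqrt(E + m)])

def v(xi):
    # negative-energy eigenspinor with the same normalization
    return np.concatenate([(sp @ xi) / np.sqrt(E + m), np.sqrt(E + m) * xi])

xis = [np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)]
usum = sum(np.outer(u(xi), u(xi).conj() @ g0) for xi in xis)  # sum_r u ubar
vsum = sum(np.outer(v(xi), v(xi).conj() @ g0) for xi in xis)  # sum_r v vbar

assert np.allclose(usum, slash(p) + m * np.eye(4))
assert np.allclose(vsum, slash(p) - m * np.eye(4))
print("spin-sum completeness relations verified")
```

The check works for any on-shell momentum, which is the point of the trick: the spin sums replace explicit spinor wave functions by the matrices p-slash plus or minus m.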
And when you sum over v of v (r, k) v bar (r, k), you get (minus i k slash plus m). So now, if we sum over spins, we have exactly these combinations, and we can use those formulas. So we find: one quarter, summed over all the indices r1, r2, s1, s2, of M squared equals e to the 4 divided by 4 s squared, and then we just apply those formulas in here. We get the trace of (minus i p2 slash plus m)-- so m is the mass of the initial particles, and I take m prime as the mass of the final particles: the e plus and e minus have mass m, and b, b bar have mass m prime-- times gamma mu, times (i p1 slash plus m), times gamma nu. And then times the trace of (i k1 slash plus m prime) gamma mu (minus i k2 slash plus m prime) gamma nu. So the spin sum is crucial for this simplification: when I sum over spins, without using explicit expressions for those u's and v's, we can write everything just in terms of p slash and k slash, et cetera. Any questions on this? And now we can evaluate the traces. For that we need various formulas-- I think some of them you have done in your pset, so this is the place where all those exercises become useful. If you look at the structure of these terms: here there's a p slash, so there's one gamma matrix, and there's another gamma matrix here; same on the other side. So at most we can have the product of four gamma matrices inside a trace, and depending on the term, we can have four, three, or two. So for this purpose, we need those formulas. What is the trace of the product of four gamma matrices? It is 4 times (eta mu nu eta lambda rho, minus eta mu lambda eta nu rho, plus eta mu rho eta nu lambda).
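These trace identities can be verified by brute force with explicit 4x4 Dirac matrices. A quick numerical check, using the common mostly-minus metric (the identities take the same form in either signature):

```python
import itertools
import numpy as np

# Dirac matrices (Dirac representation), metric eta = diag(1, -1, -1, -1)
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
gamma = [np.block([[I2, Z2], [Z2, -I2]]),
         np.block([[Z2, sx], [-sx, Z2]]),
         np.block([[Z2, sy], [-sy, Z2]]),
         np.block([[Z2, sz], [-sz, Z2]])]
eta = np.diag([1.0, -1.0, -1.0, -1.0])

# Tr[g^mu g^nu] = 4 eta^{mu nu}
for mu, nu in itertools.product(range(4), repeat=2):
    assert np.isclose(np.trace(gamma[mu] @ gamma[nu]), 4 * eta[mu, nu])

# the trace of an odd number of gamma matrices vanishes
for mu, nu, lam in itertools.product(range(4), repeat=3):
    assert np.isclose(np.trace(gamma[mu] @ gamma[nu] @ gamma[lam]), 0)

# Tr[g^mu g^nu g^lam g^rho]
#   = 4 (eta^{mu nu} eta^{lam rho} - eta^{mu lam} eta^{nu rho}
#        + eta^{mu rho} eta^{nu lam})
for mu, nu, lam, rho in itertools.product(range(4), repeat=4):
    lhs = np.trace(gamma[mu] @ gamma[nu] @ gamma[lam] @ gamma[rho])
    rhs = 4 * (eta[mu, nu] * eta[lam, rho] - eta[mu, lam] * eta[nu, rho]
               + eta[mu, rho] * eta[nu, lam])
    assert np.isclose(lhs, rhs)
print("all gamma-matrix trace identities verified")
```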
And if you have three gamma matrices, do you know the answer? AUDIENCE: Zero. PROFESSOR: Yes. Good. And if I have two gamma matrices, do you know what this is? AUDIENCE: 4 eta mu nu. PROFESSOR: That's right, it's 4 eta mu nu. So you can just apply those formulas here. We will not do the calculation explicitly-- you can just plug them in; it's mechanical. And then you find that 1/4 times the sum over spins of M squared can be written as 8 e to the 4 over s squared, times: (k1 dot p1)(k2 dot p2), plus (p1 dot k2)(p2 dot k1), minus m squared (k1 dot k2), minus m prime squared (p1 dot p2), plus 2 m squared m prime squared. So we discussed before that the amplitude squared must be a Lorentz scalar, and you see it explicitly here: it only involves m squared, m prime squared, or dot products of the initial and final momenta. And you see all kinds of combinations of the initial and final momenta appearing here. Now, as we mentioned before, if you look at all these different dot products between initial and final momenta-- because of momentum conservation, there are actually only two independent Lorentz invariants. So it is convenient to write this expression in terms of those two Lorentz invariants. Those are two of the s, t, u variables we defined previously; it's often convenient to use s, t, u at the same time. So if you write the dot products in terms of s, t, u, you find, for example, 2 k1 dot p1 equals t minus m squared minus m prime squared, which is the same as 2 k2 dot p2.
And 2 k1 dot p2, which equals 2 k2 dot p1, is equal to u minus m squared minus m prime squared. And for the invariants built from the initial or final momenta alone: minus 2 p1 dot p2 equals s minus 2 m squared, and minus 2 k1 dot k2 equals s minus 2 m prime squared. So you just apply those equations here, and then you can write everything in terms of the s, t, u variables. And then you find that one quarter of the sum over spins of M squared is equal to the following expression: 2 e to the power 4 over s squared, times [(t minus m squared minus m prime squared) squared, plus (u minus m squared minus m prime squared) squared, plus 2 s (m squared plus m prime squared)]. And recall that s plus t plus u equals 2 (m squared plus m prime squared), so you can eliminate one of them. So this concludes the calculation of the scattering amplitude. That's the expression you get, and you see it can be expressed nicely in terms of these s, t, u variables. Any questions on this? Good. So now let's calculate the cross-section. Our goal will be to calculate the total cross-section. For this purpose, let's work in the center of mass frame-- this is the simplest. So remember, in the center of mass frame-- let me check my formula; suddenly I'm not sure what I'm looking for-- d sigma / d omega (cm) is given by M squared times p cm prime, divided by 64 pi squared s p cm. So p cm prime is the momentum of the final particles, and p cm is the momentum of the initial particles in the center of mass frame. So now we also need to find p cm, et cetera, and express the amplitude M in terms of center of mass quantities. So in the center of mass frame, p1 would be (E, p cm) and p2 would be (E, minus p cm). They have the same E because they have the same mass.
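The whole trace computation and the closed-form answer in s, t, u can be cross-checked numerically. A sketch with the charge e set to 1 and illustrative masses, written in the common mostly-minus convention, in which the Mandelstam invariants take the same numerical values as in the lecture's conventions:

```python
import numpy as np

# Dirac matrices, mostly-minus metric (+,-,-,-); the electron charge e is set to 1
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]
g = [np.block([[I2, Z2], [Z2, -I2]])] + [np.block([[Z2, s_], [-s_, Z2]]) for s_ in sig]
eta = np.array([1.0, -1.0, -1.0, -1.0])   # diagonal of the metric
I4 = np.eye(4)

def dot(a, b):                 # Minkowski dot product
    return a[0] * b[0] - a[1:] @ b[1:]

def slash(p):                  # gamma^mu p_mu
    return sum(eta[mu] * p[mu] * g[mu] for mu in range(4))

# center-of-mass kinematics; m (initial) and mp (final) are illustrative masses
m, mp, E, th = 0.2, 0.5, 2.0, 0.7
pc, pcp = np.sqrt(E**2 - m**2), np.sqrt(E**2 - mp**2)
p1 = np.array([E, 0, 0, pc]);  p2 = np.array([E, 0, 0, -pc])
k1 = np.array([E, pcp * np.sin(th), 0, pcp * np.cos(th)])
k2 = np.array([E, -pcp * np.sin(th), 0, -pcp * np.cos(th)])

s = dot(p1 + p2, p1 + p2)
t = dot(k1 - p1, k1 - p1)
u = dot(k1 - p2, k1 - p2)
assert np.isclose(s + t + u, 2 * m**2 + 2 * mp**2)   # Mandelstam identity

# (1/4) sum over spins of |M|^2, from the two traces contracted with the metric
M2 = 0
for mu in range(4):
    for nu in range(4):
        tr1 = np.trace((slash(p2) - m * I4) @ g[mu] @ (slash(p1) + m * I4) @ g[nu])
        tr2 = np.trace((slash(k1) + mp * I4) @ g[mu] @ (slash(k2) - mp * I4) @ g[nu])
        M2 += eta[mu] * eta[nu] * tr1 * tr2
M2 = M2.real / (4 * s**2)

# compare with the closed form in Mandelstam variables
rhs = (2 / s**2) * ((t - m**2 - mp**2)**2 + (u - m**2 - mp**2)**2
                    + 2 * s * (m**2 + mp**2))
assert np.isclose(M2, rhs)
print("spin-averaged |M|^2 matches the s, t, u expression")
```

The same check passes for any masses, energy, and angle, which is a good way to catch sign or factor errors in the trace algebra.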
Similarly, k1 would be (E prime, p cm prime), and k2 would be (E prime, minus p cm prime). So now, from energy conservation, we immediately conclude that E must be equal to E prime: the total initial energy is 2E and the final energy is 2E prime, so E has to equal E prime, and I can forget about E prime. And then E is equal to one half square root of s-- or s equals 4 E squared, because p1 plus p2 is just (2E, 0); you square it and it becomes 4 E squared. And then we find that p cm squared equals s over 4 minus m squared, and p cm prime squared equals s over 4 minus m prime squared. Any questions on this? Yes. AUDIENCE: We know that [INAUDIBLE] first component of your 4 vector, they have some [INAUDIBLE]. But how do you know that it exactly has to split [INAUDIBLE]? PROFESSOR: Sorry? AUDIENCE: How do you know that the final state particles have to have the same energy? PROFESSOR: Because they have the same mass-- they have momenta opposite to each other in the center of mass frame, and they have the same mass. AUDIENCE: So you can allow for your final particles to have different masses and then [INAUDIBLE]. PROFESSOR: Yeah, but in this process we are always considering a particle and its antiparticle. It's an annihilation process; we always create the particle and the antiparticle together. Other questions? OK. So you can similarly express t and u in terms of these center of mass momenta, et cetera. So let me just write down the final answer. You can rewrite 1/4 times the sum over spins of M squared in terms of the center of mass quantities.
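The center-of-mass kinematics above can be summarized in a few lines; the masses here are the approximate electron and muon masses in GeV, and the value of s is illustrative:

```python
import numpy as np

# CM-frame kinematics for e+ e- -> b b-bar: each beam has energy E = sqrt(s)/2
m, mp = 0.000511, 0.1057      # initial (electron) and final (muon) masses, GeV
s = 1.0                       # (total CM energy)^2 in GeV^2, illustrative
E = np.sqrt(s) / 2
pcm = np.sqrt(s / 4 - m**2)   # magnitude of the initial-state momentum
pcmp = np.sqrt(s / 4 - mp**2) # magnitude of the final-state momentum

# each particle is on shell, and all four energies equal E
assert np.isclose(np.sqrt(pcm**2 + m**2), E)
assert np.isclose(np.sqrt(pcmp**2 + mp**2), E)
assert np.isclose(s, (2 * E)**2)
print("CM kinematics consistent")
```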
And then you find it's equal to e to the power 4, times [1, plus (m squared plus m prime squared) over E squared, plus (p cm squared p cm prime squared over E to the 4th) cosine squared theta]. And the theta is-- so you have the initial particle with momentum p cm and the final particle with momentum p cm prime, and the angle between them is theta. The theta comes in when you express t and u. So now we have everything: p cm is in terms of s, and the amplitude is expressed in terms of s (E is also given by s) and theta. So you just plug the whole thing in here, and then you find the total cross-section: you integrate d sigma / d omega (cm) with sine theta d theta d phi. Nothing here depends on phi, so the phi integral can be done trivially, and then you just need to do the theta integral. So again, let me just write down the final answer. In the end, you get: 4 pi alpha squared divided by 3 s-- the alpha is e squared divided by 4 pi, the fine-structure constant-- times the square root of (1 minus m prime squared over E squared) divided by (1 minus m squared over E squared), and the whole thing times (1 plus m squared over 2 E squared) times (1 plus m prime squared over 2 E squared). So this is the final answer, and this is the answer you can compare with experiments. Any questions? So now, if you look at this answer, it's almost completely symmetric in m and m prime, except for this square-root ratio, where the m factor is downstairs and the m prime factor is upstairs. So this factor is key.
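The final cross-section formula is easy to code up and check in its two limits: it should vanish at threshold and approach 4 pi alpha^2 / 3s at high energy. A sketch in natural units (sigma comes out in units of 1/energy^2):

```python
import numpy as np

ALPHA = 1 / 137.036   # fine-structure constant

def sigma_total(s, m, mp, alpha=ALPHA):
    """Total cross-section for e+ e- -> f f-bar via one photon,
    initial mass m, final mass mp (natural units)."""
    E2 = s / 4
    if E2 <= mp**2:
        return 0.0            # below threshold: no phase space
    flux = np.sqrt((1 - mp**2 / E2) / (1 - m**2 / E2))   # p_cm' / p_cm
    return (4 * np.pi * alpha**2 / (3 * s)) * flux \
        * (1 + m**2 / (2 * E2)) * (1 + mp**2 / (2 * E2))

m_e, m_mu = 0.000511, 0.1057   # GeV
# vanishes at threshold E = m_mu ...
assert sigma_total((2 * m_mu)**2, m_e, m_mu) == 0.0
# ... and approaches 4 pi alpha^2 / 3s at high energy
s_big = 1.0e4
assert np.isclose(sigma_total(s_big, m_e, m_mu),
                  4 * np.pi * ALPHA**2 / (3 * s_big), rtol=1e-4)
print("threshold and high-energy limits check out")
```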
You have to have such a factor upstairs because E has to be greater than m prime in order to produce the particles. If your energy is not big enough-- in order to produce a b and a b bar, each E has to be greater than m prime, because your total energy 2E must be greater than 2 m prime. And so you see that when E equals m prime, this factor becomes 0, and then the cross-section becomes 0; for E smaller than m prime there is no production at all. Any questions on this? OK, so now let's consider some specific cases. Let's consider e plus e minus going to muons. The muon is the next massive lepton after the electron. So in this case, m squared will be m e squared, and m prime squared will be m mu squared. And the muon mass-- actually, I didn't quote the number-- the muon mass is 207 times the mass of the electron. So does anybody remember the electron mass? AUDIENCE: [INAUDIBLE]? PROFESSOR: Hmm? AUDIENCE: 0.511 MeV. PROFESSOR: Yeah, that's right. It's 511 keV-- 0.511 MeV. And the muon is 207 times the mass of the electron. So if you want to create muons, the initial total energy-- square root of s, which is 2E-- has to be greater than 2 m mu. That means E has to be greater than m mu, which is much, much larger than the electron mass-- about 200 times larger than m e. So that means, for all practical purposes, you can just neglect the electron mass. This ratio is tiny, so we can treat the electron as essentially massless. AUDIENCE: [INAUDIBLE] the denominator, and we added those to the [INAUDIBLE]. PROFESSOR: Yeah, then it's just not well defined. AUDIENCE: [INAUDIBLE] makes sense that it would.
PROFESSOR: No, you don't even-- yeah, in this case you don't even get there. Let me think a little bit. Right. So for this case, of course, it's not well defined. But in the opposite case-- imagine you collide two very massive particles to create a lighter particle-- yeah, actually, I don't remember the answer off the top of my head. It's a good question; I will try to find out afterwards. Any other questions? OK, good. So when the muon mass is much larger than the electron mass, we can set m e essentially equal to 0, and in this case the formula becomes simpler: the denominator factor is just 1, and this factor also becomes 1. So then there are two interesting regimes. The first is just near the threshold. Near the threshold means that p cm prime is much, much smaller than m mu. In this regime, E is approximately just m mu, and s is approximately just 4 m mu squared. So now if you look at that expression, you find that the differential cross-section is equal to alpha squared over 8 m mu squared, times p cm prime divided by m mu, plus higher-order corrections-- you expand in p cm prime over m mu. And the total cross-section is just pi alpha squared over 2 m mu squared, times p cm prime divided by m mu. So here you see explicitly that reaching the threshold corresponds to this quantity going to 0, and when this quantity goes to 0, the cross-section goes to 0. Another regime of interest is the ultra-relativistic regime.
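The threshold expansion can be checked against the full formula (with the electron mass set to zero). A quick numerical comparison:

```python
import numpy as np

ALPHA = 1 / 137.036
m_mu = 0.1057   # GeV

def sigma_full(s):
    """Full result with massless electrons:
    (4 pi a^2 / 3s) * sqrt(1 - mmu^2/E^2) * (1 + mmu^2/(2 E^2))."""
    E2 = s / 4
    return (4 * np.pi * ALPHA**2 / (3 * s)) \
        * np.sqrt(1 - m_mu**2 / E2) * (1 + m_mu**2 / (2 * E2))

def sigma_threshold(pcmp):
    """Leading behaviour just above threshold:
    (pi a^2 / 2 mmu^2) * (p_cm' / mmu)."""
    return np.pi * ALPHA**2 / (2 * m_mu**2) * pcmp / m_mu

pcmp = 1e-3 * m_mu                  # small final-state momentum
s = 4 * (m_mu**2 + pcmp**2)         # E^2 = mmu^2 + p_cm'^2
assert np.isclose(sigma_full(s), sigma_threshold(pcmp), rtol=1e-4)
print("threshold expansion agrees with the full cross-section")
```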
So in this case, we take E to be much greater than both m e and m mu. Then you can essentially set both masses to 0, and you find that d sigma / d omega (cm) is just alpha squared divided by 4s, times (1 plus cosine squared theta). And then you can find the total cross-section: 4 pi alpha squared divided by 3s. So now, this final formula is very simple. You could have written it down without doing any calculation-- of course, up to the prefactor-- just on dimensional grounds. In the ultra-relativistic regime, you can neglect the masses of both the initial particles and the final products. And then in this whole process, if the initial particles are massless and the final particles are massless, the only scale is s. That's the only scale. And the cross-section should have dimensions of area, so it must go like 1 over s. And the alpha squared just comes from the two vertices of a tree-level process. Good? So if you draw the plot-- often one plots s times the total cross-section as a function of square root of s-- then the threshold is at 2 m mu. There the curve goes to 0 like a square root, and then it approaches a constant, 4 pi alpha squared divided by 3. So you have a curve like this. So now let's use this process to show that the quarks actually have three colors. For this purpose, let's consider e plus plus e minus goes to hadrons. So this seemingly is a very complicated process.
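The massless-limit total cross-section follows from integrating (1 + cos^2 theta) over the solid angle, which gives 16 pi / 3. A numerical check of the angular integral, using a simple midpoint rule:

```python
import numpy as np

# d sigma / d Omega = (alpha^2 / 4s) (1 + cos^2 theta); integrate over angles
alpha, s = 1 / 137.036, 100.0        # illustrative value of s

N = 200000
th = (np.arange(N) + 0.5) * np.pi / N          # midpoint rule in theta
f = (1 + np.cos(th)**2) * np.sin(th)           # includes the d(cos theta) Jacobian
sigma = 2 * np.pi * (alpha**2 / (4 * s)) * f.sum() * (np.pi / N)

# should reproduce 4 pi alpha^2 / 3s
assert np.isclose(sigma, 4 * np.pi * alpha**2 / (3 * s), rtol=1e-6)
print("angular integral gives 4 pi alpha^2 / 3s")
```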
It's because-- suppose with e plus e minus you create a quark and an antiquark pair. But remember, the quarks are confined: we don't observe individual quarks, because QCD is strongly interacting, and so the quarks are confined inside protons, pions, et cetera. So in this process, even though you originally created a quark and an antiquark, each individual quark and antiquark cannot survive on its own. Their presence will polarize the vacuum and create many other particles. So what happens is that each of them creates lots of particles along with it. In the end, it's a complicated final state: you observe many hadrons. And in the detector, we don't directly observe those quarks and antiquarks; we just observe the hadrons. So then you say, how do we have any hope of understanding this process just by measuring this very complicated final state? The final state can contain protons, neutrons, pions, and other things-- a very complicated process. But now, the key is the following. Indeed, the question of going to each individual hadron state is complicated. But if you consider the so-called inclusive process, which goes to all possible hadrons, then that probability is just given by the total cross-section. Remember, the total cross-section just measures the total probability to create such particles. When the quark and antiquark are created, they go through a complicated process-- but that process is unitary, so it goes with total probability 1 into all possible final products. So the inclusive cross-section, going into all possible final products, is captured by the total cross-section for creating the quark pair.
So the total cross-section for e plus e minus going to q, q bar, taken inclusively over all hadronic products-- this way, we can actually measure this process indirectly, even though you don't directly measure the quarks. So now let's calculate the total cross-section. We also don't get to choose which quark is produced, so we need to sum the cross-sections for e plus e minus going to the different kinds of quarks; i labels the different types of quarks. For each of them, we can just use our formula. The only difference-- the same diagram applies-- is that the quark has a different charge from the electron. So instead of the electron's vertex factor, we should use the charge of the quark: minus i q_i e gamma mu, where q_i is the charge of the quark in units of the electron charge. So then what do we get? With q_i the charge of quark i, the total cross-section to go to all hadrons is given by: n_c, the number of colors for each quark, times the sum over i-- where i is now the flavor of the quark, whether it's the up quark, down quark, charm quark, et cetera-- of q_i squared, times sigma of e plus e minus to mu plus mu minus. Everything else is the same: for each type of quark, it's exactly the same calculation; you just replace i e at this vertex by i q_i e, and then you get the q_i squared. Good? Yes? AUDIENCE: So you talked about how when the quarks hadronize they polarize the vacuum. PROFESSOR: Yeah. AUDIENCE: [INAUDIBLE] very small [INAUDIBLE] calculate out every possible [INAUDIBLE]? PROFESSOR: No, that's a very complicated process. We cannot. Yeah, that's a very complicated process.
It's a huge field to study that, yeah. AUDIENCE: But then how do you account for all possible outputs? You're probably accounting for all possible quarks. PROFESSOR: No, you just measure all possible products, right? AUDIENCE: But here, you're just summing over the probability of the flavors and the [INAUDIBLE] PROFESSOR: No, that's the only thing you can produce. AUDIENCE: And then you're not worrying about the final states that happen from these quarks? PROFESSOR: No. As I said earlier, we only need to worry about this process. The subsequent process into hadrons is complicated, but it happens with probability 1. So if we calculate this probability, then we know the total probability for the whole thing. Yes. AUDIENCE: [INAUDIBLE] PROFESSOR: Yeah, indeed, that can happen, but this is the leading process-- this is the leading approximation. Other questions? OK, so now the answer is just given by this formula: you multiply q_i squared by the total cross-section for e plus e minus to muons, sum over all the flavors, and multiply by the number of colors-- because all the different colors have the same charge. So now we can define a ratio. The muon cross-section is the same for everybody, so it just factorizes. We define a quantity called R: sigma of e plus e minus to hadrons, divided by sigma of e plus e minus to mu plus mu minus. And this is just given by n_c times the sum over flavors i of q_i squared. And of course you should only sum over the i which are allowed by your energy: quarks which are too heavy will not be created at your energy, so you do not include them in the sum. So the sum is over flavors of quarks allowed by the initial energy.
If the quarks are too heavy, then of course you cannot create them. So now let's list the quarks we know. We have the up quark, down quark, strange quark, charm quark, bottom quark; and then there's the top quark, but the top quark is very heavy and its treatment would be complicated, so let's not worry about it. So the charges: u is 2/3, d is minus 1/3, s is minus 1/3, c is 2/3, b is minus 1/3. And then twice the mass-- I'll just give you some rough numbers. For u it's a few MeV, and for d again a few MeV. The strange is heavier: twice its mass is about 190 MeV. For the charm, the threshold is about 3.1 GeV, and the threshold for the bottom quark is about 9.5 GeV. So now let's consider the value of R as a function of energy. If square root of s is below the charm threshold, say below 3 GeV, we can only produce u, d, s. In that case R equals n_c times-- there's one charge of 2/3 and two of minus 1/3-- (2/3) squared plus twice (minus 1/3) squared, which gives n_c times 2/3. The others you cannot create. For square root of s between about 3 GeV and 9.5 GeV, the threshold for producing the bottom quark, you can now also create the charm quark, so you add another (2/3) squared, and R becomes n_c times 10/9. And then for square root of s greater than 9.5 GeV, you can create the bottom as well; you add another (1/3) squared, and R equals n_c times 11/9. So now, if you plot R as a function of square root of s as measured by experiment, you should see a threshold around 3 GeV, another threshold around 9.5 or 10 GeV, and then you see something like this.
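The three plateau values of R can be reproduced with a few lines. The light-quark thresholds below are rough placeholders; only the charm and bottom thresholds matter for the three regimes discussed here:

```python
from fractions import Fraction

NC = 3   # number of colors
# quark electric charges (in units of e) and rough pair-production thresholds (GeV)
quarks = {"u": (Fraction(2, 3), 0.01), "d": (Fraction(-1, 3), 0.01),
          "s": (Fraction(-1, 3), 0.19), "c": (Fraction(2, 3), 3.1),
          "b": (Fraction(-1, 3), 9.5)}

def R(sqrt_s):
    """R = sigma(e+e- -> hadrons) / sigma(e+e- -> mu+mu-) at leading order,
    summing only flavors allowed by the available energy."""
    return NC * sum(q**2 for q, thr in quarks.values() if sqrt_s > thr)

assert R(2.0) == 2                  # u, d, s:  3 * (4/9 + 1/9 + 1/9) = 2
assert R(5.0) == Fraction(10, 3)    # + charm:  3 * 10/9
assert R(20.0) == Fraction(11, 3)   # + bottom: 3 * 11/9
print("R plateaus:", R(2.0), R(5.0), R(20.0))
```

Comparing these plateaus with the measured values is what fixes the number of colors to 3.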
Yeah, I'm just drawing roughly. You have a plateau, then another threshold, and then another threshold. So it turns out the first plateau is precisely at 2-- precisely, up to error bars. The second one is a little bit more than 3, about 10/3. And the third one, 11/3, is around here. Anyway, so if you compare with these numbers, you conclude that n_c is actually exactly 3-- n_c is 3, to be compatible with this experiment. Any questions on this? Yes? AUDIENCE: [INAUDIBLE] Does this mean that this is the dominant process for electron-positron? PROFESSOR: Yeah, going to hadrons, yeah. Other questions? OK, good. So even though this is a very simple process, it actually has very important-- in fact, quite profound-- physical implications. So we only have a few minutes left, so let's say a few words about the next topic. Let's again consider this process of e plus e minus going to muons. So we have this diagram. Suppose this is p1, p2, and k1 and k2. When we draw this diagram, we take time as going up; these are the initial states, and these are the final states. Now, if we view this diagram sideways-- say we view it in this direction, taking time as going to the right-- then this becomes an initial state, and this is still an initial state. Let me draw the arrows this way, so that this is the mu minus and this is the mu plus. And now let's view it this way.
And now, look: you have an initial electron, and then you have an initial muon, OK? And then this becomes the process e minus plus mu minus going to e minus plus mu minus, because this initial antiparticle, when viewed sideways with its arrow reversed, becomes a final-state particle going out. And then the final state is the electron and the muon. And then you get the process (b). So when you view it sideways, you get a diagram like this. If I draw it using our convention-- again, time going up-- then it looks like that. Now this is e minus, e minus, mu minus, and mu minus. So now you wonder whether there's some relation between these two processes-- some relation obtained by just exchanging an initial state with a final state. So it turns out there's actually a very nice relation between them. If you know one of them, it can help you deduce the other. But we are running out of time, so we will do it next time.
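The relation hinted at here is crossing symmetry. Schematically, in terms of the Mandelstam variables for the original process (a standard statement, not derived in the lecture, and stated up to analytic continuation and spinor conventions):

```latex
s = (p_1 + p_2)^2, \qquad t = (p_1 - k_1)^2,
\qquad
\mathcal{M}_{\,e^-\mu^- \to e^-\mu^-}(s, t)
\;\sim\;
\mathcal{M}_{\,e^+ e^- \to \mu^+ \mu^-}\big(s \leftrightarrow t\big).
```

The s-channel photon exchange of e+ e- annihilation becomes the t-channel photon exchange of e- mu- scattering when the diagram is read sideways.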
MIT 8.323 Relativistic Quantum Field Theory I, Spring 2023. Lecture 3: Why Quantum Field Theory? [SQUEAKING] [RUSTLING] [CLICKING] HONG LIU: Good. Yeah. So last lecture, we discussed Noether's theorem: for every continuous symmetry, there's a conserved current. And then we also started talking about relativistic quantum mechanics-- how we want to unify special relativity with quantum mechanics. So the most immediate idea for that is what's called relativistic quantum mechanics, the most immediate generalization of the Schrodinger equation. So at the end of last lecture, we talked about this: if you have E equal to p squared divided by 2m, then you go to the non-relativistic quantum mechanics Schrodinger equation. And now, if you have E squared equal to p squared plus m squared for a relativistic particle, then you get what's called the Klein-Gordon equation. And again, the psi here has the interpretation of a wave function. So as a generalization of this, this is meant to describe the quantum mechanics of a relativistic free particle, say, of mass m. So here, psi(t, x) is the wave function of a relativistic particle of mass m. And we also noticed that this equation is actually the same as the simplest field theory equation. So we also talked about the simplest classical scalar field theory. For this theory you can write down the action of this form. So this is the simplest relativistically invariant theory you can write down. And the equation of motion of this-- you view this as a classical field-- has exactly the same form as this equation. But now, here, phi, again, is a function of t, x.
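Written out (with hbar = c = 1), the substitution E goes to i d/dt, p goes to -i grad in the two dispersion relations gives the two equations being compared:

```latex
E = \frac{p^2}{2m}
\;\longrightarrow\;
i\,\partial_t \psi = -\frac{1}{2m}\,\nabla^2 \psi
\quad \text{(Schr\"odinger)},
\qquad
E^2 = p^2 + m^2
\;\longrightarrow\;
\left(\partial_t^2 - \nabla^2 + m^2\right)\psi(t,\vec{x}) = 0
\quad \text{(Klein--Gordon)}.
```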
But it has a completely different physical interpretation. Here, this is a classical field. OK. So in this case, the interpretation of the x here and here is very different. Not only are the physical interpretations of phi and psi different; the physical interpretation of x is also different. Here, x is just a label-- a label for the location in space at which we define this field. But here, the x is the eigenvalue of the position operator for this relativistic particle. So they have very different physical interpretations. And so let me just label this equation by 1 and label this by 2 and this by 2 prime. So we also mentioned that this one-- the interpretation of this as the wave equation for relativistic quantum mechanics-- has a number of difficulties. The first is that, as you will show in your pset 2, there's no sensible way to define a positive definite probability density. If you want to interpret this as a wave equation, then you must have a probability density, because in quantum mechanics, probability should be conserved. The second difficulty is that there are negative energy states because of the square: when you take the square root, you also get a minus sign, and then there are negative energy states, which you cannot avoid in quantum mechanics even though classically you can just throw them away by hand. And the third thing we mentioned at the end: a relativistic wave equation can only describe a fixed number of particles. The particle number cannot change. So this equation describes a single particle. And if you want to describe two particles, then you need to write down a separate equation for a different wave function-- the two-particle wave function will be like this, et cetera. OK.
But this does not really make sense in a relativistic system, because we know that in a relativistic system, E equals mc squared-- if you have enough energy, then you should be able to create particles. And that means the number of particles in a given process is not conserved. So if you want to use your quantum mechanics to describe such a process, you cannot have a formalism in which the number of particles is fixed. So this is actually the most fundamental difficulty: you cannot change the number of particles. And related to this difficulty is the interpretation here. Let me put it as iv: there's an additional, fundamental asymmetry between t and x. Here, in the wave equation, t is just a parameter, which describes the evolution, but the x's are eigenvalues of quantum operators-- say, corresponding to x hat; by putting a hat, we denote the corresponding quantum operator. So x is the eigenvalue of the position operator. And this asymmetry becomes even more pronounced, say, if you consider two particles: you have two x's here, but only one t. OK. So because of those fundamental difficulties-- if you collect these, i to iv-- we conclude that relativistic quantum mechanics, defined in the sense that you write down a wave equation for a wave function, cannot be the fundamental description. Relativistic quantum mechanics just refers to this kind of wave equation. So at most, this can be an approximate description in situations where there's no particle creation or annihilation.
So in cases where your particle number is fixed, you can use this as an approximation, but it cannot be a fundamental description. For example, later we will talk about the fermionic version of this wave equation. This one describes a particle without spin; later we will describe the analogous equation for electrons, for spin half, and that can indeed be used to describe the electron in a hydrogen atom, as long as you don't create new electrons, et cetera. Anyway, so relativistic quantum mechanics can only be considered as some kind of approximate description. But if you want to unify special relativity and quantum mechanics together, it turns out that the right formulation is just quantum field theory. So it turns out that quantum field theory addresses these difficulties. OK. So if we want to describe the quantum mechanics, say, of relativistic particles of mass m, it turns out the proper thing to do-- which is a little bit unintuitive at first sight-- is to start with this classical field theory, which seemingly has nothing to do with relativistic particles, and then quantize it. It turns out, once you treat this theory as a quantum field theory, it becomes a theory of an arbitrary number of relativistic particles of mass m. So that's the non-intuitive part, and that's one of the miracles, say, of field theory: it automatically gives you a formalism for treating an arbitrary number of particles. And also, in field theory, both t and x are parameters-- x only labels your location. So both t and x are parameters, and so you can easily put them on equal footing, to be compatible with special relativity. OK. Good? So any questions on this? OK. So we will see that the right framework is quantum field theory. OK.
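As an aside, difficulty (i) above can be made concrete (this is the standard pset-style computation, not done in the lecture). The Klein-Gordon equation has a conserved current

```latex
j^{\mu} = i\left(\psi^{*}\,\partial^{\mu}\psi - \psi\,\partial^{\mu}\psi^{*}\right),
\qquad
\partial_{\mu} j^{\mu} = 0,
\qquad
j^{0} = i\left(\psi^{*}\,\partial_t \psi - \psi\,\partial_t \psi^{*}\right).
```

Since the equation is second order in time, psi and its time derivative are independent initial data, so j^0 can take either sign and cannot serve as a probability density.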
So finally, as a last motivation for quantum field theory, let me quickly describe the last path: field theory can also arise as the limit of discrete systems. And this is the most relevant for condensed matter physics. So let's consider, say, an 8.03 example. Let's imagine you have a number of atoms, say, on a chain, connected by springs between them. Yeah, consider this to be infinite. And the spacing between them, say, is a. So the atoms are fixed on some lattice points, and the lattice spacing is a. So we can label each particle by its position. For example, this is x0, this is x1, this is x2, et cetera. And the location of the n-th particle is xn. And we can also introduce the deviation of each particle from its equilibrium position. Let's call it eta n. So now let's consider the dynamics of eta n for this theory. So this is just the deviation of the n-th particle from its equilibrium position; eta n equal to 0 is the equilibrium position. So now, if you write down the Lagrangian for this system, we can easily do it: you just write T minus V. T is the kinetic energy and V is the potential energy. So we can just write it as a sum over n, over all particles. And then let's assume they all have the same mass; let's write 1/2 mu eta n dot squared. So this is the kinetic term, and mu is the mass of each particle. And then the potential-- there is an interaction, because neighboring particles are connected by a spring, so there is a harmonic force between neighboring particles. And let's imagine there's also a harmonic potential which traps each particle at its own location. So this is a very simple spring-and-particle problem, which you encounter, say, in 8.03. Is this problem clear? OK.
I assume most of you have seen this problem before. Your task in 8.03 is to find the normal modes, say, of this system. And in 8.03, you also saw that we can take the a goes to zero limit. So if the lattice spacing is very small, and if we are only interested in the behavior of the system at very large distances-- say, distances much larger than a-- then you can essentially treat this system as a continuum. You don't have to resolve individual particles. So we take the a goes to zero limit, and we can treat the chain as a continuum of particles. And each eta n of t, you replace by eta of x, t. So x labels the position, and t describes the dynamics. So eta is the deviation at the location x, and it depends on t. And then the sum over n in the Lagrangian we can replace by an integral over dx. Now you just treat this as a one-dimensional continuum. But of course, there's a lattice spacing here, so the infinitesimal element is a: a times the sum over n, you can replace by the integral over dx. And now you can just write this Lagrangian in terms of a continuum theory, so let's just do it. Yeah, let me just write one more step. So you can write it as a sum over n times a-- we take the factor of a out, because that's what changes the sum into an integration-- and then you have 1/2 mu divided by a, eta n dot squared, minus 1/2 lambda a times the neighbor difference over a, squared. So I just slightly rewrote this Lagrangian so that it's easy to take the continuum limit. We have taken the factor of a out; and for this term, because it concerns the difference between neighbors, we divide the difference by a inside the square and compensate with a factor of a on lambda, besides the a in the front. And now, in the continuum limit, you can just replace this by an integral.
And now I can just write it as 1/2 mu tilde eta dot squared-- eta n is just replaced by eta of x, t-- and here, let me call it lambda tilde, partial x eta squared. So the difference term is replaced by the derivative of eta. And then this last term becomes 1/2 sigma tilde eta squared. And mu tilde, of course, is mu divided by a, lambda tilde is lambda times a, and sigma tilde is sigma divided by a. So the continuum limit is that these tilde quantities are held fixed. And then we have a continuum Lagrangian-- a classical field theory. And this theory is essentially the same as that theory. If you take this factor of mu tilde out in front-- so it's just an overall factor-- then here is lambda tilde divided by mu tilde; let's call it v squared. And this becomes sigma tilde divided by mu tilde; let's call it m squared. So v squared is equal to lambda tilde divided by mu tilde, and m squared is equal to sigma tilde divided by mu tilde. And then this is essentially identical to that theory when v is equal to 1-- when v equals 1, it becomes the same as equation 2. So v equal to 1 corresponds to the relativistic case, with v the speed of light; but in general, for other values of v, this is a non-relativistic field theory. So even though this example is very simple, this is actually a very general way we can treat many condensed matter systems, which often involve a lattice-- say, a solid, where you can imagine all the atoms are on a lattice, et cetera. And if you're only interested in the macroscopic behavior, then you can treat the solid as a continuum.
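Collecting the steps above, the discrete Lagrangian and its continuum limit read (with the same tilde definitions as in the lecture):

```latex
L = \sum_n \left[ \tfrac{1}{2}\mu\,\dot\eta_n^2
      - \tfrac{1}{2}\lambda\,(\eta_{n+1}-\eta_n)^2
      - \tfrac{1}{2}\sigma\,\eta_n^2 \right]
  = \sum_n a \left[ \tfrac{1}{2}\tfrac{\mu}{a}\,\dot\eta_n^2
      - \tfrac{1}{2}\lambda a \left(\tfrac{\eta_{n+1}-\eta_n}{a}\right)^{2}
      - \tfrac{1}{2}\tfrac{\sigma}{a}\,\eta_n^2 \right]
```

```latex
\;\xrightarrow{\;a \to 0\;}\;
\int dx \left[ \tfrac{1}{2}\tilde\mu\,\dot\eta^2
      - \tfrac{1}{2}\tilde\lambda\,(\partial_x \eta)^2
      - \tfrac{1}{2}\tilde\sigma\,\eta^2 \right],
\qquad
\tilde\mu = \frac{\mu}{a},\quad \tilde\lambda = \lambda a,\quad \tilde\sigma = \frac{\sigma}{a}.
```

Pulling out the overall factor of mu tilde gives v squared = lambda tilde / mu tilde and m squared = sigma tilde / mu tilde, matching equation 2 when v = 1.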
And now, if you're interested in the quantum mechanics of such a system, then quantum field theory naturally arises. OK. Good. Any questions on this example? Yes? STUDENT: So v going to 1 is the same as lambda and mu being comparable? So what does that, I guess, physically mean? HONG LIU: Sorry? STUDENT: The limit-- like, lambda being the same as mu. What does that physically mean? The strength of-- HONG LIU: Yeah, it just tells you that the relativistic limit is special. It happens at a very special point. STUDENT: But I guess, why is that the relativistic one? To me, lambda is like the strength of your spring, and mu is your mass. Without them being comparable, how does that-- because it's not the same thing as the relativistic-- HONG LIU: Yeah. There's not much more you can read from here. It's just that when you choose some special parameters, then you have a relativistic limit, yeah. Other questions? Yes? STUDENT: Yeah, so you said that you could use this to treat some condensed matter problems? HONG LIU: Yeah. STUDENT: These are all scalars, right? HONG LIU: You can also have tensors or vectors, yeah. STUDENT: OK. So what would you treat with this? Like phonons? HONG LIU: Yeah, for example, you can treat phonons. You can also treat spins-- say, for example, if you have an Ising model, you consider the lattice of spins, and then the average spin you can treat as a scalar field, and again, you can write down the field theory. Yeah.
And actually, the breakthrough in understanding phase transitions in condensed matter physics-- what a phase transition is really about, and how to describe its behavior-- coincided precisely with the development of field theory, and in turn increased our understanding of quantum field theory. Good. Other questions? OK, good. Just to summarize what we have discussed so far: all paths lead to QFT. We have described three paths, but they are pretty general. First, we are often interested in the quantum dynamics of some classical field-- say, the electric and magnetic fields, or the spacetime metric if you are interested in gravity, et cetera. In this case, we already have the classical field theory, but we know the world is quantum, and we want to understand what's the quantum version of it. The second is that it unifies special relativity plus quantum mechanics. So you need field theory to unify them. And the third way is that it's the large-distance description of these discrete systems. OK. So just combining all three elements together, they cover many, many areas of physics. Good. So now we can say a little bit about the plan for the whole semester. So here is the plan-- this is just a rephrasing of the outline. So here is chapter 1. In chapter 2, we discuss the simplest field theory, just to quantize this equation 2-- the theory of 2, with 2 prime as its equation of motion. So in physics, we always start with the simplest example, and that is the one we will start with. What we will see is that, when we quantize that theory 2, we get the theory of free spinless massive particles. OK.
So you say, oh, that's a little bit boring, because in this theory the particles are free-- free means they just don't interact. So then, in chapter 3, we will add interactions. We will introduce interactions and tell you how to treat the interactions between those particles. Then in chapter 4, we go to the real physics. The scalar field is also real-- say, for example, it can be used to describe the Higgs boson. But the Higgs boson maybe is a little bit far from what we normally think about. So in chapter 4, we will go to something which is much closer: we'll talk about the theory of the electron. This is called the Dirac theory. So this theory describes free spin-half particles. So this is a theory of electrons, when we neglect the interactions. And then we move on to the Maxwell theory. So this is the theory of the quantum electric and magnetic fields. When you quantize the Maxwell theory, say, without source-- the vacuum Maxwell theory-- you find you get free, again with no interaction, massless spin-1 particles. The theory of massless spin-1 particles. This is what we call the photon. So this is the quantum of the electromagnetic field. And then-- sorry, this should be chapter 5 now; I think I lost my count. So now, in chapter 6, we combine chapters 4 and 5 together-- combine electrons and the photon, which we normally denote by gamma. Combine the theory of the electron and the photon together, add the interactions between them, and then you get the so-called quantum electrodynamics, QED. So QED is very general. It essentially covers all quantum phenomena apart from, say, weak interactions and strong interactions. If you don't go inside the nucleus or to very high energy, I think it covers essentially most of the physics.
And then-- yeah, that will be the end of this course. So do you have any questions on this? OK. So this is a road map. Yes? STUDENT: Do these chapters correspond to the chapters in the textbook, or just chapters in the lecture notes? HONG LIU: The chapters in the lecture notes, yeah. OK. Good. Other questions? Yes? STUDENT: I'm just curious if gluons are included in the free massless spin-1 particles? HONG LIU: Sorry? STUDENT: Are gluons included in the free massless spin-1 particles? HONG LIU: Yeah. Gluons are also massless spin 1, but gluons actually interact with themselves, so gluons are different. To describe gluons, you have to wait for Quantum Field Theory II. The thing about the photon is that the photon doesn't interact with itself, but the gluons interact with themselves. OK. Yeah, so essentially, we treat everything except gluons. Other questions? OK, good. So now we can just move to chapter 2. Now we are talking about this theory. Actually, I should not erase it. Because this theory describes free particles, we call it a free scalar field theory. So this is the theory we are interested in, and we will describe how to quantize it. Good. So first, we will quickly go through the quantization of the harmonic oscillator, which you should already have done in your pset, so we can do it relatively fast. So, the quantization of the harmonic oscillator in the Heisenberg picture. We will see that once we understand this example in the right way, quantizing this field theory becomes trivial. OK, so let's start with the quantum harmonic oscillator. For simplicity, I take the mass to be 1-- and the frequency to be 1-- yeah, actually, let me keep the frequency omega here. OK. Let's just take the mass to be 1.
So this is the simple harmonic oscillator, which you have seen for maybe most of your intellectual life. So p equals x dot-- the conjugate momentum is x dot. And the Hamiltonian is p squared divided by 2 plus 1/2 omega squared x squared. And the equation of motion is x double dot equal to minus omega squared x. So let's first look at the harmonic oscillator as a classical theory. For the classical theory, we just need to solve this equation. The classical solution is given by x of t equal to A cosine omega t plus B sine omega t, with A and B just some integration constants. And for convenience, I can also write it in complex form: this is equal to a exponential minus i omega t plus a star exponential i omega t, with a some complex constant. Again, it's an integration constant; I just rewrite the integration constants slightly differently. So this is the complete solution of the problem. So now let's go to quantum. When we go to quantum, we replace the classical dynamical variable by the Heisenberg operator, the quantum operator. In particular, in the Heisenberg picture, this operator will depend on time. And now this equation becomes an operator equation. Maybe I should label my equations: let me call this star. So now star becomes the operator equation for x hat. You have exactly the same equation as the classical equation, but now the interpretation is different: now x hat is an operator. So now the solution-- let me call this star star-- still solves that equation, except that these exponentials, as functions of t, are just c-numbers.
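As a sanity check on the classical solution above (a minimal numerical sketch, not part of the lecture; the parameter values are arbitrary), a central finite difference confirms that x(t) = A cos(omega t) + B sin(omega t) satisfies x'' = -omega^2 x:

```python
import math

omega, A, B = 2.0, 1.5, -0.7   # arbitrary frequency and integration constants

def x(t):
    # general classical solution of x'' = -omega^2 x
    return A * math.cos(omega * t) + B * math.sin(omega * t)

def xddot(t, h=1e-4):
    # central-difference approximation to the second derivative
    return (x(t + h) - 2 * x(t) + x(t - h)) / h**2

# residual of the equation of motion should vanish up to discretization error
for t in (0.0, 0.9, 2.3):
    assert abs(xddot(t) + omega**2 * x(t)) < 1e-5
```

The same check works for any A and B, which is why they are genuinely free integration constants.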
But now, quantum mechanically, x becomes an operator-- so now I put a hat on it, and this still solves the operator equation. But the exponentials are c-numbers, and the left-hand side is an operator, so it can only be that the integration constants are operators: a must become an operator, a hat, and the star is replaced by a dagger, a hat dagger. So these are the integration constants for the operator equation-- they become constant operators. They are just constant quantum operators: integration constants for your quantum operator equations. So this is the quantum solution, and this is the form we will often use. You can also use that other form-- they're equivalent-- but this is the form we will often use. You can also, from here, take the derivative and find p. Again, this is an operator equation: you take the derivative of x hat, and then you find p hat, et cetera. It's very easy. So we have already solved the quantum problem, because we have found the full solution to the quantum operator equation-- except that we still need to impose the canonical quantization condition: the commutator of x hat and p hat is equal to i. So if you plug the expressions for x hat of t and p hat of t in here, then you find that the commutator of a and a dagger is equal to 1. So this is your familiar creation and annihilation operator for the harmonic oscillator.
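The commutator claim can be made explicit. With the conventional normalization x hat (t) = (2 omega)^(-1/2) (a hat e^(-i omega t) + a hat dagger e^(i omega t)) -- one common choice; the lecture absorbs this factor into a hat, and conventions vary -- evaluating at t = 0 gives:

```latex
\hat{a} = \sqrt{\frac{\omega}{2}}\left(\hat{x}(0) + \frac{i}{\omega}\,\hat{p}(0)\right),
\qquad
[\hat{x},\hat{p}] = i \;\Longrightarrow\; [\hat{a},\hat{a}^{\dagger}] = 1,
\qquad
H = \frac{\hat{p}^2}{2} + \frac{1}{2}\omega^2 \hat{x}^2
  = \omega\left(\hat{a}^{\dagger}\hat{a} + \tfrac{1}{2}\right).
```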
And now we can also use a to build the Hilbert space, because x and p are expressed in terms of a and a dagger-- essentially, any operator of this theory can be expressed in terms of a and a dagger. So a and a dagger are the fundamental building blocks of your full quantum theory, and you can use them to build the Hilbert space. The Hilbert space is defined by the lowest state, which is annihilated by a, and the higher states are obtained by acting with a dagger on the ground state. So this is your full theory, and now you can compute anything in this theory with just this knowledge. Any questions on this, regarding the harmonic oscillator? Good, OK. So let me just summarize. This is very, very familiar, but let's summarize the rules we have been using-- summarize the steps in the context of the harmonic oscillator. And then the same steps can be used to quantize the field theory. Steps of quantization. So let's make it general. The zeroth step is that the classical equation of motion becomes a quantum operator equation. The first step is to find the most general solution to the classical equation of motion. Then you go to quantum: you promote the integration constants in your classical solution in step 1 to constant quantum operators. Then you have the full time evolution at the quantum level-- now you know how the quantum operator evolves. And then you impose the canonical quantization conditions. That will tell you the commutators among those integration-constant operators, just as we did here.
And then the constant operators in step 2-- now that you also know the commutation relations among them-- can be used to generate the Hilbert space. OK. So these steps are very general, and you can apply them to essentially any system. The harmonic oscillator is one degree of freedom; you can apply them to two degrees of freedom, three degrees of freedom, and also to field theory, an infinite number of degrees of freedom. And now we will apply these to field theory. Yes? STUDENT: Do you get finite-dimensional Hilbert spaces from this procedure? HONG LIU: From this procedure, you cannot, because finite-dimensional Hilbert spaces don't have a classical analog. Here, we start with a classical system, and then we quantize it. Systems with a finite-dimensional Hilbert space are essentially intrinsically quantum. Like spin-- spin is intrinsically quantum, yeah. Yes? STUDENT: This question has to do with spin operators-- the fact that there are no finite-dimensional representations for these a, a dagger operators. HONG LIU: Yeah, the reason is just that they don't have classical counterparts, yeah. Yes? STUDENT: Is it always true that the constant operators are sufficient to generate the entire Hilbert space? HONG LIU: Yeah. That's a very good question. Think about it this way-- let's just look at the harmonic oscillator, and then you can try to generalize it. Because they are the integration constants of x and p, any operator in your theory can be expressed in terms of a and a dagger. And then you must be able to generate the Hilbert space using them, because they are the building blocks of all your operators. Yeah, yeah. STUDENT: How do you get the vacuum state?
Is there always a generalization of the vacuum state? HONG LIU: Yeah. The vacuum state here comes from the energy. Once we solve for x and p, we can write the Hamiltonian in terms of x and p, and then you just look for the lowest energy state-- you find the lowest energy state satisfies this equation. And then from there, you can find the other states. Yeah, the same strategy we are going to use for quantum field theory. OK. Good? OK, good. So now it becomes mechanical; we can just apply this to this theory. And let me add here: the canonical momentum density conjugate to phi-- I called it pi before-- is just the time derivative of phi. And the Hamiltonian density, which you can find explicitly, is 1/2 pi squared plus 1/2 (gradient of phi) squared plus 1/2 m squared phi squared. And then this is the classical equation of motion. So now let's just solve the classical equation of motion. This equation is easy to solve because of the translation symmetry: you can just do a Fourier transform. So 2 prime can be solved using a Fourier transform. We can just write phi of x equal to exponential of minus iEt plus i k dot x. And you can see this provides a basis of solutions to 2 prime-- it's just a plane wave. So now, when you plug this in there, you just get the dispersion relation: E squared should be m squared plus k squared. We'll denote this as omega k squared. So omega k is defined to be just the square root of k squared plus m squared. So when you take the square root for E, you can take plus or minus omega k. For historical reasons, we normally define u k of x to be exponential of minus i omega k t plus i k dot x.
OK, so here we have inserted the positive root for E. This is normally called the positive energy solution, even though this name is a little bit misleading-- later we will see this is not really the energy of a particle. It's just a traditional, conventional name. And then you can take the complex conjugate, u k star, and correspondingly you have a minus omega k in there. So this is called the negative energy solution. So altogether, they form a complete set of solutions: a complete basis is formed by u k and u k star, for all k. These are the complete set of solutions to that wave equation. Any questions on this? Classically, this is just like a plane wave, which you should also have seen in 8.03. Good. So now we can write down the most general solution. This is a basis-- these carry the exponentials with plus or minus i omega k t. So we write down the most general solution by just putting in the integration constants. So, the most general classical solution: phi of x equal to an integral over all possible values of k-- because this holds for all k, we just sum over all of them. And this factor is just for convenience; it's just a convention, you don't have to put it here. And then we have a k u k plus a k star u k star. So this is the most general solution, with a k and a k star as integration constants. This is the full set of integration constants. Good? So now, when you go to the quantum level, we can just follow the rules. We have found the most general classical solution.
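Explicitly, with one common choice for the "convenience" factor mentioned above (normalizations differ across textbooks, so this particular factor is an assumption), the general classical solution is:

```latex
\phi(t,\vec{x}) = \int \frac{d^3 k}{(2\pi)^3}\,\frac{1}{\sqrt{2\omega_k}}
\left( a_{\vec{k}}\, u_{\vec{k}} + a_{\vec{k}}^{*}\, u_{\vec{k}}^{*} \right),
\qquad
u_{\vec{k}} = e^{-i\omega_k t + i\vec{k}\cdot\vec{x}},
\qquad
\omega_k = \sqrt{\vec{k}^2 + m^2}.
```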
And at the quantum level, we just promote these to operators: you put a hat there and change the star to a dagger. So now these become the basic quantum operators — the full set of constant quantum operators — and this solves the operator equation. So the next thing is to impose the canonical commutation relations. Now we have to do a little bit of thinking. So far it has been straightforward, but now we have to think. For the harmonic oscillator, or for quantum systems of a single variable, you just have these relations. And now we need to come up with a generalization of these to a field theory, corresponding to phi of x. So now let me separate the time and the spatial coordinates. And its conjugate is pi, the conjugate momentum density. We should take them at the same time — remember, the canonical quantization conditions are always imposed at equal time. But x is a label of operators, so the spatial arguments don't have to be the same: here it can be x, and here it can be x prime. So now we have to come up with the generalization of this quantity for field theory, and we just need to do a little bit of guesswork. You can easily guess it. Before we do that, do you have any questions on this? Yes? STUDENT: Is x already an operator in this picture? HONG LIU: Yeah — so x here is always just the label of the spatial location. It's your field theory label. Yes? STUDENT: So in this procedure, I guess you always end up with your operators being constant in time.
Is there any way you can get a case where the evolution is more complex, rather than just a constant operator and a phase factor? HONG LIU: Yeah. So normally, if you have a second-order differential equation, you always have some integration constants. That's it. STUDENT: So that carries forward all throughout? HONG LIU: Yeah. Other questions? Yes? STUDENT: Should it be obvious why the classical equation of motion translates directly into the operator equation of motion? HONG LIU: Right — that's a very good question. That's just an extension of our usual procedure for quantum mechanics. In the usual procedure, even just for the harmonic oscillator — for a single-variable system — you have this correspondence between the classical system and the quantum system: when you quantize the classical system, the classical equation of motion becomes a quantum operator equation. Here we just use the same rule, because quantum field theory is just a theory of an infinite number of degrees of freedom. We are not changing the rules of quantum mechanics. So that's why we again just promote the classical equation into the operator equation. Other questions? Yes? STUDENT: One way of understanding the Heisenberg equations in quantum mechanics is via the Poisson brackets of the classical theory. Is there something like that for field theory as well? HONG LIU: Yeah, there is. Classically, you can define the Poisson bracket between the classical field variables, and then quantum mechanically it just becomes the quantum commutator. STUDENT: Is that how we could come up with those commutation relations? HONG LIU: Yeah, you can also do that. That's right.
So one way to come to this: first, you generalize the standard Poisson bracket for a finite number of degrees of freedom to classical field theory, and then you generalize that to the quantum theory. Yeah, indeed, that's one route. OK, other questions? Good. So we can just guess the answer; it's very easy to guess. Remember, if you have a single x and a single p, that's what you have. But if you have a multi-particle system in quantum mechanics, then you have x a and p a as your dynamical variables, with a running from 1 to, say, n, the number of particles. And then your canonical quantization conditions become: the commutator of x a of t with p b of t equal to i delta a b, while the different x a's commute with each other and the different p's commute with each other. So now this a and b are essentially just replaced by x and x prime: x and x prime are the continuum version of those a and b. Remember — we cannot emphasize this enough — x and x prime are the labels of your degrees of freedom. So now you can just guess. We must have the following: the commutator of phi at t, x with phi at t, x prime must be 0, and the commutator of pi at t, x with pi at t, x prime — pi is the analog of p here; those are operators — must be 0. And then the commutator of phi at t, x with pi at t, x prime should be something that is nonzero only when x equals x prime, as a generalization of this. So now you can guess: what should this be? Yeah? STUDENT: A Dirac delta? HONG LIU: Yeah, this should be just the Dirac delta. But now you can ask: why does it have to be the Dirac delta? Why should it not be, say, a derivative of the Dirac delta — say, the 100th derivative of the Dirac delta? That question can be addressed just from dimensional analysis.
So we know somehow this must be related to the Dirac delta, and now let's decide exactly how. So you can do a little bit of dimensional analysis. If you look back at the action — let me just outline the idea, because I'm sure you can do the dimensional analysis yourself — the action is dimensionless in the natural units we are using. From that, you can deduce the dimension of phi to be 1 over L, 1 over length. And from the fact that pi is equal to phi dot, pi should have dimension 1 over L squared, because you take a time derivative, which costs another factor of 1 over L. That means the right-hand side here must be something with dimension 1 over L cubed, because there are no other parameters here — well, there should also be a factor of i; that's the convention. And if it has dimension 1 over L cubed, it can only be the delta function, not the 100th derivative of the delta function. So this thing should be just i times the three-dimensional delta function, which indeed has dimension 1 over L cubed. Good? So now you can just plug in: you have the expression for phi; you take its time derivative to get the expression for pi; and then you plug them in here. Then you can find the commutation relations between those a k's. This is a slightly tedious calculation, which is, however, a little bit fun, and which, of course, I will leave for you to do. If you just plug them in, you can deduce the following commutation relations between the a's. I think this is in Pset 2 — but I can still change my mind. Yeah, I want to put it in Pset 2.
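The dimension counting just outlined can be written as a few lines of bookkeeping. Below is a toy script of my own (the variable names are mine, not from the lecture), tracking powers of length L in natural units with hbar = c = 1:

```python
# Track engineering dimensions as integer powers of length L (hbar = c = 1).
dim_d4x = 4        # d^4x ~ L^4
dim_deriv = -1     # each derivative ~ 1/L

# The action S = integral d^4x (1/2)(d phi)^2 + ... is dimensionless, so
#   dim_d4x + 2*(dim_deriv + dim_phi) = 0
dim_phi = -(dim_d4x + 2 * dim_deriv) // 2
assert dim_phi == -1              # phi ~ 1/L

dim_pi = dim_deriv + dim_phi      # pi = phi-dot picks up one more 1/L
assert dim_pi == -2

# integral d^3x delta^3(x) = 1 fixes the dimension of the delta function:
dim_delta3 = -3
# [phi(t,x), pi(t,x')] therefore has exactly the dimension of delta^3(x - x'):
assert dim_phi + dim_pi == dim_delta3
```

Any derivative acting on the delta function would shift the dimension by a further 1/L, which is why only the plain delta function can appear.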
So you find that the commutators among the a's and among the a daggers — and now we will suppress the hats, because if I write hats over and over I will get too tired — are 0, while the commutator between a and a dagger gives you (2 pi) cubed times a three-dimensional delta function of k minus k prime. So again, this is a straightforward generalization of multiple harmonic oscillators: if you have considered multiple harmonic oscillators before, the a's of different oscillators commute, and here k and k prime just label the oscillators. Essentially, you have an infinite number of harmonic oscillators, each labeled by a k. Yeah, let me just write it here: from those commutation relations we conclude that this theory, after we quantize it, becomes an infinite number of independent, decoupled harmonic oscillators, labeled by the continuous parameter k. And k is the wave number. For each k there is an a and an a dagger; between the a's themselves it's 0, between the a daggers it's 0, but a with a dagger is not 0. So this is again the continuum generalization of (1), because now you have continuous variables. Yes? STUDENT: What about the commutation relations at different times? HONG LIU: Yeah, then you cannot say for sure. STUDENT: But by Lorentz invariance, x and t are on the same footing — shouldn't we have the same relations? HONG LIU: No. You see, the quantization conditions belong to quantum mechanics, and in quantum mechanics t and x are not on the same footing. You can require your action to treat x and t on the same footing, but once you start to quantize your theory, t takes a pronounced role.
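Since the upshot is that the theory is a collection of harmonic oscillators, one per k, it may help to check the single-oscillator relation [a, a-dagger] = 1 concretely. The sketch below is mine, not from the lecture: it builds a in a truncated Fock space with numpy. The truncation necessarily spoils the relation in the very last diagonal entry, which is itself a useful reminder that [a, a-dagger] = 1 has no finite-dimensional representation.

```python
import numpy as np

N = 20                                        # Fock-space truncation
a = np.diag(np.sqrt(np.arange(1, N)), k=1)    # a|n> = sqrt(n)|n-1> in the number basis
adag = a.T                                    # real matrix, so transpose = dagger

comm = a @ adag - adag @ a                    # [a, a^dagger]

# Identity everywhere except the last diagonal entry, where the cutoff bites:
assert np.allclose(comm[:-1, :-1], np.eye(N - 1))
assert np.isclose(comm[-1, -1], -(N - 1))
```

In the field theory the same algebra holds for each k independently, with the Kronecker delta replaced by (2 pi)^3 times the three-dimensional delta function.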
STUDENT: OK, so if I wanted to, I couldn't write the commutation relations as functions of the four-vector x? HONG LIU: No. STUDENT: --delta of x-- HONG LIU: No, no. The canonical commutation relations have to be imposed at equal time. Other questions? Good. So essentially, now it's just trivial: you can build up your Hilbert space. You essentially just have an infinite number of harmonic oscillators. And there's no surprise you get an infinite number of harmonic oscillators, because we mentioned that this field theory can actually be written as a continuum limit of particles on a chain, and in those 8.03 examples, once you find the normal modes, they're all just a bunch of harmonic oscillators. This is just the three-dimensional version of that. Yeah, today we are running out of time. So next time, we will see that each excitation of these harmonic oscillators can be interpreted as a spacetime particle. That's the cool thing about it. You have this infinite number of harmonic oscillators; you can define the vacuum and then act with the creation operators on the vacuum. And you find that each excitation actually corresponds to a relativistic particle. That's how you can have an arbitrary number of particles in this theory — you can excite as many times as you want, and each excitation is a particle. Good. OK, so I think it's a good time — yeah, we are two minutes early, I think, but it's a very good place to break.
MIT 8.323 Relativistic Quantum Field Theory I, Spring 2023 — Lecture 8: Path Integral Formalism for Non-Relativistic Quantum Mechanics. HONG LIU: OK. Let us start. So last time, we started talking about interacting theories, and here is the simplest interacting theory, with a Lagrangian density of this form. We will use this as our example to illustrate how you treat interacting theories — how you treat particles with interactions. Once we understand how to do one theory, essentially you will know how to do all of them. We also said that a key quantity — a key observable — when talking about interactions is called the S-matrix. The S-matrix is defined as the following: S beta alpha equal to beta, U of plus infinity, minus infinity, alpha. You start with some initial state alpha of widely separated particles, which, as t goes to minus infinity, you can treat as a collection of free particles. Then as time evolves, they come together and scatter, and then they separate again into the far future. That's our final state beta. So S beta alpha is the transition amplitude between the two. This is written in terms of the Schrodinger picture; it can also be written in terms of the Heisenberg picture, which is normally written in this form. And it's often convenient to separate S beta alpha into an identity part, delta beta alpha, plus i T beta alpha. This second piece captures the effect of the interactions. And since we always consider theories which are invariant under both spatial and time translations, you have energy-momentum conservation. So i T beta alpha will always contain a piece — we also put an i here — proportional to the momentum-conserving delta function.
So p alpha is the total initial four-momentum — here it's the full four-momentum; I suppress the indices — and p beta is the total final four-momentum. This part is purely kinematic. The M beta alpha is the physical part, normally called the scattering amplitude. Our goal is to be able to calculate the scattering amplitude. Some version of this quantity can be measured in experiments, so theoretically we want to be able to calculate it, to be able to compare with the experiments. We also mentioned — I will not repeat it here — that this S beta alpha has various properties, which you should go back to your notes from last class to review. One of the key points is the LSZ theorem, which says that M beta alpha can be obtained from a correlation function like this — say, for example, for that theory. The n here corresponds to the total number of particles in the initial plus final states. Suppose you have 2-to-2 scattering; then n would be equal to 4. If you scatter 2 particles into 3 particles, then n will be 5 — just the total number. Later, we will discuss in more detail how you obtain M beta alpha from this; at the moment, it's enough for us to know that the scattering amplitude can be extracted from this quantity. Oh, sorry — here it should be time ordered. Let me just explain the notation. The omega is the vacuum of your interacting theory, and the T is time ordering: whichever operator has the larger time appears first, to the left. Any questions on this? So our goal is to show you how to calculate this quantity. Once you know how to calculate it, then we will be able to calculate the scattering amplitude.
And then you will be able to compare with experiments. Good. So in principle, you already know how to calculate this quantity from what you learned in 8.06. We mentioned last time that this theory cannot be solved exactly. The only way we can treat it is to take lambda small, treat this term as a small perturbation, and expand all your physical quantities in a power series in lambda. This procedure is called perturbation theory, and in your quantum mechanics class you should have already learned how to do it — for example, in 8.06, it's a big part of the class. So just with 8.06, in principle, you already know how to do this. Let me just outline the procedure. So that is a correlation function — it's called a time-ordered correlation function. What I will describe first is a naive perturbation theory: the perturbation theory you would do after you learned 8.06. But we will do something more clever later. First, let me just remind you: equipped with your previous quantum mechanical knowledge, how would you treat this problem? So now let's consider again that theory as an example. The equation of motion gives you partial squared minus m squared acting on phi equal to, say, lambda divided by 6 times phi cubed. Even though we cannot solve this equation exactly, we can solve it perturbatively in lambda. The way we do it is the following: we write phi equal to phi 0 plus lambda phi 1 plus lambda squared phi 2, et cetera. Lambda is a small number; you imagine expanding phi in a power series in lambda. Then you just plug this power series into the equation and equate the two sides at each power of lambda.
At 0th order, you just get our previous equation: partial squared minus m squared acting on phi 0 equal to 0. So phi 0 just satisfies our previous free field equation. Then at 1st order, you get the same operator acting on phi 1 equal to 1 over 6 phi 0 cubed, et cetera. The 0th-order equation we know how to solve, and once you have a solution for phi 0, you plug it into the 1st-order equation, which is now just a linear equation, so you can solve for phi 1. Similarly you can solve for phi 2, et cetera. So order by order you can solve all of them, and they all become linear equations: all the higher orders can be solved in terms of phi 0. So classically, this gives you a solution. And, as we said, quantum mechanically we just treat all of these as operator equations, and then essentially you can solve for phi perturbatively as a quantum operator. Similarly, we can solve for this omega using perturbation theory. First we expand again: this is the vacuum of the interacting theory, and we expand it in a perturbative series. At 0th order, you just get the free theory vacuum, and then you have corrections in lambda, because you have interactions: because of the lambda, your Hamiltonian changes, and so your ground state will also change. You can use your 8.06 to work out those higher order corrections. So when you have worked out phi and omega, you can work out this object — let me just call it G n. The G n can then be reduced to a calculation in the free theory, because everything can be expressed in terms of phi 0.
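The order-by-order procedure above can be tried in miniature on a 0+1-dimensional analog: an anharmonic oscillator x'' + x = (lambda/6) x cubed. The signs and units here are my own toy choice, not the lecturer's. The sketch solves the full nonlinear equation numerically, builds x0 + lambda*x1 from the free and linear sourced equations, and checks that the mismatch is of order lambda squared:

```python
import numpy as np

def rk4(f, y0, ts):
    """Fixed-step 4th-order Runge-Kutta; plenty of accuracy for this demo."""
    ys = [np.asarray(y0, dtype=float)]
    for t0, t1 in zip(ts[:-1], ts[1:]):
        h, y = t1 - t0, ys[-1]
        k1 = f(t0, y)
        k2 = f(t0 + h/2, y + h/2*k1)
        k3 = f(t0 + h/2, y + h/2*k2)
        k4 = f(t1, y + h*k3)
        ys.append(y + h/6*(k1 + 2*k2 + 2*k3 + k4))
    return np.array(ys)

lam = 0.01
ts = np.linspace(0.0, 5.0, 2001)

# Full nonlinear problem: x'' + x = (lam/6) x^3, with x(0)=1, x'(0)=0.
full = rk4(lambda t, y: np.array([y[1], -y[0] + lam/6*y[0]**3]),
           [1.0, 0.0], ts)[:, 0]

# 0th order: x0'' + x0 = 0  ->  x0 = cos(t).
x0 = np.cos(ts)

# 1st order: a LINEAR equation sourced by x0:  x1'' + x1 = (1/6) x0^3.
x1 = rk4(lambda t, y: np.array([y[1], -y[0] + np.cos(t)**3/6]),
         [0.0, 0.0], ts)[:, 0]

err = np.max(np.abs(full - (x0 + lam*x1)))
assert err < 50 * lam**2    # the leftover mismatch is O(lambda^2)
```

The key structural point carries over to the field theory: each order beyond the 0th is a linear equation with a source built from lower orders.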
And the corrections can be obtained by, say, phi 0 acting on the vacuum state, et cetera. So in the end, if you follow this procedure, you should be able to calculate this G n. Conceptually, it's not very difficult. But this procedure is actually very complicated. I don't urge you to try it yourself — though you should try a little, just to get a sense that it is actually not easy to do. That's why, in quantum field theory, we need to develop a more sophisticated technique. Because that's what people did in the early days, before a systematic perturbation technique for quantum field theory was developed. And it was not easy — but people did it. If you want to calculate to 1st order, it's actually doable, but it quickly becomes very complicated when you go to higher orders. So eventually, people developed something more clever. There are two more sophisticated approaches, which are, in the end, equivalent — but their derivations are different. The first is to use the so-called interaction picture. What you do is the following. You separate your total Hamiltonian into a free theory Hamiltonian and an interacting part. The free theory Hamiltonian just comes from — say, if I write the Lagrangian as L 0 plus L I, then this term is L I and this part is L 0 — and similarly you can write your Hamiltonian as the free theory Hamiltonian plus the interacting part. And then the interaction picture introduces states in the interaction picture and operators in the interaction picture.
The states in the interaction picture are related to the standard states in the Schrodinger picture — let me put an S here to denote the Schrodinger picture — through this factor, the exponential of i H 0 t. And the operators in the interaction picture are related to the operators in the Schrodinger picture, also through this factor of the exponential of H 0. You may already have discussed the interaction picture in your quantum mechanics class — have you seen it before? Yeah? So remember, in the Schrodinger picture, the state evolves with the full Hamiltonian, and the interaction picture relates to it by the conjugate of the evolution operator for the free Hamiltonian. So this Psi I still evolves in a complicated way: in the interaction picture, the states still evolve in a complicated way, but the operators evolve very simply — because in the Schrodinger picture the operators don't evolve, and here the evolution is controlled just by the free Hamiltonian. Anyway, with some manipulation, by introducing this interaction picture, you can design a more clever way to do the perturbation theory. I will not go through it here; you should read Peskin and Schroeder, section 4.2, where they give a detailed discussion of how you do perturbation theory using the interaction picture. The second approach is to use the path integral. Path integrals, in my own opinion, have lots of advantages over the interaction picture. When you do the interaction picture, you get the feeling that, somehow, you're doing some very clever tricks, and you don't know why you want to do them. You get some nice formulas, but somehow you feel you're doing a little bit of magic.
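The statement that the pictures carry the same physics can be checked on the smallest possible example: a two-level system with made-up numbers. Everything below is my own sketch, not from the lecture. We evolve in the Schrodinger picture, transform both state and operator with exp(i H0 t), and compare expectation values:

```python
import numpy as np

def U(H, t):
    """exp(-i H t) for a Hermitian matrix H, via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * w * t)) @ V.conj().T

sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])

H0, Hint = sz, 0.3 * sx          # free part and a small interaction (toy numbers)
H = H0 + Hint
psi0 = np.array([1.0, 0.0], dtype=complex)
O = sx                           # some Schrodinger-picture observable
t = 0.7

psi_S = U(H, t) @ psi0           # Schrodinger-picture state
psi_I = U(H0, -t) @ psi_S        # interaction-picture state: e^{+i H0 t} |psi_S>
O_I = U(H0, -t) @ O @ U(H0, t)   # interaction-picture operator: e^{+i H0 t} O e^{-i H0 t}

lhs = psi_S.conj() @ O @ psi_S
rhs = psi_I.conj() @ O_I @ psi_I
assert np.isclose(lhs, rhs)      # both pictures give the same expectation value
```

The exp(i H0 t) factors cancel pairwise in the expectation value, which is the whole point: the split into "simple operator evolution" plus "interaction-driven state evolution" is a bookkeeping choice, not new physics.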
But the path integral treatment is automatic, in the sense that it's straightforward: you don't have to think — it just follows from the path integral. It's much easier to generalize, and it's physically intuitive. That's why we will describe this approach. Sometimes, when you do too much fancy mathematics, you obscure the physical picture, and the interaction picture gives you a little bit of that feeling. But it's important for you to know it; we just don't have time to cover both, so we choose to do the path integral in class. Yes, you have a question? AUDIENCE: In the interaction picture, why do both the state and the operator evolve according to H sub 0? Shouldn't the state evolve according to the interaction Hamiltonian? HONG LIU: Yeah — just by design. AUDIENCE: Oh, OK. HONG LIU: I just defined them this way. It turns out the quantities defined this way are actually convenient for doing perturbation theory. That's why I say there's a little bit of magic here: you have to introduce these somewhat unintuitive quantities, which turn out to be quite useful when you do perturbation theory. OK, good. Any questions on this? Now let's plunge into the path integral. The path integral is an alternative way to formulate quantum mechanics — equivalent to the Schrodinger equation or the Heisenberg equations, equivalent to your standard way of thinking about quantum mechanics. The path integral, most of the time, gives you a better conceptual way to think about quantum mechanics. But in terms of calculations, for non-relativistic quantum mechanics it often does not offer any advantage — that's why the textbooks for undergraduate quantum mechanics don't teach the path integral.
Because to do the calculation, it's still much easier to solve the Schrodinger equation. But conceptually, it gives you a better picture, and for quantum field theory it actually becomes much more useful. Any questions on this? OK, good. So first, we are going to review the story for non-relativistic quantum mechanics. Once we understand this, the generalization to field theory is immediate. We will just use the example of one particle in one dimension — the most familiar one. Once we understand this example, you essentially understand everything. Good. So now, the key point about the path integral: previously, in quantum mechanics, you started with the Schrodinger equation, but the path integral starts with a different object. The object which the path integral treats most conveniently is the propagator we discussed before. This is the object which we defined, in the Heisenberg picture, to be this: the transition amplitude for a particle to go from a position eigenstate at x prime at time t prime to a position eigenstate at x at time t. We mentioned before that if you know this object, you have essentially solved the Schrodinger equation, because from the wave function at t prime, by convolving it with this object, you can get the wave function at time t. This can also be written in the Schrodinger picture as this: you start with the position eigenstate at x prime, evolve for t minus t prime, and then take the overlap with x. Any questions on this? So let me call this object K: K of x, t; x prime, t prime. If you know the wave function at t prime, then the wave function at t, x can be written as an integral of K against the wave function at t prime, x prime. Yeah, I think I changed the order.
Just to be consistent with the order I wrote there: psi of t, x equals the integral over x prime of K of t, x; t prime, x prime, times psi of t prime, x prime. So if you know the wave function at t prime, then, by doing this integral, you know the wave function at t. So knowing this object is equivalent to solving the system. Now we will describe a way to compute this object. Do you have any questions on this? Good. OK, so we will do this using a trick. Imagine this is your time axis: we start at t prime and we end at t. What we will do is divide this interval into n segments, each of which will become infinitesimal. We label the points 0, 1, 2, et cetera; t 0 is just t prime, the last point is n, and t n is t. So we have divided this into n intervals, and the width of each interval is delta t equal to t minus t prime divided by n. If we take n to infinity, this interval goes to 0. The reason for considering this: now we can rewrite the exponential of minus i H times t minus t prime as a product of n factors of the exponential of minus i H delta t. So we write this as x, then the exponential of minus i H delta t written out n times, then x prime. Now, the mathematical tool we are going to use: between each pair of such factors, we insert a complete set of position eigenstates. At each t i, you insert a complete set of position eigenstates — because here it's t 0, then going to here is t 1, et cetera. So this object K can then be written as an integral over d x 1 through d x n minus 1 of: x, exponential of minus i H delta t, x n minus 1, and so on, all the way down to x 1, exponential of minus i H delta t, x prime.
The purpose of doing this is that now we can calculate each such factor explicitly. So let's consider just one such factor: it has the form x i plus 1, exponential of minus i H delta t, x i, for some value of i, with i running from 0 to n minus 1. Now let's compute this quantity — and it is actually computable, even though the original object is very hard to compute, because you have the potential there, et cetera. In general, we don't know how to compute the full exponential. But the trick is that, once you separate it into infinitesimal steps, we can actually calculate each infinitesimal step. And once you can calculate an infinitesimal step, you just put them together, and they become the whole thing. That's the basic idea. Doing this involves two elements. The first element is the following point — now, remember, H here is an operator; let me put a hat on it just to emphasize that. If you write this out explicitly, it is minus i delta t times p hat squared divided by 2 m plus V hat of x hat. Because delta t is small — it goes to 0 — then to leading order in delta t we can actually factorize the exponential: this is approximately equal to the exponential of minus i delta t p hat squared divided by 2 m, times the exponential of minus i delta t V hat of x hat. The only corrections are of order delta t squared. This you can easily see from the Baker-Campbell-Hausdorff — the BCH — formula. Let me write it here: the exponential of A plus B is equal to the exponential of A times the exponential of B.
And then times the exponential of minus one-half the commutator of A with B, plus terms with more commutators. Now, each of A and B is proportional to delta t: here proportional to delta t, and here proportional to delta t. And this correction term involves at least the product of A and B, so it is of order delta t squared and higher. So this is one of the key reasons we want to separate into individual steps: because then we can factorize. In general, we cannot factorize this guy — it's very complicated, because p and x don't commute. For a finite t, we cannot factorize it. But for an infinitesimal delta t, we can, and we get this structure. Once you have this structure, the thing becomes easy to compute. The second element: you plug this factorized form into the expression, and you compute x i plus 1, exponential of minus i delta t p hat squared over 2 m, exponential of minus i delta t V of x hat, x i. Acting on the position eigenstate, V of x hat just gives its eigenvalue — no hat — whereas before acting, these carry hats; they are operators. So this becomes an elementary exercise in quantum mechanics — it's like a two-line calculation, which I will leave to your homework. Let me just write down the result: the square root of m divided by 2 pi i delta t, times the exponential of one-half i m times the quantity x i plus 1 minus x i over delta t, squared, times delta t, minus i delta t V of x i. So this is the answer, and this is the second element; it is readily computable using standard techniques. I will remember to put it in your homework. So now we have to use a little bit of imagination. I should have drawn this line longer.
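The claim that the splitting error is of order delta t squared can be seen numerically: halving delta t should cut the discrepancy between exp(-i(A+B) dt) and exp(-iA dt) exp(-iB dt) by roughly a factor of 4. In the sketch below (my own; random Hermitian matrices stand in for p squared over 2m and V), that scaling is checked directly:

```python
import numpy as np

def U(H, t):
    """exp(-i H t) for a Hermitian matrix H, via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * w * t)) @ V.conj().T

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)); A = A + A.T   # two non-commuting
B = rng.standard_normal((4, 4)); B = B + B.T   # Hermitian "Hamiltonians"

def split_error(dt):
    return np.linalg.norm(U(A + B, dt) - U(A, dt) @ U(B, dt))

e1, e2 = split_error(0.01), split_error(0.005)
ratio = e1 / e2
# Leading BCH correction is (1/2)[A, B] dt^2, so halving dt quarters the error:
assert 3.5 < ratio < 4.5
```

This is exactly why the time slicing works: the per-step error is O(delta t squared), there are n = T/delta t steps, so the total error is O(delta t) and vanishes as n goes to infinity.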
Let me just draw this a little bit bigger -- sorry, I drew this line too short. So here is t prime, here is t, and in between you have the different locations; I also draw Δt big. At each t_i we insert an x_i: this is t_i, we insert an x_i; at t_1 we insert x_1, et cetera. Now you imagine: in the end, we need to integrate over all possible values of the x's, and for each particular set of values x_1, x_2, ..., x_{n-1}, we can view them as a function of t evaluated at those points. That is, consider a function x(t) such that when you evaluate it at t_i, the value is x_i. If you think this way, the thing in the exponential becomes a very familiar object. So what is this object? Yes? A Lagrangian? Exactly -- it just becomes the Lagrangian. Because (x_{i+1} - x_i)/Δt is just like the time derivative, the first term is just like ẋ², and the second is just like V, times Δt. So going from here to here: with the same prefactor, each factor just becomes exp(i Δt L(x, ẋ)) evaluated at t_i -- just the Lagrangian. Remember, the Lagrangian is just (1/2) m ẋ² - V(x). So if you take out this Δt factor and this i factor, the first piece just becomes (1/2) m ẋ² and the second just becomes V, all evaluated at x_i, the value at t_i. So you recognize, you just get your Lagrangian there. So now we can write down the full K. The full K becomes the limit n goes to infinity -- because we take Δt to 0, we need to take n to infinity -- of (m/(2 pi i Δt))^{n/2} (each factor gives you a square root) times the integral dx_1 ... dx_{n-1}, and the integrand becomes exp[i Δt times the sum over i from 0 to n - 1 of L(x, ẋ) evaluated at t_i].
Because the product of the exponentials just becomes the sum in the exponent. So now you recognize this is essentially the discrete version of the integral of L over time, the integral of L dt. So the exponential part -- let me keep this integral here -- we can write as exp[i ∫ from t prime to t of dt double-prime L(x, ẋ)]. But we still have this whole bunch of complicated stuff, the limit n goes to infinity of all those factors. Normally, when you have something complicated, there's an unfailing trick in physics to deal with it. What is that trick? Do you know? Give it a name? Exactly -- we just rename it into something simple. So we just call the whole thing DX(t); this is just supposed to represent this limit, with the condition that x(t prime) = x prime and x(t) = x. And the meaning of this DX(t), if you think about it: as I mentioned, each particular choice of x_1 through x_{n-1} can be thought of as some function x evaluated at the t_i. So integrating over all possible such values corresponds to integrating over all possible functions x(t) -- mathematically, functions satisfying x(t prime) = x prime and x(t) = x. We always fix the two end points; you only integrate over the middle. So now this has a very simple physical interpretation: integrate over all possible functions x between t prime and t with the end points fixed. Physically, this just means: suppose this is the t-axis and this is the x-axis. Here is the point (x prime, t prime), and this point is (x, t).
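As a sanity check on identifying the exponent with ∫ L dt: the sketch below evaluates the discrete sum Δt Σ_i L(x_i, (x_{i+1} - x_i)/Δt) for a sample path and compares it with the exact action integral. The path x(t) = sin(πt) on [0, 1] and the toy Lagrangian with m = 1 and V(x) = x²/2 are assumed illustrative choices, not from the lecture.

```python
import math

def x_path(t):
    # sample path with fixed end points x(0) = x(1) = 0
    return math.sin(math.pi * t)

def lagrangian(x, xdot):
    # L = (1/2) m xdot^2 - V(x), with m = 1 and V(x) = x^2 / 2 (toy choices)
    return 0.5 * xdot ** 2 - 0.5 * x ** 2

def discrete_action(n, T=1.0):
    # dt * sum_i L(x_i, (x_{i+1} - x_i)/dt), the sum appearing in the exponent
    dt = T / n
    total = 0.0
    for i in range(n):
        xi = x_path(i * dt)
        xdot = (x_path((i + 1) * dt) - xi) / dt
        total += dt * lagrangian(xi, xdot)
    return total

# exact action for this path:
# ∫_0^1 [ (pi^2/2) cos^2(pi t) - (1/2) sin^2(pi t) ] dt = pi^2/4 - 1/4
exact = math.pi ** 2 / 4.0 - 0.25
err_coarse = abs(discrete_action(100) - exact)
err_fine = abs(discrete_action(1000) - exact)
```

Refining the time step brings the discrete sum closer to the continuum action, which is the limit the lecture's DX(t) notation packages up.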
So integrating over all possible functions between them corresponds to integrating over all possible paths between them. And each path is weighted by the integrand -- it's weighted by exp(iS), where S is just your action, the integral of the Lagrangian from t prime to t. So you find that you can now write this transition amplitude as an expression corresponding to summing over all possible paths, weighted by the exponential of the action. So this started as a technical trick: computing this thing directly is complicated, and we could make mathematical progress by dividing it into small intervals. But once you do that, we actually obtain something conceptually new. Classically, remember, there is a fixed path from (x prime, t prime) to (x, t); you just follow the equation of motion. But now we say that quantum mechanically, we sum over all possible paths. That's the quantum uncertainty compared with classical mechanics. So starting from that mathematical trick, we obtain something conceptually brand new, and it is physically very intuitive: quantum mechanically, you can just go anywhere -- the uncertainty principle. Good. Any questions on this? Yes? [AUDIENCE] Are there constraints on what paths are possible? Do they have to be differentiable? Yeah, this is a good question. To make this quantity rigorous is actually not easy to do, so we will not go into that.
That would require a lot of mathematics. Normally, we don't need to worry about the precise mathematics of how that path measure is defined. Mostly we think of this through the conceptual picture, and also, when we do technical manipulations, as we will see, you often don't need to know the details of how this measure is precisely defined. Yes? [AUDIENCE] A question: there's no coming back in time, right? In principle. Yeah, there's no coming back in time -- just from here to there. The path can wind around as much as you want, but in time you cannot come back. Other questions? OK, good. So there's also another form of this path integral, a slight variant of this one. I will not derive it here; let me just write it down, and again I can put it in your homework. There is a Hamiltonian form, which is much less used, but sometimes it is also useful. You can write K, the same object, as follows: with the same boundary conditions, x(t prime) = x prime and x(t) = x, you integrate over all possible paths, as here, but you also integrate over all possible momenta p(t) within this time range. And the integrand is written in terms of the Hamiltonian: exp[i ∫ from t prime to t of dt double-prime (p ẋ - H)], where H is your Hamiltonian. And p now is just some arbitrary function: for any choice of x(t) and p(t), you can evaluate this quantity, and then you sum all of them together. This alternative form can be shown, in a couple of lines, to be equivalent to the Lagrangian form; showing it is related to the intermediate step here -- doing the momentum integral in this one leads to that one. Good. Any questions? So now let me just talk about an example: how you can use this method to calculate some simple systems.
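One heuristic way to see why the momentum integral in the Hamiltonian form gives back the Lagrangian form: the stationary point of p ẋ - H over p sits at p = m ẋ, where the exponent equals L(x, ẋ). Below is a minimal numerical check of that Legendre-transform statement, with assumed toy values (m = 2, V(x) = x²/2, and arbitrary x, ẋ -- none of these come from the lecture):

```python
m = 2.0

def hamiltonian(p, x):
    # H = p^2 / (2m) + V(x), with an assumed toy potential V(x) = x^2 / 2
    return p * p / (2.0 * m) + 0.5 * x * x

def lagrangian(x, xdot):
    # L = (1/2) m xdot^2 - V(x)
    return 0.5 * m * xdot ** 2 - 0.5 * x * x

x, xdot = 0.7, 1.3
# maximize p*xdot - H(p, x) over a grid of p values; the maximum sits
# at p = m * xdot and should equal the Lagrangian L(x, xdot)
best = max(p * xdot - hamiltonian(p, x)
           for p in (i * 0.001 - 10.0 for i in range(20001)))
```

This is only the stationary-phase intuition; the full equivalence involves actually doing the Gaussian p-integral, which is the homework exercise mentioned above.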
So as I said, this method is not very efficient for non-relativistic quantum mechanics, even though it gives a different conceptual picture, and you can see that from these examples. So now let's consider a simple case: a free particle, so there's no V. So here S is just the integral of (1/2) m q̇² -- let me just use q, since my notes use q; it doesn't matter. So let's try to compute, in this theory, the amplitude -- let's call it Z(t) -- that at time 0 you are at q = 0, and then at time t you are again at q = 0. So you just come back to the same point. Corresponding to this situation -- if you call this direction t and this direction q -- at some initial point you are here, and at the final point, at the value t, you again go back to the same point in q. And then we just sum over all paths here, but for a free particle. Now, in order to compute this guy, we have to work with the discretized form, because this is only a formal notation -- remember, this is a formal notation for that object. So this is called the path integral. If you want to really calculate this path integral explicitly, then you have to go back to the discretized form. So Z(t) is equal to, in this case, the limit n goes to infinity of -- yeah, let me just not copy it; you can write it down. You just copy the earlier expression and change the x_i to q_i. And in the integrand you only have (1/2) m q̇²; there's no potential term. You can just plug that in, and then, in principle, you can calculate that integral. I will leave it as an exercise in your homework, and you will find that it recovers the standard answer. So at the end of the day, when you do this multiple integral, you find that this gives you -- actually, I didn't write down the final answer. Let me see.
So you find the final answer is a very simple one: just the square root of m/(2 pi i t). So you can do this infinite number of integrals, and then you get that answer. Of course, you can calculate that thing easily by solving the Schrodinger equation using your standard methods, but this is a good exercise to do. So this is one method to calculate it. But this method is not very efficient, because each time we have to go through this limit, and that is very annoying. So now there's a more abstract method -- much less mathematically rigorous; you can make it mathematically rigorous, but I'm not going to -- and when you get used to it, it's much easier to manipulate. So now I'm going to talk about a second method to calculate this. The first method is just to do the honest integrals. In the second method, let's first look at this -- oh, sorry, this should be integrated over dt: that expression was the Lagrangian, and here we are talking about the action. So we look at the action which appears in the path integral in that formal form: S is equal to the integral from t = 0 to capital T of dt (1/2) m q̇². So we have that. Now I'm going to slightly rewrite this expression. I will do integration by parts and write it as (1/2) m times the integral from 0 to T of dt q (-∂_t²) q. In the integration by parts, the derivative acts on only one of the q's, so I get a minus sign, and the boundary terms are all 0, because q at the initial and final times is 0. You can check this yourself. So now I'm going to rewrite this expression a little bit further. I'm going to introduce two times, t and t prime.
Let me write it as (1/2) times q(t) times [-m δ(t - t') ∂_{t'}²] q(t'), integrated over both times. These two expressions are the same: I introduced an additional t prime, but also a delta function, so when you evaluate the t prime integral using the delta function, you just get back the expression above. So now I'm going to give this kernel a name -- actually, I want to keep the minus sign inside it, just for convenience -- let me just call the bracketed object K. So what we get is S = (1/2) ∫ dt dt' q(t) K(t, t') q(t'). [AUDIENCE] Question. What's the integral over dt prime? Yeah -- the dt prime integral you can just evaluate using this delta function, and then the delta function goes away, everything becomes a function of t, and you just reduce to the expression above. [AUDIENCE] So is it like an all-time integral? No, no -- it's the same range, from 0 to T. The same range. OK. So I just call this object K. Now -- I think we have done this before -- I want you to view t as an index; imagine it is just a continuous index. Then q(t) is just like a vector -- q(t) is a vector, q(t') is the same vector -- and K is just a matrix between the two vectors. So we view q(t) as a vector with t as the index, and K as a matrix in the space of the q(t). When I write it in this form, the path integral becomes Z(T) = ∫ Dq(t) -- for simplicity, let me just suppress the initial and final conditions -- of exp[(i/2) ∫ dt dt' q · K · q]. Let me just use this simplified notation; this is the same as that guy. Yes?
[AUDIENCE] Is it ad hoc that you left the first q as a function of t and did not make it a function of t prime instead? It doesn't matter. You can do it that way, too; it won't change anything. So now look at this form: this looks like a Gaussian integral. So let's recall the Gaussian integral, which you are familiar with: ∫ dx exp(-(a/2) x²) = sqrt(2 pi / a). So this is a one-dimensional integral. But you can also have an n-dimensional integral, ∫ d^n x exp(-(1/2) x_m A_{mn} x_n) -- here the indices are summed -- and this is equal to (2 pi)^{n/2} divided by the square root of the determinant of the matrix A. So our expression is just a generalization of this integral, except the index becomes continuous. The K has two indices, t and t', just like m, n; q(t) and q(t') are just like q_m and q_n, and K(t, t') is just like A_{mn}. So this is just like an ordinary Gaussian integral, but in a space of infinite-dimensional vectors -- and not just infinite, uncountably infinite-dimensional, because it's a continuous index. So now, with this realization, we can just directly write down the answer: Z(T) must be given by some constant C divided by the square root of det K, the determinant of K. You just generalize this formula; now the determinant is taken in a more complicated space, the space of functions. The only unfortunate thing is that both C and det K are actually divergent. C is a constant, and det K is the determinant of K in the space of functions -- K is a matrix defined in the space of the q(t), which is a space of functions -- so this determinant can be divergent. But this doesn't matter: as we will later see, such constants don't matter very much for the physics. And yeah, I warned you before: in quantum field theory, we will see divergences everywhere.
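A quick numerical sanity check of the Gaussian formulas just quoted; the value of a and the diagonal 2x2 matrix below are arbitrary illustrative choices:

```python
import math

def gauss_1d(a, lim=10.0, steps=20001):
    # trapezoidal rule for ∫ exp(-(a/2) x^2) dx over [-lim, lim];
    # the tails beyond |x| = lim are negligible for a of order 1
    h = 2.0 * lim / (steps - 1)
    total = 0.0
    for i in range(steps):
        x = -lim + i * h
        w = 0.5 if i in (0, steps - 1) else 1.0
        total += w * math.exp(-0.5 * a * x * x)
    return total * h

# one-dimensional formula: sqrt(2 pi / a)
v1 = gauss_1d(3.0)

# for a diagonal matrix A = diag(a1, a2), the n-dimensional formula
# (2 pi)^{n/2} / sqrt(det A) factorizes into a product of 1D integrals
a1, a2 = 2.0, 5.0
v2 = gauss_1d(a1) * gauss_1d(a2)
```

The diagonal case is enough to see the structure; a general symmetric A reduces to it by an orthogonal change of variables, which is exactly why the determinant appears.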
So the key is to recognize which divergences are important and which divergences are not important. The unimportant divergences you just forget. We will later see that such a C always cancels when you consider physical quantities, so it will never matter. But we still need to talk about how to think about the determinant. What do we mean by a determinant here? We do it in the same way as in the matrix case. If you have a finite-dimensional matrix, there's the standard way to calculate the determinant, which you learned -- in terms of a_{11}, a_{22}, et cetera, there's a complicated formula. But that formula does not generalize to such a K, with continuous indices. So we must find a way to define the determinant which can apply to such a K, defined in the space of functions. And there is a way to calculate the determinant without using the standard formula. You do it as follows. Remember, the determinant of A is also the product of all the eigenvalues of A. So you can just find all the eigenvalues of A and take their product, and that is the determinant of A. This way of defining the determinant actually generalizes: it doesn't matter how many eigenvalues you have. So now we can just find all the eigenvalues of K and take their product; that gives us the determinant. So this is the way we generalize. And how do we define the eigenvalues? When we define the eigenvalues of a matrix A_{mn}, we find eigenvectors x_n satisfying A_{mn} x_n = lambda x_m -- the n is summed, and that gives you this. Now the analog of m, n here is t and t': the integral from 0 to T of dt' K(t, t') f_j(t') equals lambda_j f_j(t), where j labels the different eigenvalues. So this is the eigenvalue equation.
Because now the index n is just replaced by t prime, the sum over n is just replaced by an integral, and the rest is the same. So you just find all the eigenvalues of K and the eigenfunctions f(t). And of course, they should satisfy the boundary conditions f_j(0) = f_j(T) = 0, because that's the condition of our path integral: the path goes from 0 to 0. So you just solve this eigenvalue problem. This is a well-defined mathematical problem, because K is just a differential operator, and now we can find these eigenvalues and then take their product. So this is the way we generalize that formula. And this is an exercise for you to do yourself: K is just a quadratic differential operator -- the differential part is just ∂_t² -- so it's very easy. You can solve this equation easily yourself; let me just write down the answer. (Actually, i is not a very fortunate label here, since it can easily be confused, so let me call the label j.) The eigenfunctions are f_j(t) = sin(j pi t / T), with eigenvalues lambda_j = m j² pi² / T², for j = 1, 2, et cetera. So even though both indices of K are continuous, the eigenvalues are discrete -- a discrete infinity, labeled by the integer j. You can easily convince yourself: just solve this equation, it's easy to solve. So now we can find the determinant of K. It is given by the product over j from 1 to infinity of m j² pi² / T². So you say, what is this beast? We have all these infinite things: we have these constants multiplied an infinite number of times, and then we have this j².
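Both the eigenvalue result and the determinant-as-product-of-eigenvalues rule can be checked on a discretized version of K. The sketch below (with assumed illustrative values m = T = 1) uses the closed-form eigenvalues of the tridiagonal second-difference matrix: for small j they approach m j² pi²/T², and their product reproduces the known determinant N + 1 of the N x N matrix tridiag(-1, 2, -1).

```python
import math

# Discretize K = -m d^2/dt^2 on N interior grid points with Dirichlet
# boundary conditions f(0) = f(T) = 0; m = T = 1 are assumed toy values.
m, T = 1.0, 1.0
N = 2000
dt = T / (N + 1)

def eig(j):
    # closed-form eigenvalues of the (m/dt^2) * tridiag(-1, 2, -1) matrix
    return (m / dt ** 2) * (2.0 - 2.0 * math.cos(j * math.pi / (N + 1)))

# (1) for small j the discrete eigenvalues approach m j^2 pi^2 / T^2
rel_err = abs(eig(1) - m * math.pi ** 2 / T ** 2) / (m * math.pi ** 2 / T ** 2)

# (2) determinant = product of eigenvalues: for tridiag(2, -1) the
# determinant is known to be N + 1, so the log of the product should match
log_det = sum(math.log(2.0 - 2.0 * math.cos(j * math.pi / (N + 1)))
              for j in range(1, N + 1))
```

The infinite product in the continuum limit diverges, just as the lecture says; the finite-N version makes it visible that the divergence sits in the overall factors, not in the j-dependence.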
So now I claim -- again, I will not have time to do it here -- that once you do the proper things, this gives you that earlier answer: C divided by the square root of det K gets you back to that answer for Z(t). You will work through that in your homework problem. So even though this way looks hairy, once you get used to it, it's much easier than doing those integrals and taking the limit. And also, we will later see that most of the time you actually don't need to calculate the determinant explicitly. You don't need to use those things: the path integral framework somehow provides a platform for you to do a lot of things without actually evaluating it. That's the most important thing about the path integral -- it's often not about how you evaluate it. Good. So before we conclude, let me just make some further remarks, so that we don't forget next time; give me like two minutes. One remark I already made: this is a new formulation of quantum mechanics, and it gives you a new conceptual way to think about quantum mechanics. And you can show that these two formulations are equivalent. You can show this is equivalent to the Schrodinger formulation by showing that the K you calculate this way, using the path integral, actually satisfies the Schrodinger equation. That guarantees -- I already erased it -- that your wave function, when you evolve it with this K, will also satisfy the Schrodinger equation.
So you can show that this K actually satisfies the Schrodinger equation; again, I will leave it to your homework. Good. And the second point is just to highlight this contrast between classical mechanics, which has a fixed path determined by the equation of motion, and quantum mechanics, where you just sum over all possible paths weighted by your action. Just a contrast. Yeah, let's stop here. So next time, we are ready to go to field theory. With this -- if you are familiar with this and get used to everything here -- then going to field theory is immediate; you're just changing the notation. Because, as we said, field theory is just quantum mechanics with an infinite number of degrees of freedom. Once you understand 1 degree of freedom, you can generalize to an infinite number of degrees of freedom just by changing notation. So this aspect will be the same, and we can immediately write down the path integral for quantum field theory next time. And then we can talk about the interactions.
MIT_8323_Relativistic_Quantum_Field_Theory_I_Spring_2023 | Lecture_26_Quantum_Fluctuations_and_Renormalization.txt

HONG LIU: Let us start. So last lecture, we discussed Compton scattering. With that example, we have essentially covered all the tree-level diagrams in QED. Tree-level diagrams means those not involving loops: so far we have only encountered diagrams without loops, and these are called tree-level diagrams. And now you should be able to treat all possible tree-level diagrams; you already have all the tools for that. But there are many important things missing at tree level. So today we will talk about how to go beyond that: we will talk about quantum fluctuations. This is a very big subject, and it will actually make up a big chunk of Quantum Field Theory II; here it's more to give you a preview of the basic ideas of the subject. So, one feature of the tree-level diagrams we have encountered -- if you want to summarize the key feature of tree-level diagrams, what would you say? Other than the looks -- other than that they look like trees. Yes? AUDIENCE: There's like no free momenta. HONG LIU: Exactly. The key is that all intermediate momenta, say in all the propagators, are determined by the external ones. So when you treat a tree-level diagram, there is no momentum integration. This looks like a very simple mathematical feature, but it turns out to make a profound difference, both mathematically and physically. When you include loops, there are profound differences, and many new features arise. So let me just first give you some simple examples including loops.
So let's consider, say, a first example: the electron propagator, which is just a line. But now you can add a photon line here, and then you have a loop, consisting of this photon line and this electron line. You can consider this as a process in which the electron propagates, then emits a photon, and then absorbs it itself. So this photon is purely virtual. But such a process actually affects the propagation of the electron, and you can view this photon as something that fluctuates out of the vacuum -- you can imagine it comes from vacuum fluctuations. So this is one process. Another simple process: consider the propagation of a photon. The photon can actually break up -- it can create an electron-positron pair -- and then this pair can annihilate again to form another photon. Again, this is a correction to the simple propagation of a single photon: the initial state is still one photon and the final state is still one photon, but something happens in between. And again, this something that happens in between you can interpret as a kind of vacuum fluctuation. Both the electron and the positron here are completely virtual; there are no real electrons you can detect. So the third simple process is this one. Remember we have the electron-photon interaction vertex, i e gamma^mu. This vertex can also be corrected: imagine the incoming electron first emits a photon, which then gets absorbed by the electron which comes out. This corrects the vertex, and so these are called the vertex corrections. So the first two correct the propagators, and this one corrects the vertex. These are the simplest loop diagrams.
In each of them, there's a single loop, and then there's a free momentum in the loop. You have to integrate over the momentum in this loop, because it's no longer determined by the external momenta. So call this momentum k. Let me first talk about the new mathematical features of loop diagrams. Mathematically, you now need to integrate over the internal momentum -- you have a free momentum to integrate over. This may look like not a big deal, but it actually turns out to be a very big deal, because this integration, just by definition, runs over all possible momenta: each component of k is integrated from minus infinity to plus infinity. In particular, this includes the region where k^0 goes to infinity and where the magnitude of k goes to infinity. With this infinite range of integration, there is the possibility that you get divergences. And in fact, divergences do happen, essentially everywhere: whenever you look at loop diagrams, you essentially always find divergences. And such divergences are now more complicated than the simple divergent quantities we have already seen. So far, those divergent quantities could be treated in a more or less simple way: we add or subtract an infinite constant, or multiply by an infinite constant. In the case where you multiply by an infinite constant, you can always take a ratio, and then those infinite constants go away. Or you just subtract the infinite constant and redefine your energy, redefine your charge, et cetera.
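To make this concrete, here is a toy one-dimensional stand-in for a logarithmically divergent loop integral (an illustrative model, not an actual QED integral): I(Λ, m) = ∫_0^Λ k dk/(k² + m²). Each such integral grows without bound as the cutoff Λ increases, but the difference of two of them with different masses stays finite, which is the kind of subtraction just mentioned.

```python
import math

def loop_integral(cutoff, mass, steps=200000):
    # midpoint rule for I(Lambda, m) = ∫_0^Lambda k dk / (k^2 + m^2),
    # a toy model of a logarithmically divergent loop integral
    h = cutoff / steps
    total = 0.0
    for i in range(steps):
        k = (i + 0.5) * h
        total += k / (k * k + mass * mass)
    return total * h

# each integral grows like log(Lambda) as the cutoff increases ...
grow = loop_integral(1e4, 1.0) - loop_integral(1e2, 1.0)
# ... but a difference at fixed cutoff stays finite (it tends to log 2 here)
diff = loop_integral(1e3, 1.0) - loop_integral(1e3, 2.0)
```

The closed form is I(Λ, m) = (1/2) log((Λ² + m²)/m²), so the growth between the two cutoffs is about log 100 while the mass difference tends to log 2, independent of Λ.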
But in these diagrams, the divergences are much more complicated, because they come from doing an integration of some complicated expression. And so those divergences cannot be removed simply -- where you just add or subtract a constant, or multiply by a factor. In general, they cannot be removed by simple subtractions or divisions. In other words, the expressions you get from doing such integrations just don't make sense: the results of these diagrams are not well-defined. Now, normally when we encounter divergences, we always tell students: of course, one obvious reason is that you did something wrong, you just made a mistake. But if you have done everything right and you still encounter divergences, that is actually an opportunity. Because each time you see a divergence, it means there's something you don't understand about the physics. If you understood everything well, by definition you would not have divergences. So a divergence is mathematics telling you there's some important physics you're missing. So when you look at this divergence, you can ask: what physics are we missing here? The physics missing here is very simple. The divergent regions correspond to those quantities going to infinity -- very high energy, very large momentum -- in other words, to very short-distance physics. So the divergences come from a naive extrapolation: from considering QED at arbitrarily high energy scales. We used the framework of QED, but in doing this integration we involved arbitrarily large momenta.
But nobody actually told us that QED should be valid at arbitrarily high energies, because in real experiments you cannot probe very high-energy processes or very short distances; there is a limit to our energy resolution. So there's no reason to trust that QED should be valid for k^0 and k going to infinity. Yes? AUDIENCE: Is the divergence the result of a problem in the perturbative expansion? Or is it a problem with the original [INAUDIBLE]? HONG LIU: Yeah, it's not a feature of the perturbative expansion. AUDIENCE: So it's a legitimate problem. HONG LIU: Yeah, it's a legitimate problem. This divergence just comes from extrapolating QED to arbitrarily high energy scales. So you say: OK, then what do we do? The simplest thing you can do -- suppose it is the 1930s, when people first encountered this problem -- you say: let's put in a cutoff. Let's only include the energy regime we can access. But what cutoff do you put there? It's not clear what kind of cutoff to put, because it's not clear what the range of validity of this theory is. Ideally, you should put the cutoff at the boundary of the validity of QED. But before you can actually probe the scale at which QED breaks down, which scale should we put there? We don't know. So for many years, people thought that doing these kinds of diagrams just doesn't make physical sense, because we don't know the short-distance physics. And indeed, this point of view was justified. So for some years, people just gave up on these ideas; they just said, maybe this does not make sense.
But then after the Second World War, in the late '40s, a few young physicists-- Schwinger and Feynman in the United States and Tomonaga in Japan-- said: even though there are these divergences, and even though we don't understand the short-distance physics, let's see whether we can, by brute force, design some mathematical way to get rid of those divergences and still get sensible results. That was their starting point. And it was a crazy starting point, because the physical reason for those divergences was very clear. But it turned out that they actually managed to be successful. So in the late 1940s, Feynman, Schwinger, and Tomonaga found a way to remove the divergences and extract physically sensible answers from such diagrams. At the beginning, their work was very complicated, and very few people actually understood it. Feynman invented Feynman diagrams to do these calculations; previously, people did them without Feynman diagrams, using very complicated perturbation theory, as you do in nonrelativistic physics-- and that is in fact what Schwinger and Tomonaga were doing. So at the beginning this was all very complicated, but soon afterwards Freeman Dyson understood all of their work and greatly simplified the procedure. The current understanding essentially came from Dyson's simplification. And Dyson essentially ran a Feynman-diagram training camp at the Institute for Advanced Study when he was a member there. People went through it, learned the Feynman diagram techniques from Dyson, and then spread them to all the other places.
That's how Feynman diagram techniques got spread through the physics community. Anyway, Dyson simplified the process, and eventually people understood that there is a well-defined mathematical procedure by which you can actually get rid of those divergences and get a physically sensible answer. This is called the program of renormalization: the process of getting rid of the divergences. And it's a funny story, for the following reason. As I mentioned, we do know where the divergences come from: they come from our lack of understanding of short-distance physics. And normally you would believe you can only cure the divergences if you understand that physics-- if you don't understand that physics, how can you cure the divergence? That's what most people thought at the time, including all the quantum mechanics big shots. The older generation-- Dirac, Pauli, all those giants of quantum mechanics-- all believed you had to do something radical to QED, otherwise QED could not be made sense of; you had to invent some completely new formalism to circumvent those divergences. But Dyson later commented that the young people at the time said: we were the conservatives. We just wanted to make sense of those calculations; we were not thinking of grand new physics. So this is an example in which the old people wanted to do grand new physics, while the young people wanted to do conservative physics-- one calculation at a time, without a grand physical motivation. But those young people turned out to be correct, and so they came up with this program. Yes? AUDIENCE: So the [INAUDIBLE] result is more accurate than the tree-level diagram? And then how much more accurate? HONG LIU: No, it's not about accuracy.
It's about finding a sensible way to hide the divergences. AUDIENCE: [INAUDIBLE] HONG LIU: Yeah. Today we will not be able to go into the details; I'm only going to tell you the result. Their result was actually radical in some way, because they realized that somehow those divergences, even though they are there, do not affect the observable physics you can measure. So you can find some mathematical trick to remove them. This was the late 1940s, and Dyson's work was also late 1940s, early 1950s. And then, as I'm going to mention at the end of today's lecture, the first quantity calculated this way was the anomalous magnetic moment, which requires computing this diagram. Schwinger did the first calculation: he computed the correction of this diagram to the magnetic moment of the electron, and it agreed very well with experiment. So that was a triumph at the time-- the experiment verified their success. But still, many people didn't believe this was actually something sensible. For many years, people thought it was just a mathematical trick to get some answer, and many people kept doubting this program of renormalization. Only much later, in the late 1960s and 1970s, when Ken Wilson came up with the idea of the renormalization group, did people finally understand the physical reasoning and the physical basis behind the program of renormalization. That was revolutionary progress made by Wilson. All of this will be covered in detail in Quantum Field Theory II. So today let me just very quickly tell you what these three loop diagrams do. First, let me make some general remarks. Mathematically, you get divergences, and then you have to find a way to get rid of them. And physically, there are also some new features-- new, important conceptual features.
So any questions on this? The first feature I already mentioned: when we include such loops, we essentially include the quantum fluctuations of the vacuum. In these processes, the electron-positron pair and the photons are all virtual particles coming from the vacuum-- from vacuum fluctuations. In fact, you can show rigorously that any tree-level diagram can be obtained by solving your equation of motion perturbatively. Just take your nonlinear equation of motion-- for QED, write down the nonlinear equation of motion with this coupling e-- and solve that equation perturbatively, order by order. All tree-level diagrams can be understood from that process; they essentially come from solving your classical equation of motion. But we know that the genuinely quantum physics comes from doing the path integral, in which you integrate over all possible configurations; the equation of motion captures only the classical physics. So those features of the path integral are not captured by the tree-level diagrams. You may ask: do tree-level diagrams capture any quantum physics at all? Of course they do-- the very notion of particles, of quantizing the field into particles, is already a quantum effect. It's just that the tree-level diagrams do not include the fluctuations of the vacuum. Good? So this is the first feature: when you include loops, you finally include the quantum fluctuations. The second feature, which is very important, is that the physical mass, charge, and fields are not the same as those appearing in the Lagrangian. So let me elaborate on what this means.
So remember, when we write down the Lagrangian for QED, you have i psi-bar gamma-mu (partial-mu minus i e A-mu) psi, minus m psi-bar psi, minus one quarter F squared. That's the Lagrangian. And then we say that e is the charge of the electron, m is the mass of the electron, and psi denotes the electron field, meaning that when you act with it on the vacuum, you can create an electron or a positron. Similarly for A-mu: A-mu is massless, and when A-mu acts on the vacuum, it can create a photon. It turns out that when you include the loop effects, the mass we actually measure-- by physical mass, I mean the mass of the electron we actually measure in the lab-- and the charge we actually measure in the lab are no longer the same as the ones appearing in the Lagrangian. So the m and e in the Lagrangian should just be viewed as parameters, and the physical mass and charge are functions of those parameters; there is a more complicated relation between those parameters and your physical mass and charge. Also, the physical field which actually creates the electron is no longer directly given by this psi; it's related to psi by some relation. This difference is also called renormalization. So here the word renormalization is actually used in two completely different contexts: one renormalization means the removal of the divergences, and the other means that the parameters in your Lagrangian are not the same as the physical mass and charge you measure-- the things you measure are related to the parameters in the Lagrangian in a nontrivial way. And this second renormalization is, in principle, not due to the divergences; it's just due to the interactions. Heuristically, the physics is as follows: imagine you have a free electron. If you have a free electron, then the mass of the electron is simply what you measure.
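For reference, the Lagrangian written on the board at the start of this discussion can be collected as follows (a sketch in a standard mostly-minus convention, e.g. as in common textbooks; individual signs may differ from the metric convention used in lecture; the B subscripts anticipate the bare labels introduced below):

```latex
% Bare QED Lagrangian (schematic; conventions may differ from lecture):
\mathcal{L}_{\rm QED}
  = \bar\psi_B \big( i\gamma^\mu (\partial_\mu - i e_B A^B_\mu) - m_B \big)\psi_B
  \;-\; \tfrac{1}{4}\, F^B_{\mu\nu} F_B^{\mu\nu},
\qquad
F^B_{\mu\nu} = \partial_\mu A^B_\nu - \partial_\nu A^B_\mu .
```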
But now if you include the interactions, the electron can interact with the photon in a complicated way, and that interaction can change the mass of the electron. That's why, in general, for an interacting theory, the quantities appearing in your Lagrangian don't have to be the quantities you actually measure: the interactions change how those parameters translate into the physical quantities you measure. So this renormalization just means that the interactions change the values of your parameters. Now let me make this a little bit more explicit. Before I do that, do you have any questions? Yes. AUDIENCE: [INAUDIBLE] looking at QED by itself [INAUDIBLE] any other theories, wouldn't we have to assume that this parameter m corresponds to a physical mass? [INAUDIBLE] how we say [INAUDIBLE].. But you're saying that QED on its own has its way of explaining how this parameter m relates to the physical m? HONG LIU: Yeah. When I write down this Lagrangian, I have parameters e and m, and if I ignore the interactions, then m is the mass of the particle you actually measure. But when you include the interactions, the interactions can change that mass. It's not related to the Higgs story: the Higgs explains where this parameter m comes from in the first place. This renormalization is on top of that. Good. So now let me elaborate on what this means. For this purpose, let me now put a subscript B on everything, just to emphasize that they are no longer-- AUDIENCE: Can I ask a question? HONG LIU: Yeah. AUDIENCE: [INAUDIBLE] HONG LIU: Yeah, it changes the energy you need to create the particle. So now I denote everything with B; those are called the bare parameters. eB is called the bare charge, mB the bare mass, and psi-B and A-mu-B are called the bare fields. And when I write F squared, I mean the F squared constructed out of A-mu-B.
So now let's illustrate this using these three simple diagrams. Let's look first at the diagram for the propagation of the electron. What you do is calculate this diagram, and you find that it induces a correction to the electron propagator. So first let's write down the tree-level propagator: minus 1 over (i k-slash minus mB plus i epsilon)-- and note the mass here is just mB. You can interpret the mass of the electron as the location of the pole: the pole of this propagator corresponds to the mass of the electron. Yes? AUDIENCE: What does it mean [INAUDIBLE]?? HONG LIU: What I mean is that when you rationalize it, the denominator becomes k squared plus m squared minus i epsilon-- upstairs you have k-slash plus m-- so the pole sits where k squared plus m squared vanishes. Essentially, the mass of the electron corresponds to the pole of the propagator. Now, when you include such corrections, you find your propagator is modified into the following: minus Z divided by (i k-slash minus m-phys plus i epsilon), and this mass is no longer the same. The physical mass is now mB plus some correction, delta-m. So this tells you that the mass of the electron, when you include such diagrams, gets shifted away from your bare mass. If you calculate it, you find that the contribution of this diagram is infinite, and that's why you need renormalization in order to extract the finite quantity delta-m. So you find that the pole of the propagator changed. You also find that the residue changed: at tree level the residue is just 1, but now the residue becomes some other constant Z.
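Schematically, the shift of the pole and of the residue described here can be written as follows (a sketch; the overall sign and i-epsilon placement follow the mostly-plus form used in lecture, where rationalizing gives a denominator k^2 + m^2 - i\epsilon):

```latex
% Tree-level electron propagator and its loop-corrected form near the pole:
S_0(k) = \frac{-1}{\,i\slashed{k} - m_B + i\epsilon\,}
       = \frac{i\slashed{k} + m_B}{\,k^2 + m_B^2 - i\epsilon\,}
\quad\longrightarrow\quad
S(k) \simeq \frac{-Z}{\,i\slashed{k} - m_{\rm phys} + i\epsilon\,} + \text{regular},
\qquad
m_{\rm phys} = m_B + \delta m .
```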
So that means that when you act with psi-B on the vacuum, you no longer create just the state k-- here I'm writing schematically-- you actually create square root of Z times the state k, because of the prefactor. Remember, previously psi-B was equal to a-dagger, etc., so when you act on the vacuum you create a particle normalized in a certain way. But now, with this Z factor, you create the state with a prefactor square root of Z. So we normally define the physical field psi as the one which, when acting on the vacuum, creates precisely the properly normalized state. And so now there's a relation between the bare field and the physical field through this factor. This tells you that the normalization of the field itself also gets renormalized. Any questions on this? A similar thing happens for the photon. If you consider this process, it corresponds to a correction to the photon propagator. Now you find that the photon mass is not changed: even when you include such corrections, the photon is still massless. Remember why we cannot add a mass term to the Maxwell field? Gauge invariance. The fact that a mass cannot be generated by such corrections just means that the gauge symmetry is maintained, so the photon remains massless. But you do have a similar renormalization of the strength of the Maxwell field: the bare field is related to the physical field through some factor, normally called Z3. Yes? AUDIENCE: [INAUDIBLE] HONG LIU: Yeah, this is just a constant. It is defined to be the residue of the pole of the propagator. Good? So finally, let me mention a little bit about the last diagram, which I will redraw here.
So remember, if you just have the original vertex, then it is just i eB gamma-mu. Now at one-loop level you can add such a correction to this tree-level vertex, and when you do, the charge gets renormalized-- the charge gets changed. Interestingly, through a complicated calculation you find that the physical, measured charge is related to the bare charge by the same factor, square root of Z3, that appears in the relation between the bare photon field and the renormalized photon field. The same number appears in the relation between the physical electron charge and the bare charge. This is important for the following reason. Because of these two relations, eB times A-mu-B is actually equal to e times A-mu: the product is the same because the square root of Z3 factors cancel. And this is important because it means your covariant derivative does not change. Remember, the structure of the covariant derivative is needed for gauge invariance. That this combination is maintained is, again, an implication that the gauge symmetry is maintained. In other words, if you want to maintain the gauge symmetry, then eB and A-mu-B have to renormalize in opposite ways so that this product stays the same. So this diagram has two effects: the first is to renormalize-- to change-- your charge, and the second is to generate an anomalous magnetic moment. You may have learned in quantum mechanics that the electron has a magnetic moment, which is normally written as follows: mu-e equals minus e divided by 2m, times g, times the spin operator.
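Collecting the relations from this discussion (a sketch, writing the electron-field factor as Z as above), together with the magnetic-moment convention just quoted:

```latex
% Field and charge renormalizations; the combination e A_\mu is invariant:
\psi_B = \sqrt{Z}\,\psi, \qquad
A^B_\mu = \sqrt{Z_3}\,A_\mu, \qquad
e = \sqrt{Z_3}\; e_B
\;\;\Longrightarrow\;\;
e_B\, A^B_\mu = e\, A_\mu ,
\qquad
\boldsymbol{\mu}_e = -\frac{e}{2m}\, g\, \mathbf{S}, \quad
\mathbf{S} = \frac{\boldsymbol{\sigma}}{2} .
```

The invariance of e A-mu is what keeps the covariant derivative, and hence gauge invariance, intact.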
Just because the electron has spin one-half, it has a magnetic moment. And this S is the spin operator; in the spin-half space it is just given by half the sigma matrices. And this g is the so-called g factor-- just a number. Yes. AUDIENCE: [INAUDIBLE] diagrammatic expansion of the perturbation theory in powers of lambda. So you're saying you add a term and you change e with the-- HONG LIU: Yeah, the e itself can change. You just shift the value of e, right. AUDIENCE: [INAUDIBLE]. HONG LIU: Yeah, the change is also small-- the change is higher order. But this is a very good question, because, as I mentioned, naively if you look at those corrections-- this delta-m, and this change between e and eB-- they are all divergent. That's why it's nontrivial to find a scheme to make sense of these equations with all those divergences flying around. Other questions? Yes? AUDIENCE: What is the difference between square root of Z and the square root of Z3? HONG LIU: Sorry? AUDIENCE: What is the difference between square root of Z and the square root of Z3? HONG LIU: Oh, they're just different numbers-- this is one number, and this is some other number. The electron field renormalizes differently from the photon field, but somehow the charge has to renormalize with the photon factor. AUDIENCE: [INAUDIBLE]. HONG LIU: Yeah, that comes out from the gauge symmetry-- from the requirements of gauge symmetry. Good? So, about this g factor: one of the triumphs of the Dirac equation, when Dirac first wrote it down, is that the Dirac equation predicts g equals 2. And in the late '30s and early '40s, the experiments became accurate enough to test this.
At the beginning, people thought that g equals 2, and that was considered a triumph of the Dirac equation. But later, when the experiments became more accurate, they found that g is actually slightly greater than 2. The full modern accuracy came later, but already in the '40s they could see that g is no longer exactly 2-- there are some small corrections. So there was a big question at the time: how do you calculate this correction? That amounts to calculating this diagram, and that's what Schwinger did. He calculated this diagram, managed to get rid of the divergence, and managed to reproduce the small correction. So now let me very quickly tell you where this result comes from-- why the Dirac equation predicts g equal to 2. At some point I was thinking of making this a homework exercise; I never got to put it in the homework, but it's an instructive exercise, so let me try to explain where it comes from. Consider the Dirac equation for an electron coupled to the electromagnetic field. From this equation, of course, we don't directly see anything about the magnetic moment. The claim is that if you take this equation and go to the nonrelativistic limit, you get the Schrodinger equation-- the so-called Pauli equation-- for a spin-half particle, with a magnetic moment given precisely by g equal to 2. So now let's try to see this. Do you have any questions? This is just the Dirac equation itself; if you also included that vertex correction, it would generate an additional term. So now let's look at the nonrelativistic limit. Remember, the Dirac equation, as it was originally conceived, was already an equation of the Schrodinger type.
So you have the form i partial-t psi equal to H psi: you multiply both sides by the appropriate factor of gamma-0 and move everything else to the right-hand side, so the left side keeps only i partial-t. Through this procedure you find H equal to m times beta, plus alpha dot (p plus eA), plus eA0, where eA0 is the time component and eA the vector component of the gauge field. Here beta is just minus i gamma-0, alpha-i is minus i gamma-0 gamma-i, and p is minus i times the standard gradient operator. So now the claim is that if I take the nonrelativistic limit, this equation becomes an ordinary Schrodinger equation including the magnetic moment. Is it clear what we want to do? Now let's choose a basis for the gamma matrices. For the nonrelativistic limit it's convenient to use this basis, which I actually wrote down at the beginning when we first talked about the Dirac equation. As always, 1 denotes the 2-by-2 identity matrix. In this basis, beta is just (1, 0; 0, minus 1) and alpha becomes (0, sigma; sigma, 0). We also need to write psi in two-component pieces: two upper components phi and two lower components chi-- each of phi and chi has two components, four components altogether. Now let's plug all of this into the equation. Then we get i partial-t phi equal to m phi plus sigma dot (p plus eA) chi plus eA0 phi. And the equation for chi is similar, but with minus m chi: i partial-t chi equal to minus m chi plus sigma dot (p plus eA) phi plus eA0 chi. So far we haven't done anything; we have just written the equation in a different form. Now let's consider the nonrelativistic limit. In the nonrelativistic limit, p is essentially the spatial momentum when acting on the wave function, so we take it to be of order mv.
And of course, v is much, much smaller than 1 (in units with c equal to 1). Then eA0, which has units of energy, should be of order m v squared, and eA should be of the same order as p, namely of order m v. So far I haven't done anything; I've just specified these orders of magnitude. Now let's consider a transformation. In the nonrelativistic limit the total energy-- remember, a typical wave function always has time dependence like e to the minus i E t-- is E equal to m plus one-half m v squared, et cetera, and the rest mass m is much, much larger than the kinetic term. So we can try to isolate this big piece by a phase transformation: we write phi equal to e to the minus i m t times capital Phi, and chi equal to e to the minus i m t times capital X. Then the capital fields contain only the low-energy part, since we have already isolated the big rest-mass oscillation: the time dependence of Phi and X involves only the low energies. So we assume partial-t Phi is of order m v squared times Phi, and similarly for X. The e to the minus i m t factor is the fast-oscillating part in time; Phi and X are the slow parts. Now we plug this into those two equations, and we find i partial-t Phi equal to sigma dot pi X plus eA0 Phi, and i partial-t X equal to minus 2m X plus sigma dot pi Phi plus eA0 X, where I have introduced the notation pi equal to p plus eA. Now you see the difference between these two equations: the mass terms had opposite signs, so when I isolate the phase, the mass term in the Phi equation cancels, while in the X equation they add together, giving this minus 2m. Now let's look at the magnitude of each term in the X equation. The left-hand side is of order m v squared times X, while the mass term is of order m times X.
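To keep track of these steps, the phase redefinition and the resulting coupled two-component equations can be summarized as follows (a sketch consistent with the derivation above, with pi = p + eA):

```latex
\phi = e^{-imt}\,\Phi, \qquad \chi = e^{-imt}\,X, \qquad
\boldsymbol{\pi} \equiv \mathbf{p} + e\mathbf{A},
```
```latex
i\,\partial_t \Phi = \boldsymbol{\sigma}\!\cdot\!\boldsymbol{\pi}\, X + eA_0\,\Phi,
\qquad
i\,\partial_t X = -2m\,X + \boldsymbol{\sigma}\!\cdot\!\boldsymbol{\pi}\,\Phi + eA_0\,X .
```

Since the terms of order m v squared times X are negligible against 2m X, the second equation reduces to X approximately equal to (1/2m) sigma dot pi Phi.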
The sigma dot pi Phi term is just something times Phi-- we don't yet know the relative magnitude of X and Phi-- and the eA0 X term is also of order m v squared times X, since we said eA0 should be of order m v squared. So of the three terms involving X, the mass term dominates over the other two, and you can forget about those two terms in the nonrelativistic limit. So if I call the Phi equation 1 and the X equation 2, from 2 we get, in the nonrelativistic limit, minus 2m X plus sigma dot pi Phi equal to 0. So we can now solve for X in terms of Phi: X equal to 1 over 2m sigma dot pi Phi. Call this equation 3. If we plug 3 back into 1, we get a Schrodinger equation for Phi; X is no longer an independent field, it's just expressed in terms of Phi. Now you see every term is of the same order of magnitude: i partial-t Phi is of order m v squared, pi is of order m v, so sigma dot pi applied twice over 2m gives m squared v squared divided by m-- again of order m v squared-- and eA0 is of order m v squared. So everything is nonrelativistic and of the same order. Now let's look at the meaning of the (sigma dot pi) squared term. You can write it explicitly as sigma-i pi-i sigma-j pi-j; remember, pi-i and pi-j are operators, so you need to keep the ordering. Using sigma-i sigma-j equal to delta-ij plus i epsilon-ijk sigma-k, the first term gives pi squared, and the second term, using the antisymmetry in i and j, I can write in terms of the commutator: one-half i epsilon-ijk sigma-k times the commutator of pi-i with pi-j. Now it's easy to calculate that commutator: pi-i is minus i partial-i plus eA-i and pi-j is minus i partial-j plus eA-j, and if you calculate the commutator, you get minus i e F-ij. And now, if you plug this F-ij in here-- epsilon-ijk times F-ij-- what do you get?
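As an aside, the Pauli-matrix identity invoked in this step, sigma-i sigma-j = delta-ij times the identity plus i epsilon-ijk sigma-k, is easy to verify numerically (a quick sanity check, not part of the lecture):

```python
import numpy as np

# Pauli matrices sigma_x, sigma_y, sigma_z
sigma = [
    np.array([[0, 1], [1, 0]], dtype=complex),
    np.array([[0, -1j], [1j, 0]], dtype=complex),
    np.array([[1, 0], [0, -1]], dtype=complex),
]
I2 = np.eye(2, dtype=complex)

def eps(i, j, k):
    """Totally antisymmetric Levi-Civita symbol eps_{ijk} for i,j,k in {0,1,2}."""
    return (i - j) * (j - k) * (k - i) / 2

# Check sigma_i sigma_j = delta_ij * 1 + i * eps_ijk * sigma_k for all i, j
for i in range(3):
    for j in range(3):
        rhs = (i == j) * I2 + sum(1j * eps(i, j, k) * sigma[k] for k in range(3))
        assert np.allclose(sigma[i] @ sigma[j], rhs)
print("identity verified")  # prints if every (i, j) pair passes
```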
Do you remember what you get when epsilon-ijk is contracted with F-ij? You get the magnetic field: one-half epsilon-ijk times F-ij gives B-k. So when you plug this in, the (sigma dot pi) squared term just becomes pi squared plus e sigma dot B-- the epsilon contraction gives B-k, and B-k sigma-k is just sigma dot B. AUDIENCE: [INAUDIBLE] HONG LIU: Yeah, it's one-half epsilon-ijk times F-ij that gives B-k. OK, good. So now we are essentially done. The Schrodinger equation is now i partial-t Phi equal to H Phi, with H equal to pi squared over 2m-- written explicitly, (p plus eA) squared divided by 2m-- plus e divided by 2m sigma dot B, coming from that term, and then the last term, eA0. Now, this first part of the Hamiltonian is precisely what you get when you couple the Schrodinger equation to the electromagnetic field: you replace p by p plus eA for the vector potential, and you get the eA0 term. But here you also get an additional term-- this is called the Pauli term-- which appears when the particle has spin and hence a magnetic moment. So this term can be interpreted as follows. When a particle has a magnetic moment mu, the Hamiltonian contains a piece minus mu dot B. Comparing this with that term, e over 2m sigma dot B, we can read off mu: mu equals minus e divided by 2m, times 2, times sigma divided by 2. Comparing with the standard expression-- mu equals minus e over 2m times g times S, with S equal to sigma over 2-- you find that g is exactly equal to 2. So in this way, you see that the Dirac equation contains the information that the electron has a magnetic moment, with g exactly 2. Yes. AUDIENCE: [INAUDIBLE] would we expect g equal to 2 for all fermions then? HONG LIU: Yeah, yeah, yeah.
AUDIENCE: Even if we look at these perturbative corrections, like the 1 plus 0.001, etc., should that apply to all fermions too? HONG LIU: No, no, no. The corrections can be different for different fermions, because different fermions have different interactions, and those change this vertex differently. For example, muons and electrons will be different. Yes. AUDIENCE: [INAUDIBLE]. HONG LIU: Well, at high speeds you can no longer-- at high speeds these effects mix together. Relativistically, you can no longer isolate the contribution of the spin angular momentum from the orbital angular momentum; they all come together. It's only in the nonrelativistic case that you can isolate them like this. So normally, when we talk about magnetic moments, we always think about them in the nonrelativistic situation. So now, just some final words. Let me erase that number. Schwinger then did this calculation: including this diagram, he found that there is a small correction. Normally one writes g over 2 equal to 1 plus F, where F is the correction, and the correction which Schwinger first calculated is given by alpha divided by 2 pi. The reason it is proportional to alpha is that there is an e at this vertex and an e at that vertex, so e squared, which is alpha. All the complicated calculation you do essentially produces this 1 over 2 pi, since the correction has to be proportional to alpha. And this agreed with experiment very well. It was first calculated by Schwinger in 1948, and it was really a theoretical triumph at the time, agreeing very well with experiments. People then calculated to higher orders: the next order must be proportional to alpha squared, so we can parameterize it by (alpha divided by 2 pi) squared times some number a2.
You do an even more complicated-- much, much more complicated-- calculation, and you find that this a2 is, again, a single number. This involves seven diagrams: at first order there is one diagram, but at the next order there are actually seven. That was calculated in 1957. Again, it was a very important thing to do at the time, just to check QED, because the experimental accuracy had reached the level where you could compare with this number. There is actually some story behind this number too-- at the beginning it was calculated incorrectly, et cetera. You can also calculate the next one, a3, parameterizing again by the cube. This involves 72 diagrams, and it took two physicists-- I won't write down their names-- 25 years to calculate; they finally completed it in 1996. And if you want to calculate a4, that includes 891 diagrams just within QED, and it was calculated using computers. You can also calculate this for the muon; at a certain level of accuracy you also need to include contributions from interactions other than QED-- weak interactions, et cetera. Anyway, the reason you want to calculate such a number to very high accuracy is that this F can be measured to very, very high accuracy experimentally. In fact, this is one of the most accurately measured numbers in physics: F can be measured to a precision of order 10 to the minus 12. At the theoretical level you cannot quite calculate to that accuracy, but you can still compare, with very, very good agreement. So QED describes nature very, very well. So I think that's all.
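As a quick numerical illustration of the one-loop Schwinger term quoted above (using the CODATA value of the fine-structure constant; the measured electron anomaly is included only for a rough comparison):

```python
import math

# Fine-structure constant (CODATA: 1/alpha is approximately 137.035999)
alpha = 1 / 137.035999

# Schwinger's 1948 one-loop result: F = alpha / (2 pi), so g = 2 (1 + F)
F_schwinger = alpha / (2 * math.pi)
g = 2 * (1 + F_schwinger)

print(f"F (one loop) = {F_schwinger:.9f}")   # approx 0.001161410
print(f"g (one loop) = {g:.9f}")             # approx 2.002322819

# Measured electron anomaly a_e = (g-2)/2 is about 0.00115965218; the
# one-loop term alone already agrees with experiment to about 0.15 percent.
a_e_measured = 0.00115965218
print(f"relative difference vs experiment: "
      f"{abs(F_schwinger - a_e_measured) / a_e_measured:.2%}")
```

The higher-order terms a2, a3, a4 discussed above account for (most of) the remaining 0.15 percent.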
I'm honored and very happy to have this opportunity to take you to this first part of the journey to quantum field theory, and I want to thank you all for making this journey very enjoyable and rewarding for myself, and I hope you have learned something from this class, and whatever your career path takes you, I hope you will later find this course useful. Thank you. [APPLAUSE]
MIT_8323_Relativistic_Quantum_Field_Theory_I_Spring_2023 — Lecture_16_Quantization_of_the_Dirac_Theory.txt

[SQUEAKING] [RUSTLING] [CLICKING] HONG LIU: So last time, we started talking about the quantization of the Dirac theory. And so, if we start with the Lagrangian density of the Dirac theory, then we can isolate the time-dependent part-- the part with only the time derivative-- and it has the following form. And then we interpret this as the canonical momentum for psi. And then we have pi psi. And then this one we just interpret as the Hamiltonian-- Hamiltonian density. And then we can just proceed to quantize the theory. We write down the canonical quantization condition, and then we expand psi in terms of complete sets of solutions, et cetera. And then we recognize that if we do it in the standard way, say impose the commutator relation, then the Hamiltonian actually is unbounded from below. So, actually, the total energy can be arbitrarily negative. And so, that theory does not make sense. And, also, if we use the standard way to quantize it, we would get a theory in which, when you exchange two particles, they commute-- the states are symmetric. And so, that looks like a bosonic theory rather than a fermionic theory. And then, there's a simple fix. So the fix is that instead of imposing the standard commutation relation, we impose the following one. So we replace everywhere the commutator by the anticommutator. So what we do, say, for the commutation relation: we consider psi alpha (t, x) and pi beta (t, x prime). So the anticommutator is equal to i delta alpha beta delta 3 (x minus x prime). We impose this condition. Which is the same, if we plug in this pi, just given by i psi dagger: the anticommutator of psi alpha (t, x) with psi beta dagger (t, x prime) gives you delta alpha beta delta 3 (x minus x prime).
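For reference, the quantization condition he writes on the board can be summarized as follows (an editorial restatement in the lecture's notation):

```latex
% Equal-time canonical anticommutation relation for the Dirac field
\{\psi_\alpha(t,\vec{x}),\, \pi_\beta(t,\vec{x}\,')\}
  = i\,\delta_{\alpha\beta}\,\delta^{3}(\vec{x}-\vec{x}\,')

% With the canonical momentum \pi = i\psi^\dagger, this is equivalent to
\{\psi_\alpha(t,\vec{x}),\, \psi^\dagger_\beta(t,\vec{x}\,')\}
  = \delta_{\alpha\beta}\,\delta^{3}(\vec{x}-\vec{x}\,')
```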
So this equation-- so one thing to notice: the anticommutator is symmetric between these two, because now it's a plus rather than a minus. And so, note, this commutation relation is self-consistent. So if we call this equation one-- so one is self-consistent at x equal to x prime. So if you take these two at the same point-- sorry, so this should be prime. So if you take these two at the same point, then on the left-hand side you have psi and psi dagger, so the left-hand side is a positive definite quantity. And the right-hand side is also a positive definite quantity: you have this delta function evaluated at 0, even though it is divergent. And so, both sides are positive definite. So the fact that this is self-consistent, if you trace it back, has to do with the fact that this pi is actually equal to plus i psi dagger. If pi were instead equal to minus i psi dagger, then, of course, you would have a contradiction, because the left-hand side would be positive while the right-hand side would be negative-- you would change the sign. And that sign traces back to this sign. So that's the reason we actually want to have this sign here. So the sign here is there for a reason. Even though, classically, when you write it down, it was not important what the sign is, quantum mechanically it's actually important. So the sign in L is important. Good. Any questions on this? Yes. AUDIENCE: Is that the [INAUDIBLE] when you replace the commutator with the anticommutator? Is that a generic feature when we describe things in harmonic series? HONG LIU: Yeah. I'm going to mention that. Yeah. I'm going to mention that. Yes. AUDIENCE: So in quantum mechanics, the fundamental commutation relation tells you that x and p don't commute.
But in the classical limit, h bar goes to 0, they do. HONG LIU: Yeah. AUDIENCE: And that's a very physically intuitive idea. HONG LIU: Right. AUDIENCE: But then, here, h bar goes to 0 just tells you something about the anticommutator. So how do we determine the classical? HONG LIU: Yeah. So the classical limit-- yeah, this is a very good question, which we will discuss later. In the classical limit, they become anticommuting-- yeah, in the classical limit, essentially they become anticommuting variables. And this is crucial when we later describe the path integral. We will discuss this in detail later. Other questions? Yes. AUDIENCE: So can you say anything about the commutation relation between them, or can you not impose it? HONG LIU: Oh, yeah. You can talk about the commutation relation between them, just you cannot impose it. AUDIENCE: But does it-- do you get any physical meaning from the commutation relation between two objects, or-- HONG LIU: Yeah. These are two operators. You can always consider whatever quantities about them. AUDIENCE: There isn't a general-- HONG LIU: Yeah, but there's no general rule. The general rule we impose is for the anticommutator. So in the second step, as we did before, we expand psi(x) in terms of a complete set of modes, which we have already worked out last time. So we have a k,s for the u s solutions, with the expansion e to the ikx. And then, similarly, as with the complex scalar, we have the other set-- so now let me call this c. Last time I used b, but it's often conventional to use c. And I put the dagger here. It's not so important whether you put the dagger here-- it's a convention. So this is our complete set of solutions, and we expand psi in terms of them with arbitrary coefficients. And since psi is generally complex, these two don't have to be the same.
So these two, yeah-- this is like the complex scalar case. They don't have to be the same. And so, quantum mechanically, a and c, they are operators. So if I call this equation two, and then if we plug two into one, then we can get the anticommutators between a and c. And so, we find that, for example, ak r with ak prime s dagger, equal to ck r with ck prime s dagger, is now equal to (2 pi) cubed delta 3 (k minus k prime) delta rs. And all other anticommutators-- I emphasize, anticommutators-- are zero. And, yeah, so the story is similar as before, except you just everywhere replace the commutator by the anticommutator. And then, we can still define the vacuum. You can still define-- later, we will see that this is justified. So we can still define the vacuum as annihilated by aks and cks for any k and s. So we define the vacuum. The reason this is the vacuum is because, now, if you find the Hamiltonian-- so you can plug this in into your expression for the Hamiltonian, because it is just expressed in terms of psi, then you can just work it out, just straightforwardly as we did previously for the scalar field. Just now the algebra is slightly more complicated. So now you get omega k. And then you have the sum over s equal 1, 2. And then you have aks dagger aks plus cks dagger cks, then plus some constant. Just as in the scalar case, we always have some zero-point energy. So now you see the Hamiltonian is positive definite. And that is because we have a plus sign here rather than the minus sign we would get if we imposed the commutation relation. And then, since this is positive definite, that justifies defining this 0 as the vacuum. Because this state indeed has the lowest energy, given by this E0. And, yeah, E0, as in the previous scalar case, would be divergent.
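The expansion and the resulting algebra and Hamiltonian can be collected as follows (an editorial summary; the overall normalization factors and exponent signs are assumed to follow the scalar-field conventions of this course and may differ between textbooks):

```latex
% Mode expansion of the Dirac field (schematic normalization)
\psi(x) = \sum_{s=1,2}\int \frac{d^3k}{(2\pi)^3}\,\frac{1}{\sqrt{2\omega_k}}
  \left[a_{\vec k,s}\, u_s(\vec k)\, e^{ikx}
      + c^\dagger_{\vec k,s}\, v_s(\vec k)\, e^{-ikx}\right]

% Nonvanishing anticommutators of the modes
\{a_{\vec k,r},\, a^\dagger_{\vec k',s}\}
  = \{c_{\vec k,r},\, c^\dagger_{\vec k',s}\}
  = (2\pi)^3\,\delta^{3}(\vec k-\vec k')\,\delta_{rs}

% Hamiltonian; E_0 is the divergent zero-point constant (negative here)
H = \int \frac{d^3k}{(2\pi)^3}\,\omega_k \sum_{s=1,2}
  \left(a^\dagger_{\vec k,s} a_{\vec k,s}
      + c^\dagger_{\vec k,s} c_{\vec k,s}\right) + E_0
```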
Any questions on this? Yes. AUDIENCE: On that line, what is the coefficient of the exponential? It's minus i k x? HONG LIU: Oh, right. Sorry. Yeah. Good. Thank you. Good. So let me make some comments on this E0. So, again, the quantized field, as in the scalar case, essentially consists of an infinite number of "harmonic oscillators." So now I need to put the "harmonic oscillators" in quotes. Because now these harmonic oscillators are defined in terms of anticommutators rather than commutators. And so, that's a very important feature. And, normally, we call them fermionic oscillators-- for a reason which will be clear very soon. And so, for each k and each value of s, you have an oscillator. And then you have an oscillator generated by a and also an oscillator generated by c. So each fermionic oscillator-- so this is like the harmonic oscillator-- contributes to E0, to this ground state energy. So, remember, previously, for the scalar case, each oscillator contributes 1/2 omega k. So now, in this case, they contribute minus 1/2 omega k. It's actually the opposite sign from the standard harmonic oscillator. In the standard harmonic oscillator it's 1/2 omega, and here it's minus 1/2 omega. So let me also say why we call it a-- so let me just make some general remarks. Say you have an oscillator satisfying this kind of anticommutation relation. Let's just look at one set of them-- just take any one of them, with any choice of k and s, and they satisfy that relation. So this defines the fermionic harmonic oscillator. And an interesting feature of this oscillator is that you can only excite it once.
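The two-state algebra of a single fermionic oscillator ({a, a dagger} = 1, a squared = 0) can be checked with an explicit 2-by-2 matrix representation (an editorial sketch, not part of the lecture):

```python
import numpy as np

# A single fermionic oscillator on a two-state space.
# Basis ordering: index 0 = |0> (empty), index 1 = |1> = a†|0> (occupied).
a = np.array([[0.0, 1.0],
              [0.0, 0.0]])
a_dag = a.T  # Hermitian conjugate (matrices are real here)

anticomm = a @ a_dag + a_dag @ a

print(anticomm)            # the identity matrix: {a, a†} = 1
print(a_dag @ a_dag)       # the zero matrix: (a†)² = 0, the Pauli principle
print(np.diag(a_dag @ a))  # number operator eigenvalues: 0 and 1 only
```

Acting with a dagger twice gives exactly zero, so each mode holds at most one quantum, which is the two-state structure described next.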
Because if you have a dagger acting on 0-- now if you have a dagger squared acting on 0, then this is 0. So you only have two states. One is 0 and one is a dagger 0. It's because the anticommutator of a dagger with itself is 0. So this just tells you that a dagger squared is equal to 0. And, also, a squared is equal to 0. So you act twice-- and this is precisely the Pauli principle: put two particles in the same state, and you get identically 0. So if you act any of these a, k, s-- if they are the same, if you act them twice, you get 0. So if you have two particles in the same state, when k and s are the same, then all their quantum numbers are the same. And so, this is the Pauli principle. So no two particles can be in the same state. So here, we conclude that aks dagger and cks dagger create fermions. They create particles which obey the Pauli principle. Good? Any questions on this? Good. So now we can define-- as before, we can define single-particle states. So the single-particle states, say, you just act them once. So, again, we define them with this normalization, as in the scalar case. So ks is aks dagger acting on 0. And then we define ks bar to be the one created by c. So as before, in the scalar case, we can interpret this k as the momentum. You can check explicitly, as we will describe, that the k indeed gives you the momentum eigenvalue. And then, the s can be interpreted as some kind of polarization. So this we call particle. And we will call this antiparticle. And so, each particle has two polarizations, since s equals 1, 2. And they all, of course, satisfy k squared equals minus m squared. Good? Any questions on this? AUDIENCE: Is s and s bar different? HONG LIU: Hmm? AUDIENCE: Is s and s bar different? HONG LIU: Is s and s bar different?
Yeah, because one is created by a, and one is created by c. So we just use a bar to distinguish them. Yeah, this one corresponds to a polarization of the particle, and one with a bar corresponds to a polarization of the antiparticle. Good? Yeah, so you can also find the Noether charge. Say you can find the Noether charge for translation. So we already talked about the Hamiltonian. You can also find the Noether charge for spatial translation. And then you just find the P, capital P-- yeah, you can just similarly work it out. So, actually, in your pset, you will work out the stress tensor. So you find that the Noether charge for the spatial translation is given by this. And then, if you plug it in-- again, to save the effort, I just say you have a dagger a and c dagger c. So you can see immediately these two states will be eigenstates, say, of the momentum operator with eigenvalue k. So that justifies that these are the momentum eigenstates. So we can also define the normalization-- you can check the normalization for those states. So these are the plane waves. So you can check just by using the anticommutation relations. And this is given by 2 omega k times (2 pi) cubed delta 3 (k minus k prime) delta rs. And, similarly, for the antiparticle, you have the same thing. But if you take the overlap between the particle and the antiparticle, you always get 0, because the anticommutator between a and c is always 0. So the overlap of k, r with k prime, s bar is always 0. They are always orthogonal. So let us make some further notes here. So since each particle is a fermion-- we already said they should be fermions-- and it has two components, we again guess they must be spin-1/2 particles. And so, you can check explicitly.
Yeah, so you can guess that these would be spin-1/2 particles. But you can check explicitly. Indeed-- so you can construct the angular momentum operator, because this theory is Lorentz invariant, so you can construct the Noether charges associated with Lorentz transformations. And then you can construct the angular momentum operator. And then you check. And, indeed, you find that the ks and the ks bar have the eigenvalues of spin-1/2 particles. So this you can check explicitly. Good. Any questions on this? And this will be in your pset. This is a little bit non-trivial calculation, but it's an instructive one. And when you see it with your own eyes, from your own calculation, that this is spin 1/2, it is satisfying. Good. Any questions on this? So, more generally-- we will certainly not prove it here-- you can prove there exists a so-called spin-statistics theorem. So half-integer spin fields can only be quantized using anticommutation relations-- anticommutators. Not only spin 1/2: also 3/2, 5/2, et cetera. The Dirac field is the simplest of them. And with the anticommutation relations, they obey the so-called Fermi-Dirac statistics. So they obey the Pauli principle. Yes-- in statistical physics, when you exchange the particles, the wave function gets a minus sign; that's called Fermi-Dirac statistics. And in contrast, if you have integer spin, say like a scalar, or like a photon-- they have integer spin, a photon has spin one-- then you can quantize them using commutators. So integer spin fields can be consistently quantized using commutators. And in this case, you get the Bose-Einstein statistics, meaning that when you exchange the particles, the wave function remains symmetric. And so, this is very general.
This is very general. Good. Any questions on this? Yes. AUDIENCE: So are there other relations we could use to quantize a field here? HONG LIU: Yeah, that's a very good question. So 2 plus 1 dimensions is very special-- in 2 plus 1 dimensions, there are more general statistics than bosons and fermions. There are things in between, called anyons. And they play a very important role in condensed matter physics. The anyon, actually, was proposed by our colleague, Frank Wilczek, in the early '80s. And, yeah, at the beginning it sounded like a fantasy, but later it actually found many important applications in condensed matter physics. Yes. AUDIENCE: What rule we've seen so far tells us that we should interpret the a and the c operators as particles and antiparticles? HONG LIU: Yeah, it's just convention. It doesn't matter, as long as they are anti to each other. AUDIENCE: I guess, what indicates they are-- HONG LIU: Good. That's what we are going to talk about. Other questions? Yes. AUDIENCE: So the spin 1/2 here comes from the possible values of s, which come from solving the classical Dirac equation? HONG LIU: Yeah, so spin 1/2, you can guess it from the two components. But for this one, you can just work out the eigenvalues. You will see, it's 1/2. Yeah. AUDIENCE: So when we generalize it to higher half-integer spin [INAUDIBLE]. HONG LIU: Yeah. AUDIENCE: We have more components? HONG LIU: Yeah, you will have more components, yeah. AUDIENCE: And we will be-- HONG LIU: Yeah, you will have more components, yeah. Other questions? Yes. AUDIENCE: You said that the energy is positive definite just because you got rid of the negative sign. But how do you know that these new operators, like c, are going to behave like you want them to? I guess, because we don't know the spectrum. HONG LIU: What do you mean? AUDIENCE: The eigenvalues of c [INAUDIBLE].
HONG LIU: No, we know everything about them. Once I specify the anticommutators, everything is fixed. The spectrum is fixed. The energy spectrum just follows from here. So each a or c can just create one particle. So the spectrum is completely fixed, the energy is fixed. Yeah, we know everything about them. Yes. AUDIENCE: Can you have a theory that has spin-1/2 particles but no antiparticles? HONG LIU: Oh, yeah, that's a very good question. And so, that's the Holy Grail of neutrino physics and also in condensed matter physics and quantum computation. So people have been looking for these Majorana fermions, which we will talk about later. The Majorana fermion is the counterpart of the real scalar; here, the Dirac fermion is more like the counterpart of a complex scalar. But we will talk about it-- we will be able to define something like a real fermion, which is then its own antiparticle. Yeah. Yes. AUDIENCE: Can you write down something that looks analogous to the Dirac equation for spin 3/2 or 5/2? HONG LIU: Yeah, you can write it down. Yeah. AUDIENCE: You can find those? HONG LIU: Yeah, I think people by now have written down equations for any spin. Yeah, in principle, you can write it down for any of them. In the end, it boils down to group theory. Yeah. Yes. AUDIENCE: I don't know if this makes sense. But the particles that are excited, the excitations of this field, will they still obey the uncertainty relation, given that we've gotten rid of the position-momentum commutation relations? HONG LIU: So even in the boson case, our commutation relation is not related to position and momentum. It is the field with its conjugate momentum. There's no position operator anymore.
AUDIENCE: How do you get a single-particle wave function from a [INAUDIBLE] a single-particle wave equation? HONG LIU: Yeah, in the non-relativistic limit-- you have to take the non-relativistic limit. Yeah. Good. Other questions? Yes. AUDIENCE: In step two, how do we know that all the solutions that we listed out form a complete set of all the solutions? HONG LIU: Yeah, because we have solved the Dirac equation. The Dirac equation is a linear equation. And once you have solved it, you know you have found all the solutions. Yeah, because we do a Fourier transform, convert it into an algebraic equation-- and for the algebraic equation we know when we have found all the solutions. So those things we can know with full confidence. Good. So now let's explain in what sense we call one of them the particle and one the antiparticle-- in what sense they are anti to each other. So this is very similar to the boson case when you have a complex scalar. And, remember, in the complex scalar case, the reason we call one particle the particle and the other the antiparticle is that they have the same mass, the same spin-- spin 0 in that case-- but opposite charge. So here we can also define a charge for them. Now let's talk about charge. So remember, in your pset, when we first talked about the Dirac equation, you derived that if you treat the Dirac equation as a wave equation, then you can derive a probability current. And the zeroth component of that current is actually positive definite. Remember? So there, we treated the Dirac equation as a wave equation. And then, similarly to what we did with the Schrodinger equation, we can derive an equation like partial mu j mu equals 0. And the j mu-- up to a sign; so let me just choose a minus sign here-- is given by this, using our current notation. Of course, at the time, we didn't know those. Yeah, do we know the bar? I forgot. Anyway, so you have this j.
You can derive this j. And the zeroth component of j is just psi dagger psi, which is manifestly positive as a classical function. And so, I mentioned that this was Dirac's main motivation for writing down the Dirac equation: to look for a probability current with a positive probability density. And so, this is positive definite classically-- when you treat it as a wave equation. But now we don't think of the Dirac equation as a wave equation; we treat it as a field theory. So as a field theory-- so we have the action. So this action has an obvious symmetry, similar to the complex boson case. You can rotate psi by a constant phase: psi goes to e to the i alpha psi, and then psi bar goes to e to the minus i alpha psi bar. And so, obviously, this is invariant if alpha is a constant. And so, this is a U(1) symmetry. So when you have a symmetry of rotation by a phase, it's a U(1) symmetry. And so, from the Noether theorem, there must be a conserved current corresponding to this phase rotation. So without doing any calculation, you should already be able to guess that this is the current. So this now becomes the Noether current-- if you work out the Noether current, you just find this one. So, now, instead of interpreting this as a probability current and this as a probability density, as you would if you treated it as a wave equation, here we just interpret it as the conserved current for this U(1) symmetry. And this is the zeroth component of this conserved current. And so, we interpret it as some kind of charge density, as we did for the scalar case. So now you can just work out what the Q is. So Q, defined this way, is the total conserved charge, which you get by integrating j0 over all space. And it's just equal to that. So, naively, this is a positive definite quantity.
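Written out, the symmetry and its Noether current read as follows (an editorial restatement; the overall minus sign is the convention he mentions above, chosen so that the charge density comes out as psi dagger psi in the metric conventions of the lecture):

```latex
% U(1) phase rotation of the Dirac field
\psi \to e^{i\alpha}\psi, \qquad \bar\psi \to e^{-i\alpha}\bar\psi

% Conserved Noether current and total charge (overall sign is a convention)
J^\mu = -\,\bar\psi\,\gamma^\mu\,\psi, \qquad \partial_\mu J^\mu = 0,
\qquad Q = \int d^3x\, J^0 = \int d^3x\, \psi^\dagger \psi
```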
But, actually, now if you plug in the expression for psi, then what you find is the following. You find that Q has the following form: aks dagger aks minus cks dagger cks-- summed and integrated as before-- and then plus an infinite constant. So, yeah, we will define the quantum operator Q by forgetting about this infinite constant. So now you see something very interesting happens. Let me call this constant Q0. And Q0 is infinite, just as the constant energy E0 is infinite-- here it is also infinite. But we can just define the Q to include only this part. And then, by definition-- so if I define the Q just by that part, then Q acting on the vacuum just gives you 0, because the a and c act directly on the vacuum. So it means that the vacuum has 0 charge. Good. But now you see something interesting happens. So do you observe something interesting here? Yes. AUDIENCE: It should be [? positive ?] then and shown like this. But here, if we define [INAUDIBLE]. HONG LIU: That's right. Good. So naively this quantity is positive definite. But this quantity by itself is divergent. And when we say something is positive definite, if it's divergent, then it does not mean very much. So now, when we want to make it finite, the way we make it finite is we define it so that, when it acts on the vacuum, it gives 0-- so that the vacuum has 0 charge. So when we throw away this infinite constant, then, actually, this can be either positive or negative. And, yeah, so this is the magic: by throwing away an infinite constant, you can make a positive quantity into one of arbitrary sign. But this is a good thing.
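The sign structure of Q can be illustrated on a toy Fock space with one particle mode and one antiparticle mode (an editorial sketch built from 2-by-2 fermionic oscillators, not from the lecture; the Jordan-Wigner sign string needed for true multi-mode anticommutation is omitted, which does not affect the number operators used here):

```python
import numpy as np

# One fermionic mode in the basis (|0>, |1>): a|1> = |0>, a†|0> = |1>.
a1 = np.array([[0.0, 1.0],
               [0.0, 0.0]])
id2 = np.eye(2)

# Two-mode Fock space: particle mode (a) tensor antiparticle mode (c).
a = np.kron(a1, id2)  # acts on the first factor
c = np.kron(id2, a1)  # acts on the second factor

# Charge operator Q = a†a − c†c, with the zero-point constant already dropped.
Q = a.T @ a - c.T @ c

# Diagonal entries on |00>, |01>, |10>, |11>:
# vacuum: 0, one antiparticle: -1, one particle: +1, a pair: 0.
print(np.diag(Q))
```

The vacuum and the particle-antiparticle pair are neutral, while single-particle and single-antiparticle states carry opposite unit charges, which is exactly the structure discussed next.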
So now, if you look at how it acts on the a and c states-- because of this sign difference, you find that when you act on ks, you get eigenvalue 1, and when you act on ks bar, you get eigenvalue minus 1. So this has charge 1, and this has charge minus 1. So you see that for this particle and antiparticle, everything else is the same: they have the same mass, they have the same spin, spin 1/2, but they differ by having opposite charge. So that's why we call one of them the particle and the other the antiparticle. Yes. AUDIENCE: Can we interpret this just like the probability charge? HONG LIU: No, no, no. This is not probability, this is just charge. There's no probability interpretation anymore. Probability is only when you treat it as a wave equation. But we don't treat it as a wave equation. We treat it as a field theory. So in the field theory, this is just some charge particles can carry. It has nothing to do with probability. Yes. AUDIENCE: Does the fact that Q naught is positive break any symmetry between particle and antiparticle? HONG LIU: What do you mean by-- AUDIENCE: So just right there, Q naught goes to positive infinity. HONG LIU: Yeah. AUDIENCE: Does it break-- I don't know-- particles and antiparticles are not the same? HONG LIU: Yeah, you always have to choose a reference. Even if you keep this Q0, that just means that your vacuum has a charge, which is Q0. And, again, the particle will increase the charge by 1 and the antiparticle will decrease the charge by 1. So this aspect is the same. So when you act these on the vacuum, you just change the charge. Here, for convenience, we choose the vacuum to have 0 charge. Good. Any other questions? Yes. AUDIENCE: Does charge have a physical interpretation, or how do I interpret the charge in this instance?
HONG LIU: Yeah, so one is the electron, one is the positron. So these are the electric charges we observe. So this is the electron-- when you apply this theory to the electron, then this is just the charge of the electron. So, yeah, this is actually my next remark. So, applied to the electron: Q can be interpreted, up to a sign, as the electric charge. Yeah, up to a sign and a unit. So this is essentially the electric charge. We will see later, this is the charge which couples to electromagnetism. Yes. AUDIENCE: So I'm a little bit confused how you can define the notion of charge without even defining the notion of force or force carrier, anything like that. HONG LIU: Yeah, the concept of charge is independent of the force. Even though sometimes you're used to this concept of charge through the force, charge by itself is an independent concept. When we talk about the Maxwell theory and couple this thing to the Maxwell theory, it will become clearer. But the bottom line is that charge-- yeah, you can define it independent of the force. It's just some quantum number. Yes. AUDIENCE: Yeah, so how do you get out a value for what that term is? Right now, it's just one or minus one. HONG LIU: Right. Yeah, so indeed, here there's a unit. Because you can multiply that j by an arbitrary constant, and that defines your unit for your charge. And so, in principle, that unit has to be determined by experiments. Good? So the Dirac equation-- so it's very important-- the Dirac theory predicts, if you say this is the theory of the electron, that the electron has an antiparticle. We call it e-plus. So the e-plus has the same mass and spin but just opposite charge from the electron. And, of course, when Dirac wrote down this theory, there was no e-plus yet. People only knew the electron.
Even though Dirac had all the wrong motivation to write down this theory-- trying to treat it as a wave equation, et cetera-- he correctly predicted that somehow this theory predicts another positively charged particle. And that was in-- I think-- 1929 or 1930, when he first wrote this down. So, at that time, to predict a new particle was considered to be crazy. Twenty years later, people tried to predict one particle a day. [LAUGHTER] It became a fashion. But in 1929, 1930, to predict the existence of new particles-- people would just call you absolutely crazy. So, yeah, I will not explain how he predicted this particle. Anyway, he tried to understand this wave equation and then predicted there must be some antiparticle. But at the beginning, he was a little bit afraid. He was worried people would just call him crazy. So he said, this particle, maybe it's a proton. [LAUGHTER] Because people knew the proton at the time. He said, maybe this particle is the proton. But, of course, he should have known immediately himself that the proton does not have the same mass as the electron. [LAUGHTER] So this cannot be the proton. So then he quickly gave up that idea and said, oh, maybe this is a new particle. And so, luckily, in 1931 he changed his mind. He said, this is a new particle. And just in 1932, Anderson discovered it in cosmic string-- oh, no, no, in cosmic rays. They found this new particle, which has exactly the same mass as the electron but opposite charge, in cosmic rays. So that became a very happy story. So any questions on this? Yes. AUDIENCE: So wouldn't any [INAUDIBLE]? HONG LIU: Yeah, so indeed, when he first wrote it down, he treated it as a wave equation. And a wave function is complex. So he naturally just treated it as complex. He didn't even try to make it real.
Yeah, it is very natural for it to be complex. It turns out, if you want to write a real equation, it requires a little bit more effort, which we will describe, I think, maybe next lecture. And then you can get to the real version of this. Other questions? But that takes a little bit more effort. And that had to wait until Majorana, who discovered it. Yes. AUDIENCE: So is this whole framework specific to charged spin-1/2 particles? Because it's saying it has to be charged and it has to be spin 1/2, right? HONG LIU: Yeah, exactly. Yeah. AUDIENCE: Is there a version that could accommodate a chargeless spin 1/2, or is that just a different formalism? HONG LIU: Yeah. Chargeless spin 1/2-- that is the Majorana fermion, which we will talk about maybe next lecture. Yeah. AUDIENCE: But I guess-- [INTERPOSING VOICES] --build out of this framework, or is it-- HONG LIU: Yeah, it's built from this framework, but with a little bit more elaboration. Other questions? Good? So now, after talking about this quantization, we can talk about correlation functions of these Dirac fields. So let's first look at, say, the Wightman function, for which you just don't do any time ordering. So let's look at this object. I call it D plus alpha beta, which-- again, due to translation symmetry-- depends only on x minus y. Yeah, you can also exchange them; that's called D minus, so it doesn't matter. So let's look at this object. So this is a 2-point correlation function between these two fermionic fields. And then, you can just plug in the expansion for each of them. Then you just work it out as we did before for the boson. So let me just outline one intermediate step. So when you plug them in, you find something like this: you find a sum over s equal to 1, 2 of u s alpha times u s bar beta, times exponential i k (x minus y).
So now, do you recognize this object? Anybody? So this is the object we discussed last lecture. This is the projector to the space of the positive energy solutions-- the projector to the space of the U solutions. So it appears here. And I mentioned that you can work it out-- this is given by this form. So try to check your notes of last week, on Monday. So now, if you plug this in, then you get the integral over d3k, and i, ik slash plus m. So this is a matrix, and you take the alpha beta component. And then exponential ik x minus y. So now, remember, in a Fourier transform, any factor of k in the integrand can be taken out in terms of a differential operator. So we can actually rewrite this as-- take this i out, and replace this by partial slash, but with the derivative on x, plus m, alpha beta, and then d3k over 2 pi cubed. So you can just take this outside of the Fourier transform and then replace k slash by the partial derivative on x. When you take the derivative on x, you bring down a factor of ik. So ik can be replaced by partial x. But now, do you recognize this guy? Yes? What is this? AUDIENCE: [INAUDIBLE] function? HONG LIU: Yeah, exactly. This is the Wightman function for a scalar field. So now we can just write it as i partial x slash plus m, alpha beta, times D plus x minus y. And this D plus x minus y is the vacuum expectation value of phi x phi dagger y for a scalar field, say, a complex scalar. So we see that there's actually a very nice relation between the complex scalar Wightman function and the fermionic one. They just differ by this factor. So, similarly, you can work out other kinds of correlation functions. So let me just write down the result for the others. So you can also define D minus alpha beta x minus y to be-- you just exchange the order between them.
So, in general, they don't commute. So this is 0, psi bar beta y, psi alpha x, 0. And then, you find that in this case it is given by minus i partial x slash plus m, alpha beta, times D minus x minus y. The D minus, again, is defined as for the scalar field-- you exchange these two. And for the retarded one, you can define D R alpha beta to be theta of x0 minus y0 times-- and now you define the retarded function using the anticommutator rather than the commutator we used before. So when we defined the retarded function for the scalar, we used the commutator. But now you use the anticommutator. And then you find that this is given by, again, i partial x slash plus m, times the corresponding retarded scalar 2-point function. So, finally, we can also define the time-ordered Feynman function, DF alpha beta. So this is defined to be the time-ordered 0, psi alpha x, psi beta bar y, 0. So this time-ordering is defined as follows. So, again, for fermionic fields you replace the commutator by the anticommutator. So remember, previously, when you change the order-- if x0 is greater than y0, then you just maintain this order. So you just have 0, psi alpha x, psi bar beta y, 0. But when y0 becomes greater than x0, you exchange the order between them. And now you add an extra minus sign for the fermion. So for a boson, you would just have this, but for a fermion, you add an additional minus sign. Then you can show that this is the same as, again, i partial x slash plus m, alpha beta, times the scalar GF. So now-- if you want to do Feynman diagrams, exactly as we discussed before, we often need the momentum space expression. So if we go to momentum space, we essentially just do a Fourier transform of x minus y. So this is a function of x minus y. Do a Fourier transform, and then you get DF of k. So now I will suppress the alpha beta indices.
And now you treat this as a matrix in the spinor space. So you can just Fourier transform this guy. This is easy-- we know how to do this Fourier transform, and it just gives us a factor of k. So essentially, you just get ik slash plus m. So i partial x slash just becomes ik slash. And then, we just plug in the expression for the scalar propagator, given by this. So this is a c number; this is a matrix. And now, if you remember this formula that we discussed last time: ik slash plus m times ik slash minus m is equal to minus k slash squared minus m squared, which is equal to minus k squared minus m squared. And so, you see that k squared plus m squared is essentially the product of these two, up to a minus sign. So this is a product of two matrices, and the right-hand side is this number times the identity matrix-- you should view this as times the identity matrix. And you see that, up to a constant, these two are inverse matrices of each other. So we can actually rewrite this as just minus the inverse of this matrix, ik slash minus m, with the m minus i epsilon prescription. So this ik slash minus m is, up to that constant, the inverse matrix of this. And so, using this equation, you find it. Good? So now we conclude with the key momentum space expression-- this one. So let me just write this in a very prominent position, because this will be used over and over. So this is a matrix, given by that. Any questions? Yes. AUDIENCE: Is it still true that the propagator is the Green's function of some wave operator for the Dirac equation? HONG LIU: Oh, you mean when the Dirac operator acts on it you get some delta function? AUDIENCE: Yeah. HONG LIU: Yeah. AUDIENCE: All right. HONG LIU: Other questions? AUDIENCE: That's not the same epsilon, right? HONG LIU: Hmm? AUDIENCE: The epsilon is really different physically? HONG LIU: No, the epsilon is the same. It's similar.
It's just the m minus i epsilon. Yeah, it is always the same, whether it's m squared minus i epsilon or m minus i epsilon. Here, you only have one factor of m. Good? So let's conclude our discussion of the quantization of the Dirac theory. And now we go to the next topic, which you have already asked about several times. But we only have a couple of minutes to motivate it and discuss it. So the goal is chiral fermions. And [INAUDIBLE]. So, yes, we will not have time to really start discussing it, so let me just say a few remarks. So we derived before that the Dirac spinor-- so far, what we discussed is normally called a Dirac spinor. They transform under Lorentz transformations in such a way that the Dirac equation is covariant. So it's a natural question-- the Dirac spinor has eight real components. You have four complex components, so altogether there are eight real variables. So it's natural to ask whether you can reduce the Dirac spinor to a smaller set-- instead of eight independent variables, a smaller set that still gives rise to a Dirac equation which transforms covariantly under Lorentz transformations. And the answer is yes. So there are two ways of doing it. One is called chiral fermions, and the other is called Majorana fermions. The chiral fermion is a two-component complex vector, so altogether you have four real components. Or you have the so-called Majorana fermion, which has four real components. So we will discuss that next time. Good, that's all for today.
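The matrix identity behind the momentum space propagator discussed above can be checked numerically. Here is a small sketch with my own choice of explicit matrices: starting from the standard Dirac-representation gamma matrices and multiplying by i gives matrices satisfying the anticommutator 2 eta with the mostly-plus metric eta = diag(-1, 1, 1, 1) (my assumption about the course conventions), and then (ik slash + m)(ik slash - m) equals minus (k squared + m squared) times the identity, so the two factors are indeed inverses of each other up to that constant.

```python
import numpy as np

# Pauli matrices and building blocks
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
Z = np.zeros((2, 2), dtype=complex)
I2 = np.eye(2, dtype=complex)

# Standard Dirac-representation gammas (mostly-minus metric)
g_std = [np.block([[I2, Z], [Z, -I2]])] + \
        [np.block([[Z, s], [-s, Z]]) for s in (s1, s2, s3)]
# Multiplying by i gives {Gamma^mu, Gamma^nu} = 2 eta^{mu nu},
# with the mostly-plus metric eta = diag(-1, 1, 1, 1)
Gamma = [1j * g for g in g_std]
eta = np.diag([-1.0, 1.0, 1.0, 1.0])

rng = np.random.default_rng(0)
k = rng.normal(size=4)      # k_mu, lower index, arbitrary off-shell momentum
m = 1.7
kslash = sum(k[mu] * Gamma[mu] for mu in range(4))
ksq = k @ eta @ k           # k^2 = eta^{mu nu} k_mu k_nu

# (ik_slash + m)(ik_slash - m) = -(k^2 + m^2) * identity
lhs = (1j * kslash + m * np.eye(4)) @ (1j * kslash - m * np.eye(4))
rhs = -(ksq + m**2) * np.eye(4)
assert np.allclose(lhs, rhs)
```

So dividing both sides by minus (k squared + m squared) shows that, up to that scalar, ik slash minus m inverts ik slash plus m, which is how the spinor propagator collapses to the compact matrix-inverse form quoted in the lecture.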
MIT_8323_Relativistic_Quantum_Field_Theory_I_Spring_2023: Lecture_19_Path_Integrals_of_Fermions.txt

[SQUEAKING] [RUSTLING] [CLICKING] PROFESSOR: So first, I'll try to remind you what we did at the end of last lecture, which was last Wednesday. We introduced these things called the Grassmann variables, which are anticommuting. Classically, they anticommute-- so, for example, if you have, say, a quantity called theta, then theta squared is equal to 0. And two such Grassmann variables anticommute with each other. And if you consider a function of such a theta, then you just do a Taylor expansion, and you only have two terms. The first term is a constant, and the second term is proportional to theta, because all higher power terms vanish. The same thing with multiple variables-- just expand until you reach the square of any variable. And then you can do differentiation. So we always define the differentiation from the left. So this is given by f1. And we can also define the integration, and the integration is determined by two rules. The first rule is d theta of a constant equal to 0, and also d theta theta equal to 1. OK, so based on these two rules, you can just work out the integral of any function. So before I proceed, do you have any questions on this? Good. So for a multivariable function, you just expand. Say, for example, if you have two variables, then you just have f0 plus f1 theta 1 plus f2 theta 2 plus f1,2 theta 1 theta 2. We can also define the differentiation of a multivariable function easily-- you just have to be a little bit careful of the direction in which you take the derivative. So now, let's look at an example of the integration. OK, let's look at the integral of a function of two variables. So you should keep in mind that this order is important, because theta 1 and theta 2 anticommute. So if we write d theta 1 and d theta 2, then this d theta 1 d theta 2--
It's equal to minus d theta 2 d theta 1 if you want to change the order. So we can just do the integration by using this rule. You plug this expansion in here, and then, obviously, the first three terms give you 0, and only the last term contributes. So you just get f1,2 times the integral d theta 1 d theta 2 of theta 1 theta 2. Now, to do the integration-- because d theta 2 is closer to the integrand, it should meet theta 2 first, before theta 1-- you need to change the order. So we can do it as minus f1,2 times d theta 1, and then the d theta 2 integral of theta 2, and then theta 1-- changing the order of theta 1 theta 2 gives the minus sign. So this integral gives me 1, this also gives me 1, and the whole thing is just equal to minus f1,2. So similarly, you can do integrals with an arbitrary number of variables. Just keep in mind that d theta 1 d theta 2 of theta 2 theta 1 is equal to 1-- you do this one first, and then you do that one, and then you get the right order. Any questions on this? Yes. AUDIENCE: Here, f0 and f1, are those complex numbers, or Grassmann numbers, or both? PROFESSOR: Here, we just take them to be ordinary numbers. So they can just be complex numbers. That's right, yeah. Other questions? OK, good. So now we can look at a slightly more complicated integral-- the Gaussian. So let's look at such a Gaussian integral. This Gaussian integral looks complicated, but, of course, it's simple, because, again, when you expand it, there's only a single term left. So this is just equal to d theta 1 d theta 2 of 1 minus theta 1 a1,2 theta 2, because the next term in the Taylor expansion would involve theta 1 squared or theta 2 squared and so gives you 0. And now if you use this rule, you just get a1,2.
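The two-variable manipulations just described are easy to mechanize. A minimal sketch (the function name and the coefficient-tuple representation are my own): a function of two Grassmann variables is fixed by four ordinary coefficients f0, f1, f2, f12, and the Berezin rules give the integral d theta 1 d theta 2 of f as minus f12, reproducing both the minus f1,2 result and the Gaussian a1,2 result above.

```python
# f(t1, t2) = f0 + f1*t1 + f2*t2 + f12*t1*t2, stored as a tuple of coefficients
def berezin2(f):
    """Integral d t1 d t2 of f.  Only the top term survives; d t2 acts first,
    so t1*t2 must be reordered to -t2*t1, which produces the minus sign."""
    f0, f1, f2, f12 = f
    return -f12

assert berezin2((0, 0, 0, 1)) == -1        # integral of t1 t2 is -1
assert berezin2((0, 0, 0, -1)) == 1        # t2 t1 = -t1 t2 integrates to +1
assert berezin2((1, 5, 7, 0)) == 0         # lower-order terms integrate to 0
a12 = 0.7                                   # exp(-t1*a12*t2) = 1 - a12*t1*t2
assert berezin2((1, 0, 0, -a12)) == a12     # matches the Gaussian result above
```

The Gaussian line is the whole computation from the lecture in one step: the exponential truncates after its linear term, and the top coefficient is minus a1,2, so the integral returns plus a1,2.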
And so you can now generalize to Gaussian integrals with an arbitrary number of variables. So we can write a more general integral: d theta 1 through d theta 2n of exponential minus 1/2 theta i a ij theta j. So here, you should assume i and j are summed, from 1 to 2n. So for this integral, again, the strategy is simple. You just expand it and do it term by term. When you expand to a certain order, a theta will repeat and then the expansion truncates. But you can actually easily convince yourself, without doing any calculation, what the answer is. If you don't expand to a sufficient number of thetas, you will get 0, because you have to have one theta for each d theta in order to use this rule. So you have to expand to the order at which each theta appears exactly once. And if you look at that term, you find that this just gives you the square root of the determinant of a. Here, a is an antisymmetric matrix-- by definition it should be antisymmetric, because theta i and theta j anticommute. OK, any questions on this? So this looks like the standard Gaussian integral. Remember the standard Gaussian integral-- if you have this kind of quadratic structure for ordinary numbers, do you remember what you get? Yes. AUDIENCE: Yeah, it's like 2 pi to some power over [INAUDIBLE]. PROFESSOR: Exactly. Right. So if the thetas were ordinary variables, then you would get some constant divided by the square root of the determinant of a. But for Grassmann numbers, you get something proportional to the square root of the determinant-- not divided by it. Good, so we can also introduce complex Grassmann variables. So, for example, I can introduce theta equal to 1 over the square root of 2 times theta 1 plus i theta 2.
So theta 1 and theta 2 are considered to be two real Grassmann variables. i is just the ordinary i, and then theta star would be, of course, 1 over the square root of 2 times theta 1 minus i theta 2. And the rule for complex conjugation of a product is that theta eta, conjugated, is defined to be eta star theta star. So this is different from ordinary variables-- this is defined more like for an operator. So for Grassmann variables, when we define the complex conjugate, we actually reverse the order, as we normally do for a Hermitian conjugate. So now, if you have a function of theta and theta star, with theta now a complex Grassmann variable, you just do the same thing. So you have c0 plus c1 theta plus c1 bar theta star, plus c1,1 theta theta star. So again, you're just expanding in theta and theta star. And then you truncate here, because the further terms would involve either theta squared or theta star squared. So here, for the integration, we define the integral d theta d theta star of theta star theta to be 1. So we choose this convention, corresponding to a specific choice of measure for d theta d theta star. Good? So now we can look at the complex Gaussian integral. So if you have d theta star d theta of exponential minus theta star b theta-- so b is just some arbitrary number-- again, you can just do it by expanding in a power series. And just as in that example, you find that this is equal to b. And you can also now do multiple variables-- a multiple variable Gaussian. So suppose you have now the product over j equal to 1 to n of d theta j star d theta j. So if you have minus theta i star A ij theta j-- again, i and j should be assumed to be summed-- then when you expand it, again, only one term in the expansion contributes.
It's the expansion in which each theta and theta star appear exactly once. So when you look at that term, you find that this precisely gives you det A. So again, this should be contrasted with the ordinary complex Gaussian integral. If you have an ordinary Gaussian integral for complex variables, you will get 1 over determinant A-- some constant over determinant A. So for Grassmann, you just get determinant A. And this feature is very key in distinguishing fermions and bosons, and plays a very important role. So we can also consider more general Gaussian integrals like this. So now, let me just introduce a little bit of notation. So let me call theta equal to theta 1 through theta n, and let me introduce eta-- eta is some other set of Grassmann variables, eta 1 through eta n. And then we can consider an integral like this, which we will encounter later. Again, the product of d theta star j d theta j, and then you have the exponential of minus theta dagger A theta-- so now, the dagger includes both the transpose and taking the star, and A is just a matrix; again, we just write this in matrix notation. But suppose now, in addition to the quadratic term, I have some linear terms-- say, eta dagger theta and theta dagger eta. Again, you can just complete the square, through the Grassmann version of completing the square. And then you can convince yourself that this is given by the determinant of the matrix A, times exponential eta dagger A inverse eta. So up to the sign and i, et cetera, this is exactly what you would normally expect after you complete the square: you get A inverse here, sandwiched between the etas. Any questions on this? Yes. AUDIENCE: Is A still antisymmetric? PROFESSOR: It does not have to be antisymmetric. It can be some general complex matrix, in principle.
So I will also denote this by I of eta and eta star. OK, so we only need one last formula, and then we can talk about the path integral for the Dirac fields. So for the last formula, let's try to calculate the following integral, normalized by I 0,0-- I 0,0 just means this integral with both eta and eta star set equal to zero. So consider 1 over I 0,0 times the integral, j from 1 to n, of d theta j star d theta j, and now suppose we have theta k and theta l star in the integrand, and then the exponential minus theta dagger A theta. So an integral like this can be obtained from that one by taking derivatives. So this is like a generating functional. So this can be written as 1 over I-- yeah, let me just suppress the two zeros-- times minus partial over partial eta star k, and then partial over partial eta l, taking derivatives of I of eta and eta star, and then after you take the derivatives you set eta equal to eta star equal to 0. So when you take these two derivatives, you bring down the theta k and theta l star-- just make sure you get the sign correctly. And when you do this derivative here, you find that this is given by A inverse, kl. So this is actually the same as the bosonic case. Remember, in the bosonic case, if you have a Gaussian integral, and you have the two variables in the integrand, then that gives you the inverse of the matrix, up to a constant. Yeah, so this aspect is actually similar to the bosonic case. So do you have any questions? Yes. AUDIENCE: Sorry if I missed it, but is there a name for that I? PROFESSOR: Just the integral. Yeah. AUDIENCE: OK, it's just that integral? PROFESSOR: Yeah, just a shorthand notation. You can call it the generating function.
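The Gaussian formulas above are concrete enough to check by brute force. Below is a small sketch (my own toy implementation; the class name, the random matrix, and the seed are illustrative choices, not anything from the lecture): a Grassmann element is stored as a map from a sorted tuple of generator indices to a coefficient, multiplication tracks the anticommutation sign, and the Berezin integral is the left derivative applied with the rightmost differential first. With the measure written as the product of d theta star j d theta j, it reproduces both statements above: the Gaussian integral gives det A, and the normalized moment of theta k with theta l star gives the kl entry of A inverse.

```python
import numpy as np
from math import factorial

class G:
    """Element of a Grassmann algebra on generators 0..n-1, stored as a map
    from a sorted tuple of generator indices to a complex coefficient."""
    def __init__(self, terms=None):
        self.terms = dict(terms or {})
    def __add__(self, o):
        t = dict(self.terms)
        for k, v in o.terms.items():
            t[k] = t.get(k, 0) + v
        return G(t)
    def __rmul__(self, c):  # ordinary number times Grassmann element
        return G({k: c * v for k, v in self.terms.items()})
    def __mul__(self, o):
        out = {}
        for a, ca in self.terms.items():
            for b, cb in o.terms.items():
                if set(a) & set(b):
                    continue  # theta_i squared = 0
                seq = a + b   # sign = parity of inversions needed to sort
                inv = sum(seq[i] > seq[j] for i in range(len(seq))
                          for j in range(i + 1, len(seq)))
                key = tuple(sorted(seq))
                out[key] = out.get(key, 0) + (-1) ** inv * ca * cb
        return G(out)

def gen(i):
    return G({(i,): 1.0})

def gexp(x, kmax):
    """exp of an element with no scalar part; the series terminates."""
    out, p = G({(): 1.0}), G({(): 1.0})
    for k in range(1, kmax + 1):
        p = p * x
        out = out + (1.0 / factorial(k)) * p
    return out

def berezin(f, measure):
    """Iterated Berezin integral; the rightmost differential acts first,
    and each single integral is a left derivative."""
    for i in reversed(measure):
        out = {}
        for key, c in f.terms.items():
            if i in key:
                pos = key.index(i)
                nk = key[:pos] + key[pos + 1:]
                out[nk] = out.get(nk, 0) + (-1) ** pos * c
        f = G(out)
    return f.terms.get((), 0.0)

# theta_j = gen(j), theta*_j = gen(n+j); measure: product of d theta*_j d theta_j
n = 3
rng = np.random.default_rng(1)
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)) + 2 * np.eye(n)
quad = G({})
for i in range(n):
    for j in range(n):
        quad = quad + complex(-A[i, j]) * (gen(n + i) * gen(j))
measure = [x for j in range(n) for x in (n + j, j)]

Z = berezin(gexp(quad, n), measure)
assert np.isclose(Z, np.linalg.det(A))      # Grassmann Gaussian gives det A

M = np.array([[berezin(gen(k) * gen(n + l) * gexp(quad, n), measure) / Z
               for l in range(n)] for k in range(n)])
assert np.allclose(M, np.linalg.inv(A))     # normalized moment gives A inverse
```

The contrast with the bosonic case is visible in the first assertion: an ordinary complex Gaussian would produce a constant over det A, while the Grassmann one produces det A itself; the second assertion is the finite-dimensional version of the propagator formula.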
Yeah, so this is a generating function for arbitrary powers of this kind of integral, because any combination of theta k, et cetera, you can just obtain by taking derivatives of this I. Yes. AUDIENCE: [INAUDIBLE] PROFESSOR: Yeah, here, the complex conjugates-- in some sense, they are independent of each other. Yeah, they are independent variables. Theta and theta star-- because they depend on two variables-- you can just view them as independent variables. OK, good. So now, finally, we have the preparation to do the path integral for the Dirac fields. So remember, the Dirac field is a spinor field with four components, and they depend on the space-time coordinates. Normally, we would just say these are ordinary functions. But now, in order to correctly capture the fermionic nature of the Dirac field, I require that psi alpha x, with alpha equal to 1, 2, 3, 4, takes values in the Grassmann numbers. So what this means-- x is still an ordinary coordinate, and for any choice of space-time location x, this psi alpha gives you a Grassmann number. So this is a function from the ordinary space-time variables to the space of Grassmann numbers. And in particular, this means that psi alpha x and psi beta y anticommute. So this is the rule-- that's the only difference. And now we can just do the path integral. The path integral is completely parallel to before-- otherwise, exactly the same as what we did before for a scalar field. The only difference is the things we are integrating over. So in particular, we can write any correlation function. For example, again, let's use omega to denote the vacuum state. And then the time-ordered correlation function-- the x denotes some product of operators, in our previous notation-- taken between omega and omega, and divided by the omega overlap with omega.
So this has a path integral description in terms of D psi bar D psi, with x in the integrand, and then exponential i S. And then everything is exactly the same as before, except the variables that you are integrating over now are Grassmann variables, and S is just whatever your action is. So let me explain this notation a little bit. D psi just means that, first, you have the product over alpha from 1 to 4, and then you have D psi alpha of t and x. And this notation is exactly the same as before. When I write D psi, you should imagine I take the product over all components. And then for each of them, it's just exactly the same as the definition we use for the bosonic field. The only difference is that now these are Grassmann variables. And the x here is, say, some arbitrary product of psi's-- then everything previous takes over; you can just write down the expression. So again, in order to do calculations-- for example, to calculate the propagator, et cetera-- it's convenient to use the generating functional, as we do here for this Gaussian integral. So it's convenient to introduce this generating functional of eta and eta bar. So this is defined to be the integral D psi bar D psi of exponential i S, and then, say, eta bar psi plus psi bar eta. So again, when you take derivatives with respect to eta, you can bring down powers of psi or psi bar. And then, using the same trick, if you know this one, you can just calculate any x. Any questions on this? Yes. AUDIENCE: [INAUDIBLE] PROFESSOR: Yeah. AUDIENCE: [INAUDIBLE] PROFESSOR: Oh, no, you don't need to-- you can view this as a definition. We just carry over what we did before, and then we replace everything by Grassmann variables. Any other questions? Good. OK, so let's just try to calculate this in the free theory. So in the free theory-- so far, we just consider the Dirac theory.
So let's just first consider the Dirac theory. This is completely general-- S can be anything-- but for the Dirac theory, we have S0 equal to minus i times this expression. So for the Dirac theory, it's simple, because it's just quadratic in psi, and then we just have a Gaussian integral here. So the first thing to do is to write this, again, in the form of a matrix. We have already done this before. So this just becomes i times the integral d4x d4y of psi bar alpha x, A alpha beta x minus y, psi beta y. So this A alpha beta x minus y is a matrix both in the spinor space and in the space of ordinary functions. It involves delta of x minus y and a slash derivative on y-- so this means you take the derivative on y. So once you write it in this form, then, again, we have this in the Gaussian form. So now this path integral is just in the Gaussian form, and, again, we can just generalize what we did before. Just keep in mind that those integrals-- now this is given by determinant A. So, for example, now this Z0 of eta-- this means we consider the Dirac theory-- can be directly evaluated, and it is given by determinant A. So this determinant should be understood as both in the spinor space and in the space of functions. And then we have the analog of this term: exponential of eta bar dot D dot eta. So I denote A inverse as D in this notation. And this should be considered as a shorthand-- let me just write it here. So this should be considered as a shorthand notation for the integral d4x d4y of eta bar alpha x, D0 alpha beta x minus y, eta beta y. And this D0 alpha beta x minus y is just the inverse of the matrix A-- you should view it as the inverse of the matrix A. And that's the same as what we did in the bosonic case, and this will correspond to the Feynman propagator.
Using the same argument, you can show that this is corresponding to the Feynman propagator-- time-ordered propagator of psi. So everything is similar. Yes. AUDIENCE: [INAUDIBLE] PROFESSOR: Yeah. AUDIENCE: [INAUDIBLE] PROFESSOR: Yeah, so you can just view-- you don't have to think about gamma 0 here. You can just think about the psi-- eta bar just as an independent variable of eta. Yeah. Yeah, it doesn't matter. You can just treat it as some independent variable. Yes. AUDIENCE: Are we treating eta as a vector of Grassman variables? PROFESSOR: No, eta is Grassman. Yeah. Yeah, eta is Grassman. And in particular, eta depends on space time because we want to be able to take a derivative to get the psi x. And so there are-- yeah, eta has the same-- the eta is the eta 1x eta 4x. Yeah, same with eta bar. Yeah. AUDIENCE: [INAUDIBLE] PROFESSOR: Sorry, say it again. AUDIENCE: Are integrals always complex numbers at the end? PROFESSOR: No, no, no. The integrals are just some functions of the Grassmann variables. Yeah, just some functions of Grassman variables. Good. And then from here, from this expression, you can just find all the correlation functions in the free theory. So yeah, let me just quickly write it here. But I don't have-- yeah, I think there is not enough space. Yes. AUDIENCE: You said that the result of integrating over a Grassmann variable is another Grassmann variable, right? [INAUDIBLE] PROFESSOR: No, no no. It doesn't have to be. It depends on the situation. Here, you get a constant. Here, you get the function of a Grassman variable. It just depends on the situation. AUDIENCE: Oh, OK. So in the path integral, it'll all work out to give a C number at the end. PROFESSOR: It depends. If you have eta-- so if you don't have eta, then you get a C number. But here, if you have eta, then you get the function of eta. And eta are Grassmann variables. Yeah, mm-hmm. Good. So now, you can just obtain any correlation functions in Dirac theory. 
You can just take derivatives of this Z. So, say, psi x1 through psi xn-- and, again, with the previous convention, we always use 0 to denote the vacuum of the free theory and use this omega to denote the vacuum if you have the interacting theory. And so, again, you just take Z0 and do the derivatives-- up to a sign, just delta over delta eta x1 through delta over delta eta xn acting on this Z, and then you set eta equal to eta bar equal to 0-- exactly the same as before. Yeah, it depends on whether you have the bar or not: if it's psi, you take the derivative with respect to eta bar, and if it's psi bar, you take the derivative with respect to eta. And then, because the generating functional has this kind of structure, you always pair an eta bar with an eta-- with a D in between. So when you do that, you just get the sum of all possible contractions. So each contraction is just a propagator: if you have psi x1 and psi bar x2, the contraction just gives you D0 x1 minus x2. So now I suppress the spinor indices. So now you have to be careful-- all of these are anticommuting. So when you take the derivatives-- when you do the contractions-- you have to be careful about the order of the psi's. So, for example, look at the 4-point function: if you have psi x1, psi bar x2, psi bar x3, psi x4, then there are two possible contractions. You can have this contract with that and this contract with that, or this contract with that and this contract with that. So you have to be careful about the sign. And if you are careful, you find you get minus D0 x1 minus x2 times D0 x4 minus x3-- the minus comes from the fact that you need to exchange these two so that you have the form of a psi with a psi bar. And then you can also have plus D0 x1 minus x3 times D0 x4 minus x2.
So in this one, you can just change the order. So here, it should be time-ordered. And so the reason this plus sign is here is because you can shift this x2 by two positions to the right of the psi x4, and then these two will now become neighbors. And then psi 4 and psi 2 will be neighbors. So you just get the positive sign. So you just have to be careful about the sign. Any questions on this? Other than that, everything is the same as before. So now, you can also just do the interacting theory. Say, now, the correlation function in the interacting theory divided by omega-- and now, you just do perturbation theory. We just use that, and then you can write it as the correlation functions in free theory. Again, everything is the same, just now you have to be careful now the integral is over the Grassmann variables. So T-- again, here is T-- time-ordered. Other than that, everything is just the same as before. And again, you can show that a vacuum-- diagrams with vacuum bubbles-- cancel. It will cancel so you don't have to include the vacuum bubble. And then the epsilon prescription is the m goes to m minus i epsilon. OK, very similar to before. Good. Any questions? So now, we can just apply, identically, the formalism we developed before for interacting theory of scalar fields to here, except we just have to be careful about some signs, because when you exchange the fermions, you get some signs. And other than those subtleties of signs, everything else will be the same. But those signs sometimes can be annoying. So we can derive the Feynman-- we can draw the Feynman diagrams as before and write down the Feynman rules, et cetera. So now, let me just write-- yeah, just everything carries over. So let me just emphasize the difference from the scalar case. So the difference of Feynman rules from the scalar case-- so there's various differences. 
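Before moving on to those differences, the contraction signs worked out above can be checked directly in a finite toy model: replace the fields by a finite set of complex Grassmann generators with a Gaussian weight, so that the analog of the propagator is P[a, b], the moment of theta a with theta star b. The sketch below (a small Grassmann-algebra implementation of my own; the matrix, seed, and measure convention are illustrative choices, not anything from the lecture) verifies that the 4-point moment of theta a, theta star b, theta star c, theta d equals minus P[a, b] P[d, c] plus P[a, c] P[d, b], the same pattern of contractions and relative minus sign as the fermionic 4-point function above.

```python
import numpy as np
from math import factorial

class G:
    """Grassmann element: map from sorted tuple of generator indices to coeff."""
    def __init__(self, terms=None):
        self.terms = dict(terms or {})
    def __add__(self, o):
        t = dict(self.terms)
        for k, v in o.terms.items():
            t[k] = t.get(k, 0) + v
        return G(t)
    def __rmul__(self, c):
        return G({k: c * v for k, v in self.terms.items()})
    def __mul__(self, o):
        out = {}
        for a, ca in self.terms.items():
            for b, cb in o.terms.items():
                if set(a) & set(b):
                    continue  # theta_i squared = 0
                seq = a + b
                inv = sum(seq[i] > seq[j] for i in range(len(seq))
                          for j in range(i + 1, len(seq)))
                key = tuple(sorted(seq))
                out[key] = out.get(key, 0) + (-1) ** inv * ca * cb
        return G(out)

def gen(i):
    return G({(i,): 1.0})

def gexp(x, kmax):
    out, p = G({(): 1.0}), G({(): 1.0})
    for k in range(1, kmax + 1):
        p = p * x
        out = out + (1.0 / factorial(k)) * p
    return out

def berezin(f, measure):
    for i in reversed(measure):  # rightmost differential acts first
        out = {}
        for key, c in f.terms.items():
            if i in key:
                pos = key.index(i)
                nk = key[:pos] + key[pos + 1:]
                out[nk] = out.get(nk, 0) + (-1) ** pos * c
        f = G(out)
    return f.terms.get((), 0.0)

n = 2  # theta_j = gen(j), theta*_j = gen(n+j)
rng = np.random.default_rng(2)
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)) + 2 * np.eye(n)
quad = G({})
for i in range(n):
    for j in range(n):
        quad = quad + complex(-A[i, j]) * (gen(n + i) * gen(j))
measure = [x for j in range(n) for x in (n + j, j)]
Z = berezin(gexp(quad, n), measure)

# "propagator" of the toy model: P[a, b] = <theta_a theta*_b>
P = np.array([[berezin(gen(a) * gen(n + b) * gexp(quad, n), measure) / Z
               for b in range(n)] for a in range(n)])

# 4-point moment = sum of the two contractions, with the relative minus sign
for a in range(n):
    for b in range(n):
        for c in range(n):
            for d in range(n):
                lhs = berezin(gen(a) * gen(n + b) * gen(n + c) * gen(d)
                              * gexp(quad, n), measure) / Z
                rhs = -P[a, b] * P[d, c] + P[a, c] * P[d, b]
                assert np.isclose(lhs, rhs)
```

The first term is the contraction of the first field with the second and the last with the third, carrying the minus sign from the one exchange needed to bring psi and psi bar together; the second term needs two exchanges and so comes with a plus, exactly as argued above.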
So first, let's talk about-- remember, we have different Feynman rules for Green's functions and for scattering amplitudes. The scattering amplitude is the one in which the external legs are truncated; in the Green's function, you don't truncate the external legs. So let me first talk about the Green's functions. For the propagator-- again, the propagator is just represented by a line. But here, we have a complex field. So as we did for the complex scalar field, you need to assign an arrow to indicate the direction of the charge flow. So we always assign the arrow to flow from the barred to the unbarred side. So this will be x2, and this will be x1. Suppose this is beta; this will be alpha. So if I put the alpha beta here, this will be alpha x1 and beta x2, and then the arrow goes in this direction. So this is a charge arrow-- this is not the momentum arrow. You can additionally assign a momentum. So this is in the coordinate space, and this just gives you D0 alpha beta x1 minus x2-- so in the coordinate space. In momentum space, we don't label the locations-- we just have alpha and beta, and now you also have a momentum. You can choose the momentum in any direction you want. Often, for convenience, just as in the scalar case, we choose the momentum to be along the same direction as the charge arrow. But in the case in which you want the momentum arrow to be different, you can just draw a momentum arrow which is different. So in momentum space, the propagator is given by what we have already written before: minus 1 over ik slash minus m plus i epsilon, the alpha beta component. Yes. AUDIENCE: [INAUDIBLE] I thought of it as a particle traveling through space. PROFESSOR: Yeah.
AUDIENCE: Like we were contracting two different fields, so why do I still think about it as a fermion moving through space time. PROFESSOR: Yeah. AUDIENCE: [INAUDIBLE] PROFESSOR: Yeah, you can-- no, that aspect is the same as in the complex scalar case. AUDIENCE: Right, but even then-- PROFESSOR: No different from the complex scalar case. Yeah. AUDIENCE: But why do we still think of it as the same particle propagating even though it's interacting [INAUDIBLE] PROFESSOR: Yeah. Yeah, you can consider it as the particle propagating in one direction and then the antiparticle propagating in the opposite direction. You can interpret it either way. Yeah, the importance is the direction of the charge flow. Good. Yeah, so the momentum we emphasize. So the charge arrow is not arbitrary, but the momentum arrow is arbitrary. So momentum arrow-- you can choose whatever you want. So also, the rule we do for the external-- so the b-- so if we talk about the external point, we look for the propagator for the correlation functions. Say you have those external points, and some of them will be psi, some of them will be psi bar. Again, we assign the rule as this. For each external point, if it's a psi-- if the endpoint is given by the psi bar alpha x-- from the rule, the arrow always leaves the direction from the psi bar. Then we should draw it as alpha x. So this is the external point. And then, similarly, if you have a psi alpha x, this is-- and then the arrow will come in. So in momentum space, if we choose the momentum to be the same direction-- so we just have k, and we take k to be the same arrow. And here, again, it's the same thing. So this is the coordinate space, this is the momentum space. This is for psi bar, this for psi. Yeah, just follow the same rule. Yes. AUDIENCE: [INAUDIBLE] PROFESSOR: Yeah, we'll talk about that. Yeah. Here, I'm talking about the correlation functions. In correlation functions, you don't talk about the particles. 
You just look at, what are the external points? And when we talk about scattering amplitudes, then we talk about-- I haven't talked about scattering amplitude yet. Do you have any questions? Other questions? OK, good. So now the most important-- now you have spinors, and now you have components. Now the propagator is a matrix and each spinor is a vector, so you have to be careful about the indices. So the last thing you have to be careful is that now you have to-- so the spinor indices are contracted following the arrows, and I will explain what this means. So here, it's a little bit heuristic-- a statement is a bit heuristic-- but I will explain in detail what this means when we talk about scattering amplitudes. Because the scattering ampli-- so this aspect will be similar to the scattering amplitudes that we just explained in one place. So the scattering amplitude-- so the rule follows, again, from this LSZ, which I will not derive-- this Lehmann-Symanzik-Zimmermann reduction rule. And so you consider-- say you have some initial-- you have some final particles, some initial particles scattered into some final particles. And now we specify the particles not only by their momentum, but also by their polarization. So suppose that the p1 is the momentum of one of the first incoming particles, r1 would be its polarization and et cetera. And for antiparticle, we put a bar. So for the second particle, it's the antiparticle. Then we put r2 bar. OK, so that means this is an antiparticle. And similarly, for the final state, we specify it by momentum and the polarization. And if we do a bar, again, we mean the antiparticle. The scattering amplitude is where we want to calculate things like this with some initial momentum and the-- so to specify by the initial and the final momenta and the polarizations. And then the bar is for antiparticle. So between the scattering amplitude and the Green function, the only difference is how you choose the external lines. 
All the propagators-- these are the same. So now, remember in the scalar case, we just remove all the external lines. But here, we have to be a little bit careful because now they have the polarization. And now we should have things to specify the polarization of each particle. So now it's reasonable to expect that the polarization of each particle is specified by those u and v functions we derived earlier. So we truncate all the propagators for the external lines, but we have to assign polarization vectors. So need to-- assign polarization vectors for each initial and final particle-- particle or antiparticle. So let me just state the rule, and then I will motivate the rule. So for the initial state, if you consider the initial state-- then if you have a particle-- so suppose this particle has polarization r1. And then we draw a line like this with the arrow like that, and then assign the polarization vector, assign u r1. So suppose that this has momentum p1. And for antiparticle, say, for example, r2 bar-- say we just reverse the direction of the arrow. And then the polarization vector-- I use v bar r2 and p2. I always draw the momentum to be the same as the charge arrow. So this is for the initial state, and the final state is the following. So we just first write down the rule, then I will motivate it. So the final state-- if it's a particle, then the particle will come out. And this is s1 and, say, suppose this is k1. And then it's given by u bar s1 k1. So if it's an antiparticle, and then it's an arrow which is going out-- suppose this is s2 bar, then this is given by v s2 k2. So when you do the scattering amplitude, you forget about the external propagator, but you attach to each external line a polarization vector. OK, polarization vector-- just specify u and v according to this rule. So now, let's see why this is the rule. So remember-- so let's use this as an example. So this is the initial state, then that means this is a ket.
So if you remember that psi has the structure-- the au plus b dagger v-- then psi bar has the structure a dagger u bar plus b, say, v bar. So remember this structure. So for the initial state, let's try to motivate this rule. So initial state, you have a p1 r1. So suppose you have a particle like this. So this is obtained by-- so don't worry about the proportional factor. So this is given by a dagger of p1, r1 acting on 0. And then this you can obtain-- you see the psi is proportional-- so in order to have a dagger, you have to have psi bar. So this is given by-- it can be obtained from psi bar by multiplying psi bar, say, by something like this. So exponential of minus i p1 x-- yeah, so you can verify this, but, heuristically, you will understand-- psi bar x gamma zero u r1 p1, acting on 0. So because psi bar has lots of other things, you need to multiply this by this polarization vector so that it will extract the a dagger p1 r1 piece. And so this is the polarization vector you need to include here. And the reason this is the arrow which is going out is because this is psi bar. So remember, from this rule, the arrow always comes out from the psi bar. So similarly, you can understand the other rules. Any questions? Yes. AUDIENCE: [INAUDIBLE] PROFESSOR: Yeah, because you have a spin. Yeah, a spin has polarization. This is to characterize the polarization of the spin. Yeah. AUDIENCE: I'm just trying to think back-- is there a symmetry that causes us to need to specify this polarization? PROFESSOR: Sorry? AUDIENCE: So this is always the case for fermions. PROFESSOR: Yeah. Right, right. Yeah, this is always-- yeah. AUDIENCE: When I'm thinking back to the scalar piece [INAUDIBLE] PROFESSOR: Yeah, because there's no polarization. Yeah, for scalar, all this just becomes one because there's no polarization. Other questions? OK, so these are just rules you can understand.
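The extraction just motivated can be written as a worked equation. Schematically, up to normalization factors (the precise measure and normalization depend on the mode-expansion conventions used earlier in the course):

```latex
|p_1, r_1\rangle \;\propto\; a^{\dagger}_{r_1}(p_1)\,|0\rangle
\;\propto\; \int d^3x\; e^{-i p_1 \cdot x}\;\bar\psi(x)\,\gamma^0\, u^{r_1}(p_1)\,|0\rangle .
```

Since $\bar\psi \sim a^\dagger \bar u + b\, \bar v$, multiplying by $\gamma^0 u^{r_1}(p_1)$ and integrating against the plane wave projects out precisely the $a^\dagger_{r_1}(p_1)$ term, which is why the external polarization factor appears and why the charge arrow points out of the $\bar\psi$ end.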
And then this antiparticle initial state will be created by psi, and then you need to multiply by psi-- by v bar on this term to extract this b dagger term. Because the v and v bar are contracted, and then you can extract this b dagger piece. Yeah, so that's where this will come from. Yes. AUDIENCE: [INAUDIBLE] PROFESSOR: Yeah, in the real experiment, it's often-- it's not easy to observe the polarization. You have to have very special-- yeah. In general, you observe unpolarized particles. Other questions? Good. So again, now I can specify more precisely this spinor indices rule for the scattering amplitude case. So now you see the initial state, so now you have this u bar and you have u. So the rule for the spinor indices contraction is the following. So the spinor indices are contracted by starting at one end of a fermionic line-- the end with the external factor, either u bar or v bar. So you start from-- and then you go back, and then you go along the complete line following the arrow backwards. So right now, it's a little bit abstract. Now let me just explain this rule using an example. It will be very clear. So now let's consider the following example. So a very important example in the development of particle physics is this so-called Yukawa theory, which Yukawa proposed, I think, around the 1930s-- and for which he got a Nobel Prize. So let's consider you have a scalar field. So let's just say you have the Klein-Gordon-- Lagrangian density for scalar fields, and then you have a Dirac Lagrangian for psi. And then, suppose they interact via a term like this-- minus g phi psi bar psi. So this clearly is Lorentz invariant. And so, essentially, now you can just draw the propagator. So the propagators-- say, let's denote this to be the phi, and then these have the standard minus i over (k squared plus m squared minus i epsilon). So if this has momentum k-- so we also have fermionic lines.
I will draw it using a solid line, and this has the form minus 1 over (i k-slash minus m plus i epsilon). And then we have interaction vertices-- so minus ig. So because here, I have a psi and a psi bar-- one of the lines coming out, one of the lines coming in. So now, let's just consider a scattering process. When Yukawa considered this, the psi corresponded to, say, the proton-- and the phi corresponded to the pion. And he used this theory to explain the nuclear force between the protons. So now let's just consider the proc-- yeah, but you can also consider, say, psi is an electron or phi is a Higgs field, et cetera. So now let's just consider the process p p goes to p p-- let's denote the psi particle by p. So you have two particles in the initial state, and the particles in the final state are also particles. And now you have the one obvious diagram you can draw, because now the internal-- so the incoming line should be-- because the particles should go in, and then the final state, the particle should come out. So one possible diagram is like this. So suppose this is the p1. Polarization s1-- suppose this is p2 s2, and then this is k1. So let's call it p1 prime s1 prime and p2 prime s2 prime. So this is one of the diagrams which can do this, but we can also have this internal line connect with that line and this line connect that line. So you can also have a structure like this. So we have p1 s1 and p2 s2, then p1 prime s1 prime p2 prime s2 prime. OK, so these are the two possible diagrams, at lowest order. So now let's write down the expression corresponding to these two diagrams. So the first thing we can write down-- so each thing should give you a number in the end. So that's why-- but remember, all these are spinors. So in the end, all the spinor indices should be contracted with each other. So that's why when we do this rule, you always start with u bar and the v bar, because these are the row vectors.
In the end, you want to contract with column vectors. So that's why you always start with u bar and the v bar. So u bar and the v bar-- if you look at this rule here-- so the u bar corresponds to final particles, and the v bar corresponds to initial antiparticles. In both cases, the arrow is always going toward the point. If you start from here, you have to go backwards. So now let's look at this example. So we have two fermionic lines here. So one is this one and one is this one. OK, we have two fermionic lines. So for each fermionic line, we should start with the bar. So this is the outgoing particle, so that's corresponding to u bar here, s1 prime, and here corresponding to u bar s2 prime. And then this is corresponding to-- this is the initial particle, then this is corresponding to u s1, and this is corresponding to u s2. And so we just go backwards. So now we can just write it down very easily. So we have minus ig squared corresponding to two vertices, and for the first fermionic line, we have-- for the first diagram, we have u s1 prime bar p1 prime and then u s1 p1. And so these combine into a scalar-- a number. And now we have the propagator. The propagator is just a number-- minus i over ((p1 prime minus p1) squared plus m squared minus i epsilon). So in principle, they can have a different mass-- OK, sorry, here it should be m tilde. And then we multiply the other fermionic line. So this will be u s2 prime bar p2 prime and then u s2 p2. And now, for this diagram, we just follow the same rule. But now, this one is contracted with that, and this one is contracted with that-- so except we still have minus ig squared. So now this u s1 prime p1 prime is contracted with u s2 p2. OK, they are connected. Now they become a connected line. And, again, you have these indices, and now you work out the momentum, so this becomes minus i over ((p2 prime minus p1) squared plus m tilde squared minus i epsilon). And then you have u bar s2 prime p2 prime, and now you have u s1 p1.
Except one final thing: there's a relative minus sign between them. The reason there's a relative minus sign between them is because, between these two diagrams, it's as if I exchanged the order of two external legs. So remember, if you want to exchange the order of the fermions, you have to have a minus sign. And then you have to have a minus sign between the two. Yes. AUDIENCE: [INAUDIBLE] PROFESSOR: Oh. Yeah, of course. But we never write them down explicitly. So always assume momentum conservation. Yeah. So for this diagram, the momentum-- if I draw the momentum here, it will be p2 prime minus p1. Yes. AUDIENCE: So it's probably [INAUDIBLE] PROFESSOR: Minus p1-- that's right. Yeah, that's right. Good? So just like that, you just follow this rule. You just start with the line. Always start with u bar and v bar, and you just follow the arrow backwards. And keep track of all the gamma matrices and the spinor indices, et cetera. And then, eventually, you will just get a number. You will multiply with a column vector, and then you are done. So here, we are considering-- here, the vertex is very simple. It's just a number. And later, we will consider a more complicated vertex. And actually, the vertex can contain matrices. And so that will be a little bit more complicated. Also, this is a simple-- here is a simple example. We don't have a fermionic propagator. We only have a bosonic propagator here. Yes. AUDIENCE: How do you know which one's which? PROFESSOR: Yeah, it doesn't matter because the overall signs don't matter. Because in the end, we will square it. Yeah, so when you calculate the cross section, you have to take the-- yeah, in order to calculate the probability, you have to take the amplitude squared. So overall signs we don't normally bother to track. Yeah, you only need to worry about relative signs. Other questions? Yes.
AUDIENCE: Where did the use of Grassmann variables [INAUDIBLE] PROFESSOR: Yeah, the sign-- so the relative sign between them. That's the only place. For this example, that's the only place. Yeah, so that's the minus sign. And, of course, the spinor nature is reflected in that you always have to keep track of the spinor indices. AUDIENCE: [INAUDIBLE] PROFESSOR: Yeah. AUDIENCE: [INAUDIBLE] PROFESSOR: Yeah, indeed. And then you just miss this minus sign, essentially. AUDIENCE: Could you have gotten the minus sign if you just used the operators, right? PROFESSOR: Yeah, but you have to know that you have to exchange them to get the minus sign. You have to have that. Yeah, if you don't have that-- I think when Yukawa proposed it, he actually didn't know all this detail. He just estimated. And then he predicted the-- yeah, he wanted to explain the nuclear force between the protons, and then he said, oh, maybe the two protons-- or a proton and a neutron-- maybe they exchange a scalar particle. And he just postulated the exchange of a scalar particle. And then from the strength of the nuclear interaction, he estimated the mass of the particle-- the scalar particle-- and then they found it. Yeah. [LAUGHS] Yeah, they found such a scalar particle which was the pion. And that's how they first discovered the pion. Yeah. OK, good. Let's stop here today.
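Collecting the pieces above, the lowest-order amplitude for p p goes to p p in this Yukawa theory can be written compactly as follows. This is a sketch in the mostly-plus conventions used above, with $\tilde m$ the scalar mass; the relative minus sign is the fermion-exchange (Grassmann) sign just discussed, and an overall momentum-conserving delta function is left implicit:

```latex
i\mathcal{M} \;=\; (-ig)^2 \Big[
\bar u^{s_1'}(p_1')\, u^{s_1}(p_1)\,
\frac{-i}{(p_1'-p_1)^2 + \tilde m^2 - i\epsilon}\,
\bar u^{s_2'}(p_2')\, u^{s_2}(p_2)
\;-\;
\bar u^{s_1'}(p_1')\, u^{s_2}(p_2)\,
\frac{-i}{(p_2'-p_1)^2 + \tilde m^2 - i\epsilon}\,
\bar u^{s_2'}(p_2')\, u^{s_1}(p_1)
\Big].
```

Each bracketed bilinear is a row vector times a column vector, hence a number, so the whole amplitude is a number, as the contraction rule requires.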
MIT_8323_Relativistic_Quantum_Field_Theory_I_Spring_2023 | Lecture_22_Quantum_Electrodynamics.txt | [SQUEAKING] [RUSTLING] [CLICKING] HONG LIU: OK, great. So now let's talk about QED. And we have talked about the Maxwell action. So here is the Lagrangian density. We have minus 1/4 F mu nu F mu nu, minus J mu A mu, OK? And J mu should be a conserved current, for the consistency of the Maxwell equation, OK? Remember-- J mu has to be conserved. So now, the question is what provides this J mu, OK? So now, let's introduce some other fields. Imagine we have some Dirac fermions-- we have some fermionic field, say, with the Lagrangian. OK. So this is the Lagrangian for the Dirac fermion. And then we discussed that for Dirac fermion-- and then there's a conserved current because there's a global symmetry. There's a U1 symmetry corresponding to psi-- goes to exponential i alpha psi, with alpha a constant. OK? And then this leads to a Noether current-- J mu, which is conserved. OK? So now, if we want to couple the fermions to the Maxwell fields, then it's natural we identify this J mu with this J mu, OK? Because here we need the conserved current. And here we have a conserved current, OK? So it's natural to write down the theory. Then this J mu is just replaced by this J mu. That's why I use the same notation. So then that gives us the Lagrangian for QED. We have the Maxwell, which is the free photon. And then we have the free fermion. But then the fermion and the photon are coupled together through this, OK? So now, this is a cubic term, OK? So this is an interaction term, OK? And this e can play two roles, OK? It can be considered as a coupling constant-- essentially, the e, which just determines the coupling strength between the psi and the A, OK? If e is bigger, then of course, the coupling is stronger, OK? And the e can also be considered-- so it plays a dual role.
One role is you consider as a coupling. And the second role is you can consider as a unit of charge. OK? A unit of charge. Because when we talk about the Noether current, this is conserved. You can multiply this by an arbitrary number. This is still conserved, OK? This-- still conserved. And here it just means that the unit of this current is given by e, OK? So the e plays two roles here, OK? And then of course this e is just our standard electric charge. So if you take the psi to be electron, then this will be standard electric charge for the electron, and then this will be a theory which governs the interaction between the electron and the photon, OK? Any questions on this? Yes? AUDIENCE: How come different particles still have some multiple of the same charge? You might think that you could set the coupling constant to any arbitrary value. HONG LIU: Sorry? AUDIENCE: How come different particles have charges that are multiples and different multiples-- HONG LIU: Yeah, yeah. AUDIENCE: I think in this framework, you can set the coupling constant to whatever you like. HONG LIU: That's right. That's right. Yeah, yeah. So in this framework, indeed, this e can be arbitrary. And then different particles can have different charge. For example, if we couple this to electron, this will be e e. And then if we couple this to muon-- so in principle, we have some e tilde, which does not have to be the same as this e. Indeed. And it's a highly unusual feature. In nature, what we observe-- the charge seems to be quantized. There seems to be multiple of some charge. And that cannot be explained using this framework. It has to be explained using some other framework, sometimes called the grand unified theory-- can be used to explain that. Yeah. Other questions? AUDIENCE: So I'm confused. In this middle section, this J mu is the conserved current from the U1-- HONG LIU: Yeah. AUDIENCE: What was the analogy you were making? 
That you could take it to be the same J mu as for the Maxwell Lagrangian? HONG LIU: Yeah. Just the natural identification. AUDIENCE: But this Lagrangian doesn't have this U1 symmetry for this fermionic field because it doesn't even exist, right? HONG LIU: Yeah, yeah. Yeah, yeah. So no, I coupled them. AUDIENCE: OK. So you're just taking inspiration now to add both of them-- HONG LIU: Yeah, yeah, yeah. Really, that's what we do here. We add both of them. Yeah. Yeah, so we have a theory with a photon and fermion. And now, their coupling is through this J. And this J is given by this one. Yeah. OK? Other questions? OK, good. Good. So we can also slightly rewrite this Lagrangian in a somewhat different way. OK? So QED can also be written in the following way, OK? So we keep this minus 1/4 F mu nu F mu nu-- so this part, we don't change. But we can actually combine these two terms together in the following way. We write it as psi bar, (i gamma mu D mu minus m), psi, where I introduce a new derivative called D mu. And this D mu-- capital D mu psi is equal to partial mu psi minus ie A mu psi, OK? So this is the same as that because the first term here just gives you this term. And the second term-- this ie A mu psi term-- just gives you this e A mu psi bar gamma mu psi. Yeah. It just gives you that, OK? And so mathematically, these two are just completely equivalent. I just slightly rewrote it. But writing this way actually makes one new property of this theory manifest, OK? So this theory-- so we know that this theory has a gauge symmetry. But this theory only has a global symmetry, OK? This alpha must be a constant, OK? So now, when we combine them together, this new Lagrangian actually has a generalized gauge symmetry, OK? So L QED turns out to be invariant under the following. So still, you take A mu goes to A mu plus partial mu lambda x. And then now, you transform psi also by a local transformation.
You transform it by an exponential of i e lambda x. So now, this is different, OK? So now, you can transform psi by, actually, a local phase-- actually, now it can depend on this arbitrary function lambda. So you can show that actually, this QED is invariant under this transformation, OK? So if I call this A mu prime-- if I call this psi prime x-- OK? So you can check yourself. With this transformation, this D mu psi-- so this D mu prime, psi prime-- OK? So D mu prime is obtained from this D mu by replacing the A mu by A mu prime. And then here you can show that this is equal to exponential i e lambda x D mu psi, OK? So it turns out, in this combination, so when you make such a local transformation on psi, when you take a derivative, then you get an extra term, OK? You get the extra term from taking the derivative on this lambda. But when you have this combination with A mu-- but A mu also transforms with an additional lambda term-- a partial mu lambda term. And then these two terms just cancel, OK? So in the end, this D mu prime psi prime actually transforms in a very simple way, OK? Yes? AUDIENCE: Is D mu prime any different than just D mu [INAUDIBLE]? Are we changing things out [INAUDIBLE]? HONG LIU: Yeah, yeah. We replaced the A mu by A mu prime. AUDIENCE: Oh. HONG LIU: Yeah, yeah. Yeah. So this tells you-- and then this transforms very simply. And then this transforms very simply. And psi bar-- there's no derivative here. Then this straightforwardly just transforms with the exponential of minus i e lambda x. So we conclude that L QED is invariant under this transformation, OK? Indeed, it's invariant. OK. So we call it gauge invariant. So now, this is the generalized gauge symmetry. Now, you also transform the fermions. OK. Any questions on this? So now, we can turn it around. So let us review our logic. So we first start with the Maxwell theory.
So the Maxwell theory-- there's a J here, which you need to provide. And then from the fermionic theory, you have another J. And then we just combine these two theories by replacing that J naturally from the fermions. But it turns out, when we combine them together, this theory actually has a generalized gauge symmetry. You also now can transform psi locally, OK? You can transform psi locally with this nice structure, OK? So this theory has this nice structure. And then now, you can transform psi also locally. So we can also turn this around. So we can say-- so if we call this thing to be star-- so if we require the theory of A mu and psi to be invariant under star, and then that uniquely leads to-- maybe not uniquely. That leads to the interaction of the form J mu A mu, OK? It just leads to this kind of interaction. OK? So we can turn it around. We say, OK. Using a different logic, we want to couple the photon and the psi together. We want to couple A mu and psi together. And we know that A mu previously has a gauge symmetry. And now, I want to generalize that gauge symmetry to include the fermions. And then I require the full theory to be invariant under this symmetry. And this requirement then requires that the interaction between them must have this form, OK? Yeah. Let me write more explicitly the interaction of the form. Psi bar gamma mu psi A mu, OK? So that will lead to this kind of interaction. So the reason-- even though this is just rephrase of what we just did, this rephrase is actually powerful. OK? Here it means that when we impose this gauge symmetry, we actually can deduce the interactions. Deduce the interactions, OK? We can actually deduce the interactions. We can fix the interactions by requiring certain kinds of symmetries. Yes? AUDIENCE: Is there a natural way to think about what would motivate you to assert this local gauge symmetry? HONG LIU: Yeah. That's what motivates me. Maxwell theory-- you already have a gauge symmetry. 
And when I couple the fermion, I just want to generalize it. OK. Yes? AUDIENCE: I guess, instead of a question, I guess, why is it that we know that it should be a scalar basis? And why couldn't it be anything else? HONG LIU: Sorry? AUDIENCE: I guess why should we know-- if you start from the gauge symmetry going to the interactions logic, then why should we know that it should be a scalar function, that it does all those [INAUDIBLE]? Why couldn't it be the [INAUDIBLE]?? HONG LIU: Because we already have this, right? We just want to generalize this. The Maxwell theory-- we already have this. We just need to generalize this to fermion. And then there's a natural generalization because the fermion is already invariant under the global phase symmetry. We just need to generalize in the way so to make it local. OK. Yes? AUDIENCE: So in quantum mechanics, when we do a single particle in an electromagnetic field, the gauge transformation on the wave function is the same. Is there a reason why that's-- HONG LIU: Yeah, yeah, yeah. It's the same. Yeah, yeah. There is a reason. Yeah. The reason is that when you go to non relativistic case, this reduces just to one particle-- quantum mechanics. So one particle, quantum mechanics should also have this kind of symmetry. Yeah. Good. So this is, in fact, not just a reinterpretation of what we are looking at so far. This is actually a very deep dynamical principle, OK? So this is, in fact, a dynamical-- a deep-- let me emphasize that this is a deep, dynamical principle. So essentially, you have gauge symmetries, or in other words, local symmetries. Can be used to determine interactions, OK? OK. Or in other words, all interactions in nature-- turns out, they are related to some gauge symmetries. So this applies to all interactions in nature. So this principle applies to all interactions in nature, OK? So here we see, in the electromagnetic interactions, it turns out the same thing happens for the weak interaction. 
Same thing happens for the strong interaction. And the same thing happens for gravity. They all can be formulated as a consequence of some local symmetries. And why is that the case? We don't really know. We don't really understand why, somehow, this dynamical principle should be there. But this is just a fact that all our fundamental interactions-- they can all be understood this way, OK? So we roughly understand, but yeah. But going into that will be a long story. Yes? AUDIENCE: So people sometimes say that gauge invariance is an ambiguity of the theories-- HONG LIU: Yeah. AUDIENCE: So you can shift your fields by whatever, and the theory's invariant. How can an ambiguity in your description lead to something very physical, which is being interacted-- HONG LIU: Yeah, yeah, yeah. Exactly. So we emphasized-- so when you say ambiguity, this is what we said earlier, that the local symmetries, or gauge symmetry-- they just tell you your theory that the degrees freedom are redundant. OK? Some of the degrees freedom is not important. Then you say, this should be pretty artificial, right? They just tell you you have some redundant degrees freedom. We just get rid of them. But it turns out that they actually-- yeah. Yeah. So this is part of the mystery, OK? This is part of the mystery-- why, somehow, the gauge symmetry leads to the interactions. Yeah. So Yeah yeah. Yeah, it's a long story. Let me just briefly make some comments here, OK? So now, we more or less understand that all these interactions-- essentially, all these important interactions in nature, those fundamental interactions-- they're all governed by some massless particle, OK? They're all governed by some massless particle. But somehow, a massless particle-- you actually require to have some kind of redundancy so that you can describe them in a Lorentz covariant way. Without those redundancy, you cannot describe them in the Lorentz covariant way. So in the sense, that's where that comes from. OK? Yeah. 
Yeah, but to make the statement which I just said precise-- it's actually a long story. Do you have any questions? Yes? AUDIENCE: Would it have been possible to quantize the Maxwell theory just using the E field and the B field without involving any kind of gauge? HONG LIU: We don't know how to-- so we don't know how to-- there's no good way to quantize that to give you a nice quantum theory. And we also know that physically, A mu actually plays a role. And in situations like the Aharonov-Bohm effect, the A mu plays an important role, even when E and B are equal to 0. So A mu is a more fundamental object than E and B. So E and B should be considered as derived objects from A mu. Other questions? Yes? AUDIENCE: So here it was quite natural to extend to fermions to just have to identify-- the symmetry is already there. You didn't have to engineer it, I guess. HONG LIU: Yeah. AUDIENCE: But is there a situation where extending this symmetry to another field causes an issue when you're trying to fix your gauge like you had before-- increase your redundancy, I guess, when you're making the [INAUDIBLE]? HONG LIU: I'm not sure I understand your question. AUDIENCE: So here we had the issue in gauge fixing was from-- HONG LIU: No, no. We're not doing any gauge fixing here. We just have a more generalized gauge symmetry. We're not doing any gauge fixing. AUDIENCE: No, no, no. I'm saying, so that came out of the local gauge symmetry of our A mu's. But I'm saying when you extend that to the psi, for example-- HONG LIU: Yeah. AUDIENCE: --you get any other issues like we had from earlier or no? HONG LIU: What do you mean by other issues? AUDIENCE: So issues when you're fixing your gauge. Or is that just a step that we already did before that we don't have to worry about? HONG LIU: Oh, oh, oh, oh, oh. You mean when we-- no, no. That won't change much, because what we did before for the gauge fixing is already enough. Yeah. Yeah. OK? So now, let's talk about the other example.
So if you have a scalar field which is charged-- say, complex scalar field-- you can also couple it to the photon, OK? That's what we did for the fermions, OK? So let's consider if you have a complex scalar. So consider you have a complex scalar. Yeah. Maybe you can also have some interaction terms, OK? So let's imagine we have some complex scalar. And so this is invariant under phi. Again, there's a global symmetry. U1 global symmetry. Alpha equal to constant. OK? And then we have a conserved current here-- J mu equal to minus i. So again, your Noether current. OK. So now, you can just try to do the same trick, OK? So let's just couple this theory, combine this theory to the Maxwell theory, but identify that J with this J. So naively-- so the reason I say naively is because this procedure in this case will not work. So now, we can consider the scalar QED, say, with photon coupling, and again, with this minus 1/4 F square term. So let me just write the simplified version-- just write F square. And then you can add this scalar Lagrangian, OK? And now, you add the coupling e A mu J mu, but with J mu given by this Noether current, OK? So this is the natural thing to do based on our fermion story. But now, you can check. But now, you can easily check. OK? So I will not do the exact check here. You should check yourself. So let me call this L prime, OK? L tilde because this is our naive theory. You can check that this L tilde QED, scalar QED, is not invariant-- gauge invariant, OK? Meaning that you cannot find a transformation on phi, OK, such that this whole thing is gauge invariant. OK? And so for example, so it's not gauge invariant. And we can check explicitly: if you just copy what we did here, phi of x going to e to the minus ie lambda of x times phi of x-- this won't work, OK? So it's not invariant. OK. Yes? AUDIENCE: Sorry.
Can you just leave phi the same when you do-- HONG LIU: No, you cannot leave phi the same. Yeah, that's a very good question. So you cannot leave phi the same. So naively, you can ask yourself-- you say this is supposed to be conserved. And we previously said if we couple A mu to a conserved current, this is gauge invariant. But this is conserved only when you use the equation of motion of this theory. But now, when you couple these two theories together, then the equation of motion for phi changes. And then so yeah. So it no longer works, OK? And so just naively doing this-- naively, this is not guaranteed to work. It turns out in the fermion case, it just worked because of the existence of this symmetry, because of the existence of the symmetry. And it turns out, this theory-- so now, this theory will not be gauge invariant. And now, this would be bad, OK? It will be bad because you can no longer find any local symmetry under which this theory is invariant, OK? And this is one where the generalization does not work. You cannot find anything. You can also not find others. But this is bad because we said we need gauge symmetry to get rid of unphysical degrees of freedom in A mu because A mu should only have two polarizations, not four. So we need those gauge symmetries to get rid of the unphysical degrees of freedom in A mu. But if this gauge symmetry is broken by coupling to the scalar field, then this theory cannot be consistent, OK? So this theory is bad. OK. This theory is bad. And it turns out, we can just generalize this principle, OK? We say let's just impose gauge symmetry, OK? We can just impose gauge symmetry. So the way out-- we can just impose the gauge symmetry. Impose the theory to be invariant under this A mu goes to A mu plus partial mu lambda. And the phi goes to exponential minus ie lambda x phi, OK? So we require this to be invariant under that, OK?
So to write the theory invariant under this, we can just easily borrow the insight we already obtained from the fermionic case. But in the fermionic case, we saw that this derivative, this capital D, actually transforms nicely under such a transformation. When we make such a transformation, whether psi is a fermion or boson, and whether it's a scalar or spinor, does not play any role, OK? So we can now just introduce D phi. We can just introduce D phi equal to partial mu phi minus ie A mu phi, OK? We can similarly introduce D phi like this. And now, D phi then transforms nicely under this kind of transformation, OK? So D mu phi, OK? It transforms nicely. And since this transforms nicely, then we can easily write down the Lagrangian, OK? So the L scalar QED then can be written as just minus 1/4 F squared. And now, instead of the standard derivative, I use this derivative. I use this capital D derivative, OK? And then I have m squared phi star phi. And then I can have the potential of phi phi star, OK? And so this will be invariant because those don't matter because those don't involve derivatives, OK? And so yeah. So this is now gauge invariant. OK? So this D mu is called the covariant derivative, OK? Because it transforms nicely under this gauge transformation. So it's called the covariant derivative. So this capital D mu also has a very deep connection to mathematics. And so this is related to a subject, say, in differential geometry-- a deep connection to differential geometry, to a thing in differential geometry called the connection, OK? Called the connection. And yeah, sorry. We're not going into that. Yeah. So now, let's compare this theory with that naive theory. So you can check that L SQED is almost that L tilde SQED except this has one more term. So the difference with the fermion case-- the fermion only has one derivative, OK? And this one does not have a derivative, OK? And here you have two derivatives. And each derivative gives you an additional A mu, OK?
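As a quick check (a sketch in one self-consistent sign convention -- the signs multiplying e and lambda on the board may differ by a convention choice), the covariant derivative indeed transforms by just the phase:

```latex
D_\mu\phi \equiv (\partial_\mu - ieA_\mu)\phi, \qquad
A_\mu \to A_\mu + \partial_\mu\lambda, \qquad \phi \to e^{ie\lambda}\phi
\;\;\Longrightarrow\;\;
D_\mu\phi \;\to\; e^{ie\lambda}\bigl(\partial_\mu\phi + ie\,\partial_\mu\lambda\,\phi
  - ieA_\mu\phi - ie\,\partial_\mu\lambda\,\phi\bigr)
  = e^{ie\lambda}\,D_\mu\phi .
```

Since D mu phi only picks up the same phase as phi itself, the combination (D mu phi)* D^mu phi is gauge invariant.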
So when you expand this, so here there are four terms, OK? There are four terms. And three terms are encoded in here, OK? Three terms are encoded in here and in here. But there's one more term, OK? It turns out this gives one more term. OK. So actually, there's a quartic interaction between A mu and phi, OK? So this only introduces cubic interaction. So if you just have this J mu A mu, you only have cubic interaction. You only have 2 phi and 1 A. But here because of this structure, you actually have 2 A. Also, a term with 2 A and 2 phi, OK? And when you add this term, then the whole thing is nice, OK? Good. Any questions? Yes? AUDIENCE: Can I also add higher order terms like these quartic terms into the fermionic theory? HONG LIU: Sorry. Say it again. AUDIENCE: Can I also add quartic terms or some higher order terms into the QED-- fermionic theory? HONG LIU: No, no, no. You cannot add this term because this term by itself is not gauge invariant. The fermionic term is already gauge invariant by itself-- because there's no derivative here. And so the structure is simpler. Yes? AUDIENCE: So I guess to add to that question, can you carefully engineer higher order terms such that they're-- HONG LIU: Yeah, yeah. Yeah, you can engineer more complicated terms, but not this kind of term. Yeah. AUDIENCE: This is required for-- HONG LIU: Yeah, yeah. This is required for gauge invariance. Yeah. Yeah. For both the fermion and the scalar case, you can write down more complicated terms which are gauge invariant. And the one we wrote down so far is just the simplest one. Yeah. Simplest ones. Yes? AUDIENCE: So which one of these should I interpret as the electric current? J mu there or I guess J mu plus the A mu phi phi? HONG LIU: Yeah, yeah, yeah. So still, so when you interpret the electric current, it's still this J mu. Yeah. Other questions? OK, great. So now, so let me just make a comment.
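The counting of "four terms" can be made explicit by expanding the covariant kinetic term (a sketch with D mu = partial mu minus ie A mu; the overall signs of the e-dependent pieces depend on the sign conventions chosen for e and the current):

```latex
(D_\mu\phi)^* D^\mu\phi
= \partial_\mu\phi^*\,\partial^\mu\phi
\;-\; e\,A^\mu J_\mu
\;+\; e^2 A_\mu A^\mu\,\phi^*\phi ,
\qquad
J_\mu = -i\bigl(\phi^*\partial_\mu\phi - \phi\,\partial_\mu\phi^*\bigr) .
```

The derivative-squared piece plus the two A-J cross terms are the three terms already present in the naive coupling; the e squared A squared phi star phi piece is the extra quartic ("seagull") interaction with 2 A's and 2 phi's.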
So in this story, now the global charge before now becomes the-- just to emphasize again-- in the scalar case, again, the global charge for this global current has now become the electric charge coupled to the electromagnetic fields, OK? Becomes the electric charge coupled to the electromagnetic field, OK? OK, good. So let's draw the Feynman rules, OK? So from here we can just talk about the Feynman rules. So let's first do the fermionic case. So the rules for fermions are the same as before. The only thing we just need to write down-- the propagators are the same as previously for the Maxwell theory. Yeah, let's first do the QED. And the propagator for A mu and psi-- it's the same as before, OK? Because the propagator only cares about the quadratic theory. It doesn't care about interactions. Same as before. So we can draw a wavy line corresponding to A mu, OK? And then the solid line corresponding to, say, psi, OK? So this is related to the A mu propagator. This is related to the psi propagator, OK? And now, the interaction between them-- this interaction between them is given by this term. And then these have the structure. You have one fermion come in. You have two fermions and then coupled to a photon, OK? So this is the same, very similar to the Yukawa coupling. We saw that before-- just now, this one becomes a photon, OK? This one becomes a photon. And this effective vertex is given by minus ie A mu gamma mu, OK? So the minus comes from the minus here. The i comes from the i in the action, OK, when you expand the exponential of i times the action. And you have minus ie. And then psi, psi bar, and A mu are taken care of by those lines. And then you only have a gamma mu, OK? So the vertex here is the gamma mu. But pay attention. This gamma mu is actually a matrix in the spinor space. And so you have to be careful when you contract the spinor indices because now, these have spinor indices. And it's a matrix, OK? Good? So this is for the fermion. And oh, yeah.
So I should also mention-- for the external legs, so this is for the vertices and the propagator. And if we consider the scattering amplitude-- so for scattering amplitude, we mentioned it before. For fermions, we need to include the polarization vector. Similarly, for the photon, we also need to include the polarization vector for the external legs, OK? So for external legs, for scattering amplitude-- yeah, if you calculate the Green's function, it doesn't matter. You just include the external propagator. But for the scattering amplitude, then we need to include the polarization vector for photons. So for the initial state, supposing in the initial state, we have, say, some vector k and alpha. So alpha gives its polarization vector. And k is the momentum. And then we can denote it as-- so alpha denotes the polarization for the-- and then say we can have k. So this is the momentum index. So this is the momentum arrow, OK? And so this [INAUDIBLE] corresponding to-- OK. OK? And so this is for the initial state. Then for the final state, if you have a photon in the final state, like this-- and then again, we just have an alpha. And so normally, we draw the momentum arrow to come out if you have a final state. And then the polarization vector will be just this star, OK? Just the complex conjugate. OK, it's very easy. OK? Yeah. So when you write down the amplitude, you just have to be careful. For external photon legs, you have to include the polarization vector. So for scalar QED, it's very similar. So here we can introduce a dashed line that's corresponding to a scalar propagator. OK? So corresponding to the scalar propagator, OK? And now, this arrow now has a meaning because the scalar has a charge, OK? A scalar has a charge. And so this arrow is not the momentum arrow you can do arbitrarily. And so there are various-- and yeah. And so for scalar QED, let's look at the interaction with the photon. And then we have this vertex.
And we also have this vertex with J mu coupled to that, OK? So then that kind of interaction will have the following form. So this J mu coupling is easy. We can schematically write it down. So again, you have 2 phi and then coupled to a photon, OK? Coupled to a photon, OK? And so A mu will have an index. And so we can put the mu here, OK? And this vertex is given by minus ie k plus k prime mu, OK? Suppose the momentum here is k. And the momentum here is k prime, OK? So yeah, the reason you have this k plus k prime mu is because here there's a derivative acting on phi. And so this derivative acting on this phi gives you a k. And this derivative on the other phi gives you the k prime, OK? And you have the sum of these two terms. So that's why you have this kind of term for the interaction. Yes? AUDIENCE: Sorry. Going back to the fermion QED for a second-- HONG LIU: Yeah. AUDIENCE: Do you still also have, for the external legs, for fermions that are usually-- HONG LIU: Yeah, the same [AUDIO OUT] AUDIENCE: OK. HONG LIU: Yeah. So fermion-- external legs, exactly the same as before. AUDIENCE: OK. HONG LIU: Yeah. And yeah. So we only need to introduce new rules for photons. Yes? AUDIENCE: Still follow the same rules for starting from one-- HONG LIU: Yeah, yeah, yeah. That rule is exactly the same. AUDIENCE: If you got a [INAUDIBLE] interaction, [INAUDIBLE] in the right order? HONG LIU: That's right. That's right. Yeah. And now, you have to just be careful. Some of the vertices are now including matrices. Yeah. Yes? AUDIENCE: Doesn't the photon polarization have to be transverse to the momentum-- HONG LIU: Hmm? AUDIENCE: Doesn't the photon polarization have to be transverse to the momentum? HONG LIU: Yeah, yeah, yeah. They do. Yeah, yeah. For the physical photon, yeah. Yeah, but for the physical photon, indeed, yeah. AUDIENCE: Also, why is it complex? HONG LIU: Oh. You can choose it to be complex. Yeah. For example, if you choose a circular polarization, then it's a complex vector. Yeah.
For the one we wrote down, it's real. But then the complex case is the same. Yeah. Yes? AUDIENCE: [INAUDIBLE] this, but when you have an internal photon like in a diagram, do you sum over all four polarizations or just two? HONG LIU: Internal photon-- you always just use the photon propagator. And the photon propagator will include everything. Yeah. AUDIENCE: Thank you. HONG LIU: Other questions? Yes? AUDIENCE: For external vertices having matrices-- now, that's just a byproduct of the number of degrees of freedom you have per particle? HONG LIU: Yeah. Yeah. You mean the gamma mu? AUDIENCE: Yeah. HONG LIU: Yeah. It's just because of the spinor nature, right? OK, good. So this vertex-- so A mu J mu gave you this vertex, but we also have this vertex. And this vertex is easy. We essentially just have two fermions or two scalars, but this one with two photons, OK? With two photons. And so these have A mu A mu contracted. So if we write mu here and nu here, then we will have minus ie squared eta mu nu, OK? OK. Minus ie squared. OK? So minus i for the same reason, and the e squared for the same reason. And this A mu contracted, so you have eta mu nu there, OK? Good? And depending on what is V here, phi may have some additional interaction. But suppose V is given by phi 4. OK? Suppose V, say, is equal to lambda over 4 times phi phi star squared, OK? And then you will also have interactions like these-- four scalar interactions. OK. Yeah. You have two come in, two come out. OK? And this would be just minus i lambda, OK? Yes? AUDIENCE: Could you clarify-- the vertex with two scalars and one photon comes from? HONG LIU: Hmm? AUDIENCE: Can you explain where the vertex with two scalars comes from? HONG LIU: Oh, right. Right. It comes from this term. It comes from this term. So this is A mu times J. And the J has two phis here. AUDIENCE: Oh. HONG LIU: Right, right, yeah. Other questions? Yes? AUDIENCE: Would it be possible just to hypothetically, if I wanted to, give the photon mass?
I'd just add a mass term to the Lagrangian? HONG LIU: Yeah, yeah. OK. What's the question? [LAUGHTER] AUDIENCE: I'm sorry. Is it possible to add a mass term for the photon in the Lagrangian? HONG LIU: No, you cannot. No because that violates the gauge symmetry. AUDIENCE: OK. HONG LIU: Yeah. Yeah. So the gauge symmetry is what ensures the photon is massless. Yeah, so that's why this is something we don't want to break. Yeah. Other questions? AUDIENCE: Sorry. This might be a silly question. But if you have the scalar QED and then the fermion QED, can you add them? HONG LIU: Yeah. You can add them. AUDIENCE: So then it's the one theory and [INAUDIBLE]?? It's combined? HONG LIU: Yeah. Yeah, yeah. You can combine them just like you have a theory-- the electromagnetic field coupled both to charged scalar and the charged fermions. Yeah, you can certainly combine them. Yeah. The reason I'm separating them is just for convenience. AUDIENCE: So can we have an interaction term directly between the fermion term and the scalar term? HONG LIU: Oh, yeah. You can add whatever. You only have to make sure things are gauge invariant. And you can add arbitrary interactions you want. Just, you have to make sure they're invariant under those gauge transformations. AUDIENCE: So on the other question, can we make another theory that has U1 gauge theory, but with massive particles? HONG LIU: With massive particles? Massive photons? AUDIENCE: With the photons. We have something that is massive and coupled to the [INAUDIBLE]. Can we do that? HONG LIU: No. No. No. You're asking whether the massive photon coupled to fermionic field exists? AUDIENCE: Yeah. HONG LIU: OK. Yeah. So this question can be answered in several-- so if you just directly add a mass term-- so let's add a mass term for the photon. OK? So you can immediately check. This is not gauge invariant. 
So that's the reason we didn't add such a term to the Maxwell theory because in the Maxwell theory, if you add this term, it will violate the gauge symmetry. The same reason here. So you cannot add such a term, OK? So you cannot add such a term. So this is ruled out, OK? You cannot add this. But now, you say is it possible somehow, through some other way, that the photon actually gains mass, OK? That's possible. That's called the Higgs mechanism. That's how Higgs got a Nobel Prize. And that comes from this term. So now, you imagine somehow, this phi now has a vacuum expectation value. And now, phi has a constant part. If phi has a constant part, then this becomes a mass term for the photon, OK? And then the photon becomes massive. And actually, that's precisely what's happening inside the superconductor. So in every superconductor, something like this happens. OK? And the photon inside the superconductor is massive, OK? That's what's responsible for the London effect, Meissner effect, et cetera, OK? And in the superconductor, that's understood by Philip Anderson. He was a little bit unhappy because he thought he should have gotten credit for the Higgs effect because he understood it earlier. And yeah. But he did understand the mechanism earlier. Yeah, yeah. But Higgs got the Nobel Prize because he predicted the particle. Anyway, that's a long story I'm not going to go into. Yeah. AUDIENCE: Just a quick question. For the photon propagator, do we give the same factor? Because I don't think we said in the previous lecture that we give the same factor as for the scalar field. HONG LIU: Sorry. Say it again. AUDIENCE: For photon propagator-- HONG LIU: Yeah. AUDIENCE: --which factor do we give? HONG LIU: Which factor? AUDIENCE: Yeah. HONG LIU: You mean-- we just use the photon propagator. AUDIENCE: Yeah. Because I don't think we did it in this lecture. HONG LIU: We did. Yeah. Yeah, at the end of last lecture. Yeah, let me just write it down.
So the photon propagator depends on this parameter xi. For xi equal to 1, the photon propagator-- if I call it D mu nu, yeah. Anyway, let me just write it down. For xi equal to 1, this is just given by eta mu nu over k squared minus i epsilon. So this is for xi equal to 1. So for xi not equal to 1, so yeah. So this is for xi equal to 1. For xi not equal to 1, then there are some additional terms. Yes? AUDIENCE: I might be leaving the scope of this class, but the weak force has a carrier that has mass, right? HONG LIU: Yeah. AUDIENCE: And it's called a fermion. HONG LIU: Right. AUDIENCE: So that is possible to construct. HONG LIU: Yeah, yeah, yeah. Yeah. Again, this mechanism is what's responsible for the weak interaction being short ranged. AUDIENCE: I see. Oh. HONG LIU: So the weak interaction started as a massless particle-- AUDIENCE: Got it. HONG LIU: --but then through the Higgs mechanism and then the analog of A mu for the weak interaction becomes massive, and then becomes short ranged. AUDIENCE: Got it. That makes sense. HONG LIU: Yeah, yeah, yeah. And yeah, yeah. And yeah. Good. Other questions? Yes? AUDIENCE: In the standard model, is the Higgs the only scalar that couples to the photon [INAUDIBLE]? HONG LIU: So in the standard model, the Higgs is the only scalar field. Yeah. But you can have some other composite scalars, like pions. Fundamental scalars-- it's just the Higgs field. Good. OK, good. So this concludes the discussion of QED. And now, we're ready to study the physical processes in QED, OK? And before doing that, we need to develop a little bit of formalism because when we talk about physical processes, then we need to make connection. Yeah. QED is real life, OK? And then we can make connection with the real experiments. OK? To make connection with the real experiment, we need to develop one more thing. We need to develop one more thing. So we need to learn how to calculate the cross-section, OK? So let's talk about how to calculate the cross-section.
So this is a digression, OK? So first, let's remind you how we define a cross-section in nonrelativistic quantum mechanics, OK? In the scattering theory of nonrelativistic quantum mechanics. So in nonrelativistic quantum mechanics, you consider there is a target. The target is normally considered to be a point, OK? Here I just make it big so that we can see it. So here is the target. And then we define our axis-- the scattering axis, along which the particles come in. So this is some incident beam of particles. OK, coming into this direction. So say this is a Z-direction, OK? And moving in this direction to-- and then it will interact with this particle. And then it will scatter, OK? So the particle will scatter. And then we put the detector around this target particle to detect the scattered particle. OK? So for example, yeah. So you should view this solid angle here. OK? And so this is the phi direction. And so this is the theta direction, OK? So here is the theta direction. And this is some small solid angle, which the detector-- suppose there's a detector here, OK? Suppose there's a detector here. And then this detector subtends the solid angle, D omega, with respect to the target, OK? So this is the standard setup for the nonrelativistic-- scattering process in nonrelativistic quantum mechanics. So what we measure is dN dt of outgoing particles, OK? Scattered particles in the theta and phi direction, OK? So this is theta and phi. This is where the detector is located in the theta phi direction. So this is the number. So this dN dt is the number of particles-- number of scattered particles per unit time in the detector. It means registered by the detector, detected by the detector at the location theta phi, OK? OK? So the quantity we measure is this quantity. We just put the detector there. We just measure particles, OK? And clearly, this quantity depends on many things, OK?
This quantity depends on many things, depends on the interaction between the incident particle and the target, et cetera. But they also depend on many other kinematic factors, OK? So for example, clearly, it will depend on the number of incident particles. OK? So this is the number of incident particles per unit time. If you have more incident particles, of course, you will detect more outgoing particles, OK? And of course, this will also be proportional to D omega, the solid angle subtended by the detector. So if you have a larger detector, of course, you will detect more particles, OK? And then if we divide away those obvious kinematic factors, then we will get something more intrinsic to the interactions, OK? So we can consider-- we can divide by D omega and divide it by this thing. So we take this object, dN dt, divide it by D omega, the solid angle out, and divide it by the incident number of particles per unit time. So this should give us the probability of a particle to be scattered into the theta phi direction, OK? So this is the quantity which captures the effect of the interaction, OK? So this is a more intrinsic quantity. But experimentally, this is actually not the thing we directly measure in the experiment because in the experiment, it's not easy to count the number of the incident particles, OK? We always consider a beam of particles. We always consider a large number of particles, a beam of particles. So experimentally, it's more convenient. We often use the incident flux. It's the number of particles per unit time and per unit area, OK? So we just send the beam. And we just need to know the density of the beam, OK? So that will give us the flux, the number of particles ingoing per unit time and per unit area. So A is the cross-sectional area of the incident beam, OK? So now, this is the quantity. And now, we have to divide-- so instead of dividing by this one, we divide by this one. OK?
So now, this D sigma D omega, which is defined to be dN dt D omega out-- and then you divide it by incident flux, OK? Incident flux per unit time, OK? So that's the object we actually measure, OK? So some comments on this. So the first comment is that this has dimension of area. So because this thing is dimensionless-- this thing is dimensionless because the T's cancel, but now, we have divided by a flux, which goes like 1 per unit area. So an area comes upstairs. And then this has dimension of area. So the physical meaning of this, which I'm sure you learned in your nonrelativistic quantum mechanics class, is that this gives you the physical meaning of this object. So this gives you the effective area of interaction, OK? So heuristically, you can imagine the following. So if you imagine the incident particle, this target particle-- they actually have some short range interaction, OK? And then they no longer interact, say, outside some distance. And then heuristically, then this quantity can be considered as giving you the area of interaction around this target particle, OK? Around the target particle. So that's why this D sigma D omega is called the differential cross-section. So the area which is going into the direction of theta phi-- and the sigma total, which is defined to be D sigma D omega integrated over all angles, OK? So this is a function of theta and phi. And now, you integrate over all solid angles. And then that gives you the total cross-section. OK? And this effective area of interaction can be seen in a very simple classical example. So this definition actually is not quantum mechanical. You can also do it classically, OK? It's not restricted to quantum mechanics. This can be defined for any scattering event, including in classical mechanics. So for example, if you have classical scattering, say, of some bullets, off a billiard ball, and then you find the sigma total-- so suppose you have classical scattering, OK? And then you find the sigma total.
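In formulas, the definitions just described are:

```latex
\frac{d\sigma}{d\Omega}
  = \frac{1}{\Phi_{\rm inc}}\,\frac{dN}{dt\,d\Omega}\,,
\qquad
\Phi_{\rm inc} = \frac{dN_{\rm inc}}{dt\,dA}\,,
\qquad
\sigma_{\rm tot} = \int d\Omega\;\frac{d\sigma}{d\Omega}\,.
```

Since the flux carries dimensions of 1 per unit time per unit area, d sigma/d Omega indeed has dimensions of area.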
It's just the cross-section. It's just the total area, total surface area. Yeah. No. It's the cross-sectional area of the billiard ball, OK? Because classically, when you hit the billiard ball, then you hit the billiard ball, OK? If you don't hit the billiard ball, then you don't change direction. And then the total cross-section just is the cross-section of the billiard ball, OK? And so when you go to quantum theory, then this is the heuristic way you understand there's some kind of effective cross-section for this target, OK? And outside that area then you no longer have-- yeah. Heuristically, you no longer have interactions. OK. So now, let me just make two quick remarks in a relativistic theory. So now, we can generalize. So this discussion is in the nonrelativistic context. So in the relativistic context, we essentially just have D sigma cross-section from initial state to final state, OK? We no longer have-- so this is very specific. We have one particle scattering the other particle. And this particle scatters away, OK? But in the relativistic case, you can have two particles. You can create 10,000 particles, OK? And so this language no longer works, OK? So but still, you can define some kind of cross-section of initial state alpha to some final state beta, OK? And so we will be interested in the situation in which we have two initial particles, OK? Because experimentally, it's convenient just to have two particles scattering. So you have two initial particles. But final state, in principle, can be arbitrary, OK? And so this object-- since, in the relativistic theory, this should be physically observable, so this should be-- so because this essentially is a measure of the probability of the scattering, so we require this object to be Lorentz invariant. OK? And also, in the relativistic case, we just scatter two particles. There's no notion of what is the target or what is the incident particle. So these two particles should play equal roles.
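The classical billiard-ball statement -- that the total cross-section is just the geometric cross-sectional area pi R squared -- can be checked with a short Monte Carlo sketch (the function and parameter names here are made up for illustration; a uniform beam of straight-line particles is fired at a hard sphere of radius R):

```python
import math
import random

def hard_sphere_cross_section(R, n=400_000, L=2.0, seed=1):
    """Monte Carlo estimate of the total cross-section of a hard sphere.

    Particles travel along z with impact points (x, y) distributed
    uniformly over a square beam of side 2L; a particle scatters iff
    its impact parameter satisfies x^2 + y^2 < R^2.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x = rng.uniform(-L, L)
        y = rng.uniform(-L, L)
        if x * x + y * y < R * R:
            hits += 1
    # (scattered rate) / (incident flux) = hit fraction * beam area
    return (hits / n) * (2 * L) ** 2

R = 1.0
sigma = hard_sphere_cross_section(R)  # should land close to pi * R**2
```

With a few hundred thousand samples the estimate comes out within a percent or so of pi R squared, illustrating that sigma total is literally an effective area: dividing the scattering rate by the incident flux leaves a quantity with dimensions of area.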
So this should be symmetric in 1 2, OK? So if you exchange 1 2, this cross-section should be the same. OK? It should be the same. So now, if you consider-- but you can always consider in the rest frame, say, of particle 2, OK? So suppose we consider the rest frame. Suppose we go into the rest frame of particle 2. And then we roughly have this situation, OK? So this is particle 2. This is the particle 1, which comes to particle 2. And then in this situation, we can define the alpha to beta based on our nonrelativistic considerations. And then we consider-- so here I use the probability from alpha to beta dP, OK? So this essentially replaces that. OK? Here the final state only has one particle. So you can define a solid angle. But if we have 10,000 particles here in the beta, of course, I cannot define a solid angle, OK? So I can just talk about probability from alpha to beta, OK? From alpha to beta, but per unit time. And then again, we divide it by the incident flux of 1. OK? The incident flux of 1, OK? Oh. And let me just call it in. And 1 means the number of particles of 1, OK? So yeah. So what we will do is that we will first calculate this object in the rest frame of 2. And then we construct the Lorentz invariant version of this, OK? So that's our strategy to define what will be the relativistic generalization of this. OK. So we will discuss this next time.
MIT 8.323 Relativistic Quantum Field Theory I, Spring 2023. Lecture 11: Computation of Correlation Functions in Perturbation Theory and Feynman Diagrams. [SQUEAKING] [RUSTLING] [CLICKING] HONG LIU: OK. So let us start. So first, I'll remind you the master formula we derived before for the path integral. We said, if we consider the time-ordered anything, OK, so let me just call it x-- it will be, say, some product of operators. And then this can be written as, in the path integral form, D phi. And then so x, it just goes into here. And then you have e to the iS, OK? And then divided by the integral of D phi without the x, just e to the iS, OK? So this is the master formula we derived before, which applies to any theory, OK, in principle, any theory and any x you want, OK? And so things inside this x should always be time ordered, OK? So this T means time order. So now we can apply this to interacting theory. So for interacting theory, we have S. We write S. So the action to be the free theory action plus the interaction part, OK? And the interaction part, in turn, can be written as, say, the integration over the interaction part of the Lagrangian density. And it's also the same as minus the interacting part of the Hamiltonian integrated over time, OK? So the reason for this simple relation is because we assume the interacting part does not contain time derivatives. So essentially, the interacting part is just a potential in your Lagrangian density. And then in the Hamiltonian, they just differ by a minus sign. OK. So now we can just apply this to this case, OK? So let's see. So we can just write this object. So now we can write it as-- also come here. So our goal is that we assume the Si is small compared to S0 because we cannot solve this theory exactly. And so what we are going to do is we want to treat Si to be small and expand in powers of Si. OK, so an example of Si is this lambda phi 4, which we wrote down before. OK. So now we can just write this more explicitly as D phi.
Then you have x. Then you have exponential i S0. Then exponential i Si. OK? Then divide it by D phi exponential i S0, exponential i Si. OK? And now we can treat this as an integrand for the-- yeah, so this is the integration for the free theory. OK? So now we can integrate this just as an integrand for the free theory, as some, say, x prime. So x times the exponential i Si as another x for the free theory, and similarly here, OK? So we can write this as follows. So we can write this as now in free theory. So upstairs should correspond to the 0. Yeah. Yeah, let me just not worry about it at the moment. So now you can write it as the time-ordered x exponential i Si 0. Now this is in free theory. So I put a subscript 0 here. Means in free theory. OK? So in writing from here to here, so upstairs, we interpreted as a correlation function in the free theory. So here, we should also divide by-- yeah, in order to write in terms of the expectation value in the free-- yeah, so 0 here is the free theory vacuum. So in writing the upstairs in this form, we should divide this by a path integral just with the exponential i S0. But since upstairs and downstairs we need to divide by the same path integral, they just cancel, OK? So I can just directly write it down as the ratio of these two correlation functions. OK? And so now this is a very nice form, OK? Because so this is-- yeah, let me call this equation star. So this star is the exact expression. OK? Which writes the correlation functions of an interacting theory in terms of those of the free theory. OK? So now these are just some correlation functions in the free theory. So both upstairs and downstairs, they can, in principle, be evaluated as we discussed before in free theory. Yes? AUDIENCE: So I'm a little confused. What is there to time-order for the phase for the e to the i-- with the interaction? HONG LIU: Right. Right. As we discussed before, whenever you see a time-ordered exponential, you should always expand the exponential in a power series.
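To see this numerator-over-denominator structure concretely, here is a minimal sketch in a zero-dimensional Euclidean toy model: an ordinary integral with a Gaussian weight standing in for the free-theory path integral, and a small phi-to-the-fourth term standing in for the interaction. Everything here (the cutoff, grid, and choice of coupling) is an illustrative assumption, not part of the lecture.

```python
import math

# Zero-dimensional Euclidean toy of the ratio formula:
#   <phi^2> = ∫ dphi phi^2 e^{-S} / ∫ dphi e^{-S},  S = phi^2/2 + lam*phi^4/24.
# The Gaussian weight stands in for exp(i S0); the quartic term for the interaction.

lam = 0.01  # toy coupling; small so first-order perturbation theory applies

def average(f, lam):
    """Compute <f(phi)> by a simple Riemann sum on [-10, 10]."""
    num = den = 0.0
    n, half_width = 40001, 10.0
    dx = 2 * half_width / (n - 1)
    for i in range(n):
        x = -half_width + i * dx
        w = math.exp(-x * x / 2 - lam * x ** 4 / 24)
        num += f(x) * w
        den += w
    return num / den

exact = average(lambda x: x * x, lam)

# Perturbative ratio, as in the "star" formula, expanded to first order
# using the free Gaussian moments <phi^2>_0 = 1, <phi^4>_0 = 3, <phi^6>_0 = 15:
first_order = 1.0 - (lam / 24) * (15 - 1 * 3)  # = 1 - lam/2

assert abs(exact - first_order) < 1e-3  # agreement up to O(lam^2)
```

The numerator alone would give the moment 15; the denominator's moment 3 subtracts the disconnected piece, exactly the cancellation discussed below.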
And then you just have a bunch of factors. And you just time order them. Yeah. Other questions? OK. So yeah. And this is precisely the expression which is derived in Peskin. OK, so this is the-- so this equation is the 4.31 of Peskin and Schroeder, which they derived using the interaction picture, OK? So we see that for the path integral, deriving this equation using the path integral is just trivial. Essentially, you just split the exponential for the full theory into that of the free theory and the interacting part. And then you automatically derive this equation. OK? So once you have understood how the path integral works, then deriving this equation becomes trivial. And so this is actually general. Later, the same expression, we will be able to apply it for, say, when we include fermions and also photons, et cetera, OK? And now we will evaluate this guy using perturbation theory, OK? So we will expand star in a power series of Si. OK, we treat Si as small. And yeah, just expand it in a power series. So more explicitly, say, we consider the Gn, so this Gn for the full theory. So let me call this Gn, just say this is some n-point function. OK, so this is the Gn. And then, more explicitly, we can just write Gn as the 0th order. You just have T X 0, OK, upstairs. And the next order would be minus i dt say 0 T X H I. So I have expanded to the first order. In Si, when I expand Si, then it becomes dt times H I with a minus sign. So I just write it in terms like that, OK? And so remember here, you need to time order them together. Because both of them are in the same integrand, OK? So we need to time order them together, and et cetera. OK, so this is upstairs. And downstairs, you just expand this guy. So essentially, you have 1 minus i dt. Then you have the time-ordered interacting part of the Hamiltonian. And then plus higher orders. OK? So you just now essentially do the Taylor series expansion. And yeah.
And the first-- so you always assume the interacting part is small. So you can actually expand the downstairs also. So this will be reduced to, say, the first order. So the 0th order just gives you the free theory n-point function. OK, so I denote now with a subscript 0 means the free theory correlation function. OK? And then yeah, and then you can add the rest, et cetera. OK. OK, so the higher order perturbations. OK. Good? Any questions on this? Yeah. Actually, for later purpose, let me just write one more term explicitly. Let me just write it. So if I bring this up, so when I bring this up, then this becomes i 0 T X 0, then times dt 0 T H I. So this is this term we bring up. And then multiply that term. That will give you that. And then there's another first-order term given by minus i dt 0 T X H I 0. And so these are the full first-order terms, which are the corrections to the free theory n-point functions. And we need to evaluate those quantities. And all those correlation functions in the free theory can just be evaluated using the Wick theorem. Just use the Wick theorem over and over. OK. So any questions on this? So now we will develop techniques to evaluate such kind of series, OK? Still, even though this is not too difficult to do, when you do it in practice, it's pretty tedious. So it's still worth making an effort to actually simplify the process, OK, simplify the process. And the technique to simplify it is called Feynman diagrams. OK. So far, those equations apply for any theory, any kind of interaction, OK? So now let's work on a specific case. So let's consider Li equal to minus lambda divided by 4 factorial phi to the power of 4, OK?
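The bookkeeping in these first-order terms is just the first-order expansion of a ratio. A schematic check with placeholder rational numbers, where A0, A1, B1 are arbitrary stand-ins for the free n-point function, the upstairs first-order integral, and the downstairs first-order integral (the values are hypothetical, chosen only to exercise the algebra):

```python
from fractions import Fraction as F

# Hypothetical stand-ins for <T X>_0, the upstairs first-order piece,
# and the downstairs first-order piece:
A0, A1, B1 = F(2), F(5), F(3)
eps = F(1, 1000)  # stands in for the small coupling

full = (A0 + eps * A1) / (1 + eps * B1)   # numerator over denominator
first = A0 + eps * (A1 - A0 * B1)         # the two first-order terms

# The difference is O(eps^2): expanding the downstairs produces the extra
# -A0*B1 cross term, which is the "bring this up" term written above.
assert abs(full - first) < eps ** 2 * 10
```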
So it means that H I would be integrated over-- so H I, yeah, anyway, the others you can just directly write from here, OK? So let's consider this case. And to be specific-- and now let's just consider the simplest correlation function-- let's just consider the two-point function. Let's just consider the two-point function, G2. So the two-point function is also the Feynman function. But now this is the Feynman function in the interacting theory. So now we have two-- so imagine we have two coordinates-- and then it's equal to-- now. So let's consider this. So now let's consider the Feynman function in the full theory. And now we can just apply this equation. So let us try to calculate to the next order. So the 0th order, that's given by the free theory two-point function. So by translation symmetry, it's always a function of x1 minus x2. So this does not depend on whether it's in the free theory or interacting theory. And now let's just plug in here. Let's just plug in here. So H I is the integration of this over three-dimensional space. And then combined, this becomes just a four-dimensional spacetime integral. So then you just get the-- so this term becomes plus i lambda divided by 4 factorial. So we will assume this lambda is small so that we can expand in this lambda. And this term, just again, is the free theory correlation function. So we just have GF-- again, 0-- x1 minus x2. And then times-- my board is a little bit-- so we have to go through here-- times-- then you have d4 x, then 0, T, and phi to the power 4 x here. So this is multiplied with that. So that's the second term. And the third term is minus i lambda divided by 4 factorial. You have d4 x. You have 0, T. Then you have to combine them together. So this is phi x1, phi x2, and phi to the power 4 x-- and this x is integrated over-- and then 0. And so this is all time ordered. And then the rest is all order lambda squared, plus order lambda squared. Good. Any questions on this?
So yeah, let me just make a quick remark. So here, even though we only write down the first-order term, you can continue. So all terms in the expansion can be evaluated by repeatedly applying the Wick theorem. So any such kind of functions, they all factorize into two-point functions. And the two-point functions are essentially just given by GF 0, just the free theory two-point functions. So the final result always has the form of a sum over products of GF 0's, plus possible integrations. So, in general, the structure is pretty simple. They all just reduce to products of those two-point functions. So now, as an example, let's just evaluate these two pieces, evaluate these two pieces. So let's continue. So we find that the GF x1 minus x2-- yeah, so GF is the same as G2. So for the two-point function, this is just the Feynman function for the interacting theory. So the leading term is just GF 0. And then now let's evaluate this term. Now evaluate this. So this is all phi's contracted at the same point. So there are four phi's. We can just contract them among themselves. So we essentially get-- so you have GF. And then the next term is i lambda divided by 4 factorial. And then you have GF 0. So x1,2-- so let me use x1,2 to denote x1 minus x2 just to save a little bit of effort. And then we have this integration over d4 x. And then let's evaluate this. So this just reduces to two Feynman propagators. But both of them have 0. So essentially, you just have GF 0, 0, which is x minus x. So we just get that, x minus x. So let me just write one of them. But we are actually missing some factors because we have four phi's. So when you contract the four phi's, there are three different ways to group them into two groups. So we should have a factor of 3.
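The factor of 3 can be checked by brute force. The sketch below (an illustrative helper, not anything from the lecture) enumerates all perfect pairings of a list of field labels, which is exactly what the Wick theorem counts:

```python
def matchings(items):
    """Recursively enumerate all perfect pairings (Wick contractions)
    of a list of field labels."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for i in range(len(rest)):
        remaining = rest[:i] + rest[i + 1:]
        for tail in matchings(remaining):
            yield [(first, rest[i])] + tail

# Four phi(x)'s contracted among themselves: exactly 3 groupings.
assert len(list(matchings(["phi1", "phi2", "phi3", "phi4"]))) == 3

# More generally, 2n fields admit (2n-1)!! = 1*3*5*... pairings.
for n in range(1, 5):
    count = len(list(matchings(list(range(2 * n)))))
    double_factorial = 1
    for odd in range(1, 2 * n, 2):
        double_factorial *= odd
    assert count == double_factorial
```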
OK, so now let's look at this term. Now let's look at this term. So for this term, now we have to contract six phi's, so phi x1, phi x2, and then there are four phi's here. So we need to contract six phi's. So we can have two possible patterns. In the first, phi x1 and phi x2 contract with themselves. And then the four phi's have to contract among themselves. Or, if phi x1 contracts with one of the phi's here, then phi x2 also has to contract with one of the phi's here. So you have two possible patterns for contraction. So the first one is just to contract phi x1 and phi x2. So again, we have minus i lambda divided by 4 factorial. So when we contract x1, x2, then we just get GF 0 x1,2. And then we contract the phi 4 themselves. So that's what we did before. It's the same thing as above. So we just get a factor of 3, d4 x, and then GF 0 evaluated at 0, squared-- just the same as this. But then the other pattern, phi contracts with this-- so each of these, x1, x2, contracts with here. So in this case-- so let's imagine this contracts with one of them. Then there are four possibilities. So after this contracts with one of them, this contracts with that, then I have three possibilities. So you have 4 times 3 combinatorial factors. So you have minus i lambda divided by 4 factorial. Then you have 4 times 3. Then you have d4 x. Then here, we have GF 0 x minus x1-- so x1 minus x. It doesn't matter-- and GF 0 x2 minus x. And then the last one just contracts with itself, just again, GF 0, 0-- times-- and order lambda squared. So these are the explicit expressions. So these are the explicit expressions. So now notice that these two terms actually are the same, right? So these two terms actually are identical. And they just differ by a sign, so actually, they cancel. So these cancel. So we are just left with that term and this term.
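Both the 3 and the 4 times 3 above can be verified by enumerating all pairings of the six fields. This is an illustrative check, assuming the same brute-force pairing enumeration as before:

```python
def matchings(items):
    """Enumerate all perfect pairings of a list of field labels."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for i in range(len(rest)):
        for tail in matchings(rest[:i] + rest[i + 1:]):
            yield [(first, rest[i])] + tail

# phi(x1), phi(x2), and the four phi(x)'s from one insertion of H_I:
fields = ["x1", "x2"] + [("x", i) for i in range(4)]
all_m = list(matchings(fields))
assert len(all_m) == 15          # (6-1)!! = 5*3*1 pairings in total

def partner(pairing, label):
    """Return the label paired with `label` in a given pairing."""
    for a, b in pairing:
        if a == label:
            return b
        if b == label:
            return a

disconnected = [m for m in all_m if partner(m, "x1") == "x2"]
assert len(disconnected) == 3    # x1-x2 line times the figure-eight: factor 3
assert len(all_m) - len(disconnected) == 12   # the 4*3 connected contractions
```

The 15 pairings split as 3 + 12, matching the two patterns in the text.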
So this procedure can actually be represented diagrammatically. And I will just introduce a single line-- say, between x and y-- for GF 0 x minus y. So we just represent the line like that. So if I use that, then we can write them in diagrammatic form. Then the first term is just x1, x2. And the second term would be 3 divided by 4 factorial. So this is i lambda divided by 8. And then you have x1, x2, then times-- this contracted with itself. And so you have two propagators. One is contracted like that, and one is contracted like that. And this point is x. So you have x, but they all contract with itself. So it's x minus x for this one and x minus x for that one. And then similarly, of course, this is identical to that up to a sign, so minus i lambda divided by 8. So this is times, so x1, x2, x. And then the last one-- so this one corresponds to-- so now this is 12 divided by 24. This is minus i lambda divided by 2. So x1 to x, and x2 to x. And then you have something like this, x1, x2. So this here is x. So x1 x, x2 x, and then x to itself. And then you have order lambda squared. So this-- again, these two terms-- as we already said, these two terms cancel. And then you just get these two terms, just get these two terms. And we see that each factor of H I or each factor of S I you bring down corresponds to an interaction vertex like this. You have four legs or phi's. And so we call this the interaction vertex. So in practice, it's actually much easier to just draw the diagram first, and then from the diagram, to write down the analytic expression. Because the diagram is much more intuitive. Essentially, what's happening here, whatever the terms here, you just find all possible ways to contract them.
And diagrammatically, we just go through all possible ways to contract them. That's much easier to do by drawing diagrams than by doing the counting. So in practice, it's much easier to draw the diagrams first. You just draw all possible diagrams. And then from the diagrams, write down the expressions. So those diagrams are just called Feynman diagrams. So we just draw all possible diagrams. And then from the diagrams, we write down the expressions. That's called the Feynman rules. I give you a diagram. Then there's a rule which converts it into such kind of expression. So now let me give you an example. So now let's look at an example at the second order. So let's consider an order lambda squared example. At lambda squared, there are many terms. So one of the terms is just, say, you now have H I squared here. You also have downstairs, et cetera. There are many terms. But one of the terms-- yeah, let me just say here. So one of the terms is just x, and then you have H I squared here. This is one of the terms. There are also other terms. So let's now look at just that term as an example. So at the lambda squared order, there's a term like this. 1 over 2 factorial-- when you expand this to the second order, then this gives-- yeah, a 1/2. And then you have minus i lambda divided by 4 factorial, squared, because this coefficient is also expanded to second order. And then you have d4 x, d4 y, phi x1, phi x2, phi 4 x, phi 4 y here. So now one of the terms is like this. So later, we don't even need to write down such a term. We just immediately write down all the diagrams. Just here, to help you understand the process, let's look at this particular term. So now let's try to write down all possible contractions by just using diagrams, just by using diagrams. So now we have x1, x2, and the four phi x's and four phi y's.
So essentially, you have something like this. You have x1. You have x2. So these are the two external points. And then now you have four phi's coming from x and four phi's coming from y. So we need to find all possible ways to connect them. We find all possible ways to connect. So that's the goal. So yeah. So here are all the possible ways. So first-- there are actually five inequivalent ways. So let me just write them down. So the first one is you just connect one of them to x-- yeah, let me just draw them down. OK. So one of them is the following. So there are five of them. Let me write an equal sign here. So for each one of them, let me give a label for later use. So here, you can have, say, x1 and x2. Then they can connect to x. And then x can connect to y. So this is one of them. And the second possibility is x1 connects to x and x2 connects to y. And now there are three-- between x and y, we have to connect. So we have something like that. So this is the second. And the third way, you connect x1 to x2 itself, and then times-- and then you have to connect x and y. So there are four of them. Just connect x and y like this. You can connect them like this. Or you connect x1, x2 itself. And then you, again, have to connect x and y. Or you can connect-- then the y-- and that's just between x and y. You can also have that. So some of x is contracted with y, and some of x is contracted with itself. Or you have the last possibility. So you have x1, x2. But then the x just contracts with itself. And the y also contracts with itself. So any questions on this? So these are essentially all possible contractions you can have between them. We just draw the diagrams. But now we also have to find the combinatorial factors between them. We have to find the combinatorial factors between them.
So now let's try to work out the combinatorial factors. So let me just give you some examples. And then for the rest, I will just write down the answer because it just takes too much time to do all the examples one by one. So let's do this one as an example. So yeah, let me just erase here. We need space here. So here, actually, there are two possible diagrams. I can have x1 and x2 connected to x. I can also have x1, x2 connected to y. Because x and y are symmetric. So I only draw one of them. So that means there's a factor of 2. I can either connect to x or connect to y. So I have 1/2. So let me also copy this, lambda 4 factorial squared. And then I have a factor of 2. x1, x2 can either connect to x or connect to y. So x1 has four possibilities to connect to x, and x2 has three possibilities to connect to x. And then the remaining phi x has four possibilities to connect to y. And the other remaining phi x has three possibilities to connect to y. So essentially, those are your prefactors, combinatorial factors. And if you work it out, that gives you minus lambda squared times 1/2 squared. Yes? AUDIENCE: Why were there four-- so after we connect x1 and x2 to x, why were there four remaining? HONG LIU: Yeah. It's because now you have two phi's remaining, two phi x's remaining. So the two phi x's remaining have to connect with y. So the first phi has four possibilities to connect to y because there are four y's there. So that's this factor of 4. And then the other phi x has three possibilities to connect with y. And so that's the factor of 3. And once you do that, then this y just has to connect with itself. Yes. AUDIENCE: Do we have a diagram x1 to x, to x to y, y to x2, and x loop and y loop? HONG LIU: [LAUGHS] Which one? Say it again. AUDIENCE: Wait, so you should just draw it on the board. AUDIENCE: x1 to x. HONG LIU: x1 to x. AUDIENCE: And then one loop of x, x to y. Loop at y. And y to x2. HONG LIU: And y to x2. AUDIENCE: Oh, we didn't include the-- HONG LIU: Yeah, let me see.
Yes. I should also have a diagram like this. Yeah, it's possible. I think I just forgot it. Let me see. Do I have something like this? No, actually-- yeah, I think I forgot it. Good, good, good. This diagram is here. OK, so this is b prime. [LAUGHTER] Thank you. [LAUGHS] Thank you. So we can also have that. Yeah. Here, it's really for illustration. Actually, I should have claimed at the beginning, I didn't aim for completeness, just for illustration. Because if we try to be too complete, then that may take too much time. OK, good, good. This is also a possibility. Yes, good job. And so now let's look at this one. So now let's look at this one. So again, let's copy this: 1/2 times minus i lambda divided by 4 factorial, squared-- and then the combinatorial factors. Again, I have the freedom-- so x1 either connects to x or connects to y. Those two diagrams would be the same. And so I have two possibilities. Here, I only draw one of them because I can just flip them, because x and y are symmetric. So I have a factor of 2. And then x1 will have four opportunities to connect with the four of x. And x2 has four possibilities to connect with the four of y, because x2 connects with y. So I have two factors, 4 and 4. And after that-- so phi x has three left and phi y has three left. And then take one of the phi's, then it has three possibilities to connect to y. And the other one has two. And then that's it. Those are all the possibilities. And if you work it out, this gives you minus lambda squared divided by 3 factorial. And then-- so as an exercise, you should work out this one yourself. And yeah, which I didn't work out, but I did work out this one. So this one I also give you as an exercise: this is minus 1/2 lambda squared divided by 4 factorial squared times 4 times 3 times 2, equal to minus 1/2 lambda squared divided by 4 factorial. And this one is-- OK, minus 1/2 lambda squared divided by 4 factorial squared times 6 times 6 times 2.
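The combinatorial factors for the two connected diagrams can be checked by brute force. The sketch below is an illustrative assumption of mine: it enumerates all pairings of the ten fields, with "A" standing for the four phi(x)'s and "B" for the four phi(y)'s, and identifies a pairing's diagram by its multiset of vertex-level edges, modulo the x-y relabeling:

```python
from collections import Counter

def matchings(items):
    """Enumerate all perfect pairings of a list of field labels."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for i in range(len(rest)):
        for tail in matchings(rest[:i] + rest[i + 1:]):
            yield [(first, rest[i])] + tail

def signature(pairing):
    """Identify a pairing's topology by its multiset of vertex-level
    edges, modulo the x <-> y relabeling."""
    def collapsed(swap):
        def v(label):
            u = label if isinstance(label, str) else label[0]
            if swap:
                u = {"A": "B", "B": "A"}.get(u, u)
            return u
        return tuple(sorted(tuple(sorted((v(a), v(b)))) for a, b in pairing))
    return min(collapsed(False), collapsed(True))

# phi(x1), phi(x2), four phi(x)'s ("A"), and four phi(y)'s ("B"):
fields = (["x1", "x2"] + [("A", i) for i in range(4)]
          + [("B", i) for i in range(4)])
counts = Counter(signature(m) for m in matchings(fields))
assert sum(counts.values()) == 945   # 9!! pairings of 10 fields

# Diagram (a): x1, x2 -> x; double line x-y; self-loop at y.
sig_a = signature([("x1", ("A", 0)), ("x2", ("A", 1)),
                   (("A", 2), ("B", 0)), (("A", 3), ("B", 1)),
                   (("B", 2), ("B", 3))])
assert counts[sig_a] == 288          # = 2 * (4*3) * (4*3)

# Diagram (b): x1 -> x, x2 -> y; triple line x-y.
sig_b = signature([("x1", ("A", 0)), ("x2", ("B", 0)),
                   (("A", 1), ("B", 1)), (("A", 2), ("B", 2)),
                   (("A", 3), ("B", 3))])
assert counts[sig_b] == 192          # = 2 * 4 * 4 * 3!
```

The enumeration reproduces exactly the hand counts in the text: 288 for the first diagram and 192 for the second.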
So that gives you minus lambda squared divided by 16. So this final one is minus 1/2 lambda squared divided by 4 factorial squared times 3 squared, given by minus 1/2 lambda squared times 1/8 squared, that is, minus lambda squared divided by 128. Good? So yeah. So I write them down just for you to check yourself later. So I leave this as an exercise for you to do later yourself. So here, we can already observe some patterns. And you can do this for each individual diagram. But if you try to count it this way, as we are doing here, even though it's not difficult to do, it's still tedious. For each diagram you have to count the factors of 4, 3, et cetera. So it'd be nice to find a better trick. So now there's a better way to do it, a better trick. It's by noticing the following. So when you go to the lambda n-th order, there's always an n factorial coming from expanding the exponential. Because we are expanding the exponential. So at the n-th order, you have an n factorial from the exponential, but you also have n vertices. Because each power, each lambda, comes with a factor of these vertices. So now when you permute the n interacting vertices, you also get a factor of n factorial, because they're all symmetric, doing the contraction. If you have one way to contract, then you can have another way to contract by permuting all these different vertices. So then these two factors cancel-- modulo symmetries in permuting vertices. If there are symmetries in permuting vertices, then, of course, you don't get the full n factorial. Because some of the permutations give you the same diagram.
So, modulo symmetries in permuting vertices, you have this n factorial canceling. So this is the first observation. So the second observation is that for each vertex, which you have minus i lambda divided by 4 factorial phi 4-- so if each phi in this phi 4 contracts differently-- all these four phi's are symmetric with each other. There's not any phi which is special. But if each phi contracts differently, then again, when you permute the phi's, it should be the same-- that diagram should also be included. And then you can permute the phi's. Permutation of the 4 phi's then leads to a 4 factorial. So, again, this 4 factorial cancels with this 4 factorial. So the two 4 factorials cancel, modulo symmetries in permuting phi's coming from the same vertex. Again, if two phi's are contracted the same way-- of course, when you permute them you don't generate a new diagram. You don't generate new ways. So now we have two different permutations. One is to permute the vertices, and the second is to permute the phi's within each vertex. So that means both of these n factorials and these 4 factorials cancel. So this means: forget about the n factorial factor from the exponential. And we also forget about this 1 over 4 factorial. And then treat each vertex as just coming with a factor of minus i lambda. But this way, we over-count. So this way, we over-count. And then we need to divide-- so we need to divide by symmetry factors from permuting vertices and the legs. Yeah. OK. So now let's go back to this diagram. So now let's go back to this diagram. So in this diagram, there's no symmetry between x and y. Because x and y, they're not symmetric. So there's no symmetry factor associated with permuting the vertices. But for the vertex coming from x, these two legs are symmetric. So there's a factor of 2 from permuting them.
And for the ones coming from y-- again, there are two of them which are symmetric. So that's why we divide by two factors of 2, giving this 1/2 squared. And similarly, from here-- again, the x and the y, they are not symmetric, because one is contracted with x1 and one is contracted with x2. So there's no symmetry factor from permuting the vertices. But between x and y, there are three legs which are symmetric. When we permute these three legs, then we have a 3 factorial. So we have a 3 factorial. And similarly, I leave it as an exercise for you to do the other diagrams, to do the other diagrams. So now we are ready to just write down our rules. So now we have drawn the diagrams. And now we can use the diagrams to write down expressions. So the rules for writing down expressions from diagrams are called the Feynman rules, the Feynman rules. So here are the Feynman rules. These are all very intuitive. So for each external point-- so here, we have some external points. So each external point, you associate with a line. So here is, say, for example, x1. And then you just associate it with a factor of 1. But there's always a line coming out of the external point because you have to contract with that one. And then for each propagator-- for the propagator, we already wrote down-- say you have two endpoints, then that corresponds to GF 0 x minus y. So for each such propagator, you can just write down a factor of GF 0 x minus y. And then for each vertex-- I think I can erase them. So for each vertex-- so at point x-- you associate a factor of minus i lambda and then an integration over x. And then the last step, you divide by the symmetry factor, the symmetry factor for the diagram. So you just draw all possible diagrams using this kind of rule. So for each propagator, you have a line, and then you have vertices. So now, from each diagram, by applying these rules, we can write down the analytic expression.
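The shortcut can be checked against the explicit counting with exact fractions. A minimal sketch, using the contraction counts worked out above for the two connected second-order diagrams:

```python
from fractions import Fraction as F

four_fact_sq = F(24) ** 2   # the (1/4!)^2 from the two vertices

# Explicit-counting prefactors: (1/2!) * (1/4!)^2 * (number of contractions)
count_a = 2 * (4 * 3) * (4 * 3)   # x1,x2 -> x; double line x-y; loop at y
count_b = 2 * 4 * 4 * 6           # x1 -> x, x2 -> y; triple line x-y

pref_a = F(1, 2) * count_a / four_fact_sq
pref_b = F(1, 2) * count_b / four_fact_sq

# Shortcut rule: drop the 1/n! and the 1/4!'s, divide by the symmetry factor S.
assert pref_a == F(1, 4)   # S = 2 (double line) * 2 (self-loop) = 4
assert pref_b == F(1, 6)   # S = 3! from the triple line
```

Both routes agree: the explicit factorials collapse to 1 over the diagram's symmetry factor.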
Any questions on this? So it's an exercise for you to write down the analytic expression for each of them. So yeah, let me give you an example. So for b-- as an exercise, you should try to do each of them yourself. Let me just give you one example, for b. And then you first have the result, minus lambda squared divided by 3 factorial. And then you have d4 x, d4 y coming from these two vertices. Yeah, I should just write it like this. Sorry. Yeah, let me just strictly follow this rule. OK. So we have two vertices. So for each vertex, we have minus i lambda-- so minus i lambda, squared. Then you have d4 x and d4 y. And then for each propagator, we associate these. And then we have GF 0 x minus y, then cubed. And then finally, we divide by the symmetry factor, because there are 3 factorial permutations of them. So we divide by 3 factorial. So this is the analytic expression corresponding to that diagram. So later, you don't even have to write down the expression in the top line I wrote down there. At each order, you have a certain number of external points. And then you just have a number of vertices coming from the n-th order. You just have n vertices. So you just try to connect all possible lines between them. And then those are all possible contributions. Good? So this is the Feynman rule in coordinate space. It's also convenient to go to momentum space. So it's also convenient to go to momentum space. So to go to momentum space, we can define the Fourier transform. So let's define the Fourier transform. So suppose we have an n-point function. Suppose we have an n-point function Gn, x1 to xn. So this is just the standard Fourier transform. So because of the translation symmetry, we expect the total momentum must be conserved. Just from the conservation-- I will derive it, but you should also be able to expect it on general grounds from the translation symmetry. So we expect that there must be a factor of momentum conservation coming from here.
And then we call that coefficient Gn P. So our convention is that momentum space correlation functions are defined without this factor, without this factor. Yes? AUDIENCE: For the Feynman rules, shouldn't it also restrict to connected diagrams? HONG LIU: Oh, yeah. Yeah, we will talk about that later. Yeah, we'll talk about that later. Yeah. So here, I'm just talking about the general rule. And then we will talk about the finer points, how to enumerate all possible diagrams. OK. So when we define the momentum space correlation functions, we extract out this factor. So the reason to extract this out is momentum conservation. So to see that there is a factor like this is very easy. So because of the translation symmetry-- by the same argument that the two-point function of x1, x2 should only be a function of x1 minus x2-- so for the n-point function, from the translation symmetry, you can just take one point to be the reference point. You can subtract, say, all coordinates by the value of xn. And then the last argument becomes 0. So you can choose one point as a reference point and subtract it from every other point. And that should be equivalent. Because it doesn't matter where you choose the reference point. So here, let's choose the reference point to be xn. So this should be equal to that. So now we do a Fourier transform. So now do a Fourier transform. So you can easily convince yourself-- when you plug this into here, you get a delta function. So this is a simple exercise for yourself to do. And so if you plug it in here, you find that there's always a delta function. So now I record that the momentum space Feynman propagator for the free theory, if you say k, is given by minus i divided by k squared plus m squared minus i epsilon. So now you have a line, but now it's labeled by momentum k.
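As a small numerical illustration, the momentum-space propagator can be written as a one-line function. Note the sign convention is an assumption on my part, matching the mostly-plus signature the lecture's formula suggests; in Peskin's mostly-minus convention the same object reads i/(k^2 - m^2 + i epsilon):

```python
def feynman_propagator(k_sq, m_sq, eps=1e-6):
    """Momentum-space free Feynman propagator in the (-,+,+,+) signature
    the lecture appears to use: G_F^0(k) = -i / (k^2 + m^2 - i*eps)."""
    return -1j / (k_sq + m_sq - 1j * eps)

# Away from the mass shell the propagator is finite, of order 1/m^2 ...
assert abs(feynman_propagator(0.0, 1.0) + 1j) < 1e-5
# ... while near the mass shell k^2 = -m^2 it blows up as eps -> 0.
assert abs(feynman_propagator(-1.0, 1.0)) > 1e5
```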
A line labeled by momentum k. So this k denotes the direction of the flow of the momentum, this arrow. Good. So now, with this rule, you can easily write down-- now you can just translate, right. OK. So this is one point and another point. And the third point-- so let me just-- in order to go to momentum space, let's just mention several points. So yeah, anyway, here is one point and then here is another point. And also, now let's look at the vertex. So at each vertex, we have an integration. So suppose we have a vertex like this. So here is x. And this is contracted with y1, y2, y3, y4. So let's imagine we have a vertex like this, these four phi's contracted with four different y's. And then at such a vertex, we will have an expression like this: GF 0 x minus y1, GF 0 x minus y2, GF 0 x minus y3, GF 0 x minus y4. And now again-- so now when you plug in the expression-- so I will not write all the lines here. I think you will just see the basic idea. So this one, let's write each of them in terms of momentum space. So this, if you write it in terms of momentum space, then you get a d4 k1 exponential i k1 times x minus y1, then GF 0 k1-- and similarly for that and for that, for that. And now this has exponential i k1 x. This has exponential i k2 x. This has exponential i k3 x. And this has exponential i k4 x. And there's no other x dependence. And then you have a d4 x here. So what you get then is just a delta function. So you just get a delta function. So when you go to momentum space, then you find that this should always be proportional to a delta function. So that means momentum is conserved at each vertex. So if I draw-- so k1 here, k2, k3-- I use the same convention for all of them. It doesn't matter whether you draw it outgoing or ingoing. But we use the same convention for all of them.
And then they should be proportional to a delta function of the sum of the momenta. And here, it doesn't matter what the yi are: y1, y2, y3, y4 can either be external points or other internal points. Yes? AUDIENCE: With that Gn object, the n-point function in momentum space, can you get it by writing expectation values of the momentum-space field operators, like phi(k1), phi(k2)? Or is that-- HONG LIU: Yeah, you can. AUDIENCE: And how would you do the time ordering there? Because then-- HONG LIU: So the time ordering is strictly defined in terms of the coordinate-space operators. No, you cannot write it that way directly in momentum space. AUDIENCE: So you have to find-- HONG LIU: You really have to find the coordinate-space expression first and then do the Fourier transform. For the Wightman function, you can do it directly in momentum space, but not for the time-ordered function. OK, good. So now we can write down the momentum-space Feynman rules. We will not have time to do an example, so let me just write them down, and then we will look at an example next time. So here is the rule to compute this Gn(p1, ..., pn). First, for each external point, you associate a momentum, and you associate a factor of 1. You still draw a line, but now you associate a momentum with this line, because when you Fourier transform the line, you get a momentum. And each propagator is given by its momentum-space form, minus i divided by p squared plus m squared minus i epsilon. Then, for each vertex, you have a factor of minus i lambda, and you impose momentum conservation at each vertex. So now, remember, when we do a Fourier transform, each propagator, each line, will have a momentum integration, because each one of them is integrated over momentum.
So each one of them is associated with a momentum integration. And now, some of those momentum integration can be get rid of by imposing the delta function at the vertex. So you can get rid of some of them by momentum conservation at the vertex using the delta functions. But now you will have, say, a number of them left. So then you need to integrate over each undetermined momentum. So each undetermined momentum you need to integrate with this factor. And then the same thing, you need to divide it by symmetry factor. So symmetry factor is the same in coordinate or in momentum space. So this is the way which you can write down the momentum space expression for each diagram, momentum space expression for each step. Actually, maybe we still have some time. So we have a couple minutes. So we can just do this example. We can just do this example. So this example, let's see what it looks like in momentum space. So in momentum space, we just-- but I erase the diagram. So this diagram looks like this in coordinate space of x1, x2 and x and y. Now when we go to momentum space, the diagram is the same. Just now, we label the momentum. So we have an external line here, a terminal point here. So here, I have a momentum, p1. Here, I have a momentum, p2. So now let's just call, say, this k1, k2. You can assign momentum direction as you want. Say k3. So now we need to impose a momentum conservation. So momentum flow like that. So that means that the p1 should be equal to k1 plus k2 plus k3-- from the moment conservation at this vertex And from momentum conservation at this vertex, they all go away from this vertex. And then in this case, then the p2 will be minus k1 plus k2 plus k3. So that means, actually, p2 should be equal to minus p1. So this makes perfect sense because here preserves momentum, preserves momentum. So for momentum conservation, if you have p1 here, here, you should also have p1. So p2 I drew with a negative sign. And so we have negative p1. 
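The bookkeeping in these rules can be sketched in a few lines of sympy. The helper names below are my own, each momentum symbol stands in for a four-momentum with `k**2` abbreviating its Lorentz square, and the diagram assembled is the two-vertex correction to the two-point function (internal momenta k1, k2, and p minus k1 minus k2 fixed by momentum conservation), with the vertex factor minus i lambda as read off from the worked example:

```python
import sympy as sp

# Sketch of the momentum-space Feynman rules above. Scalar symbols stand
# in for four-momenta; "sq" meaning k^2 is abbreviated as k**2.
p, k1, k2, m, lam, eps = sp.symbols('p k1 k2 m lambda epsilon')

def propagator(k):
    # momentum-space Feynman propagator: -i / (k^2 + m^2 - i*eps)
    return -sp.I / (k**2 + m**2 - sp.I * eps)

vertex = -sp.I * lam               # factor for each vertex
symmetry_factor = sp.factorial(3)  # the 3! of this particular diagram

# momentum conservation at the vertices fixes k3 = p - k1 - k2
integrand = (vertex**2
             * propagator(k1) * propagator(k2) * propagator(p - k1 - k2)
             * propagator(p)**2        # the two external legs
             / symmetry_factor)

print(sp.simplify(integrand))
```

Integrating this over the undetermined momenta k1 and k2, with the appropriate 2 pi factors, would give the diagram's value.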
Yeah, so using this simplified notation, we can draw the diagram like this. Here is the external momentum p. Let's call the internal momenta k1 and k2; then k3 can be solved in terms of k1 and k2-- it's just p minus k1 minus k2. So now we can just write down the momentum-space expression. We have two vertices, so we have a factor of (minus i lambda) squared, which is minus lambda squared. Then we have two undetermined momenta, k1 and k2-- the external momentum p is fixed-- so we have the integrations over k1 and k2. And then we just write down each propagator: minus i over k1 squared plus m squared minus i epsilon, corresponding to this line, times minus i over k2 squared plus m squared minus i epsilon, times minus i over (p minus k1 minus k2) squared plus m squared minus i epsilon. These depend on the k's. And then we have the two external propagators; they carry the same momentum, so together they give minus i over p squared plus m squared minus i epsilon, squared. And then we divide by 3 factorial, corresponding to the symmetry factor. OK, that's it. So this is the expression for this diagram in momentum space. OK, so let's stop here.
MIT 8.323 Relativistic Quantum Field Theory I, Spring 2023. Lecture 10: Time-Ordered Correlation Functions in Field Theory. [SQUEAKING] [RUSTLING] [CLICKING] [SIDE CONVERSATIONS] PROFESSOR: OK, let us start. So, last time, we discussed how to calculate such a correlation function, say Gn, in a single-particle theory using the path integral. The goal is to calculate this vacuum correlation function of a time-ordered product in this theory, and we derived a beautiful formula for it. The formula is the following: Gn is given by the ratio of two path integrals over x(t)-- the numerator has the insertions of x times the exponential of i S[x(t)], and the denominator is just the pure path integral of the exponential of i S. Here, I didn't write the upper and lower limits. It should be understood that for both path integrals the time range is from minus infinity to plus infinity, and we can choose the boundary value of x at both ends to be zero. And the slight subtlety is that, when you evaluate S, there should be an epsilon parameter, because in deriving the path integral we needed to give the Hamiltonian a small epsilon deformation, and that will also affect your action. So your action will also have a small epsilon dependence. At the end of the day, after you have done the calculation, you send epsilon to 0. So epsilon goes to 0 at the end. And whenever we write epsilon, it should be assumed that this is an infinitesimal positive number. Good. Any questions on this? So, now, in principle, with this formula, we can calculate this quantity, this n-point function.
So, but, in practice, it's actually more convenient, rather than to calculate the Gn, because we often need to know the such correlation function for different n. We often need to know such correlation function for different n. So not only say sometimes we are interested in n equal to 2, sometimes interested in n equal to 3, 4, et cetera. And then, there is a nice trick to-- you can try to calculate, then say, in general, is to use this technique called generating functional, which we started talking about at the end of last lecture. And so, the basic idea of the generating functional can be easily understood by consider just this one-dimensional example. Say, if you are interested in doing an integral like this, xn. And, if you are interested in this integral for different value of integer n, then it's more convenient to consider such an integral, Za given by-- So the reason I put the i here is just, yeah, I didn't specify the range of x. And if x is from minus infinity to plus infinity, it's not easy to-- yeah. Yeah, put i here, just so that this integral can be defined. Depending on circumstances, you don't have to put the i. Say if x is from 0 to infinity, then I can just put, say, minus lambda a, with lambda to be a positive number. And then, that's fine. So the benefit of considering this integral is that-- oh, no, no, no, not the-- sorry, I should-- x, yeah, xa. Yeah. So the benefit of doing this integral is that if you notice, say, if you take a derivative with respect to a, and then, that will bring down a factor of x. So it's a-- so if you take a derivative with a, you bring down a factor of x, and you take the derivative twice with a, then you bring down a factor of x square. And if you do it n times, and then you bring down a factor of x to the power n. So, essentially, this Zn then can be written as partial n Za, then partial a, you derivative n times. 
And then you set a equal to 0 because, in the end, we want-- in this integral, there's no exponential piece. And you set that equal to 0, then you get rid of the exponential piece. Then you have factor of xn. So there's still-- yeah. So I still need to put i n here. So if you know how to compute the Za, and then you only need to take derivatives to do Zn. So taking derivatives is much easier than doing integrals. So, in other words, we can also write it, expand Za in terms of power series in a. And then, the Zn would be the coefficient. So Zn would be the coefficient for a to the power n. Yeah. Good? Yeah, so we call this Za the generating function. So now we can use the similar idea to generalize to this case. And, now, here, this is-- here it's just one-dimensional integral. Here we have a functional integral. So now this is the function. So, essentially, you just generalize this generating function into a generating functional. And so, we can consider the following object. So in order to compute this, then we can consider the analog of this a is we consider object called J. J now will depend on t. Yeah, J is a function. J is a function because x now becomes a function. And so, introduce a path integral like this. So, from now on, I will not-- when I don't write the range, then it's always-- you should always keep this in mind-- the range will always from minus infinity to plus infinity with the boundary value to be 0. And then, we can just say, I have standard xt. Then I can add the analog of this. So, remember, again, you think t is just an index. And then you just, essentially, imagine if you have multiple x. And then you just sum over them. So sum over them, in this case, just corresponding to an integral. And so, we just have a piece like this. So this would be the generalization of that equation. And so, integration over t just can be imagined as a sum. Imagine if you have multiple x and multiple a, and then you need to sum over them. 
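The one-dimensional generating-function trick can be verified directly with sympy. For convergence, this sketch uses a real Gaussian weight exp(-x^2/2) in place of the oscillatory exp(iS)-- an illustrative substitution, not the lecture's integrand:

```python
import sympy as sp

# One-dimensional illustration of the generating-function trick:
# Z(a) = int dx exp(-x^2/2 + a*x), and the n'th moment
# Z_n = int dx x^n exp(-x^2/2) is the n'th derivative of Z at a = 0.
x, a = sp.symbols('x a', real=True)

Z = sp.integrate(sp.exp(-x**2 / 2 + a * x), (x, -sp.oo, sp.oo))
# completing the square gives sqrt(2*pi) * exp(a^2/2)

def moment(n):
    # Z_n = d^n Z / d a^n evaluated at a = 0
    return sp.diff(Z, a, n).subs(a, 0)

direct = [sp.integrate(x**n * sp.exp(-x**2 / 2), (x, -sp.oo, sp.oo))
          for n in range(5)]
from_Z = [sp.simplify(moment(n)) for n in range(5)]

# odd moments vanish; even moments are (n-1)!! * sqrt(2*pi)
assert all(sp.simplify(d - f) == 0 for d, f in zip(direct, from_Z))
print(from_Z)
```

Taking derivatives of the closed-form Z really is much easier than redoing the integral for each n, which is the whole point of the trick.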
So this is just a generalization of that. Do you have any questions on this? So now, similarly, if we take a functional derivative with respect to J(t), we can bring down a factor of x(t). More explicitly, let me just remind you of the rule for functional derivatives: the derivative of J(t prime) with respect to J(t), delta J(t prime) over delta J(t), just gives you a delta function, delta of t minus t prime. So now, with this rule, let's look at delta over delta J(t) acting on Z[J]. Since the formula already uses t, let's, for convenience, call the integration variable t prime. Then, when you take the derivative with respect to J(t), you directly differentiate the source term, and the delta function gets rid of the integral, leaving just x(t). So you will get the integral DX(t), with a factor of i x(t) from differentiating the source term, times the exponential of i S plus i integral J x. You should keep in mind that this Dx is just a simplified notation. Good. Any questions on this? So, now, let me also introduce Z0, defined to be Z[J = 0]. So this is just the original integral, essentially the downstairs: Z0 is just the integral Dx of exponential i S. When J equals 0, we just have this. So let's also introduce this definition. Then, by comparing the two, we immediately conclude that the one-point function of x hat at t is just given by 1 over i-- because of that factor of i-- times 1 over Z0, times delta Z[J] over delta J(t), and then you set J equal to 0.
So, similarly, here, if you set a equal to 0, so you take derivative, and then you bring down a factor of xt. And then you set J equal to 0. And then you just have that integral with xt there. And then you get this one point function. So, now, you can just immediately generalize. So this n-point function, Gn then can be written as, you just take one-- now you have take n derivatives. Now you have a factor of 1 over i to the power n. Again, you divide it by Z0 because you always need to divide by this piece. And then you just take Z derivative n times. So you take Jt1, Jtn. So this time variable should match with the time variable in the original definition. And then, after you take the derivative, you set J equal to 0. So, and then, this gives you the n-point function. If you know how to compute this ZJ, and, again, you only need to do derivatives. And then it's much simpler. Then you only need to do the path integral once, and then you just doing the derivatives. Any questions on this? So, now, if you keep in mind of this, we can also rewrite this expression as the following. We can also rewrite-- alternatively, we can also write ZJ divided by Z0 as following 0 and the time ordered exponential i, dt, Jt, xt, 0. So, as I mentioned before, that in that formula, in principle, x can be anything. x can be anything. So, now, in this path integral, you can just imagine you separate this term-- the sum of the two exponential, you can just write it as a product. And then you have the exponential i S, and the exponential i, this piece. And then we treat that piece to be x. And then, that gives you this formula. If you treat that piece to be x, it give you this formula. Then you ask, what is the meaning when we have a time ordering of this exponential? So the meaning is that, imagine you expand this in power series. Imagine expand in power series of x. And then you just order each term in the power series. 
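The functional-derivative manipulations become ordinary partial derivatives once time is discretized, exactly in the spirit of "treat t as an index." A small sympy check with three time slices (the discretization and names are mine, for illustration):

```python
import sympy as sp

# Discretized version of the functional-derivative rules: J(t) -> J_i,
# so delta J(t')/delta J(t) = delta(t - t') becomes dJ_j/dJ_i = delta_ij,
# and differentiating exp(i * sum_k J_k x_k) brings down a factor i*x_i.
N = 3
J = sp.symbols('J0:3')
x = sp.symbols('x0:3')

source_term = sp.exp(sp.I * sum(Jk * xk for Jk, xk in zip(J, x)))

# each derivative with respect to J_i brings down i * x_i
for i in range(N):
    deriv = sp.diff(source_term, J[i])
    assert sp.simplify(deriv - sp.I * x[i] * source_term) == 0

# the Kronecker-delta rule for the sources themselves
for i in range(N):
    for j in range(N):
        assert sp.diff(J[j], J[i]) == (1 if i == j else 0)

print("discretized functional-derivative rules check out")
```

Because partial derivatives commute, the discretized n-point functions obtained this way are automatically symmetric in their arguments, mirroring the symmetry of Gn under the time ordering.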
Because now each term is a polynomial, you can time-order them. So when you expand this object, the first term is just 1. The next term is just the integral dt of J(t) times the one-point function, and so on up to the n'th term, et cetera. So when you expand it, you can write it as a sum from n equal to 0 to infinity of i to the power n over n factorial, times n integrals over t1 through tn, times Gn(t1, ..., tn), times J(t1) through J(tn). So that's what the typical n'th term looks like. When you expand to the n'th power, you group all the x's together, because the integrations and the J's are c-numbers-- you can take them out. And then this part is just the x's sandwiched between the vacua, and then you have this factor of the J's. Yes. AUDIENCE: Where did the n factorial come from? PROFESSOR: Oh, just when you expand the exponential, there's always an n factorial. AUDIENCE: Oh, right, sorry. PROFESSOR: Yeah. Good. Any questions on this? Yes. AUDIENCE: When you separate out the exponentials into the e to the i S and the i integral piece-- PROFESSOR: Yeah. AUDIENCE: Isn't S, in that case, a function of an operator? And the other one is also like an operator. So wouldn't that introduce commutators? PROFESSOR: No, no, no. Because, in the path integral, they're just ordinary functions. Right? AUDIENCE: Oh. PROFESSOR: In the path integral, they're always just ordinary functions. But then we rewrite them in terms of the operator form-- so on the left-hand side, the x's are just ordinary functions, but on the right-hand side, I'm now writing it in terms of the operator form.
And now, indeed, the ordering matters. So in this formula, the left-hand side is just ordinary functions, but the right-hand side involves operators sandwiched between the ground states. Yeah. Other questions? Yes. AUDIENCE: So in that formula in the middle, where Gn is expressed as the functional derivative of Z-- PROFESSOR: Yeah. AUDIENCE: Does it matter in what order I take the derivatives? PROFESSOR: No, that doesn't matter. Because, again, J is just an ordinary function-- this is just the path integral, just some functional of J, and you can take the derivatives in any order. Also, you notice, on the left-hand side, Gn as a function of t1 through tn is actually completely symmetric, because under the time ordering it doesn't matter how you order them-- they're just ordered by time anyway. So Gn is a symmetric function of t1 through tn. And you can see it here too: because all the derivatives commute, this is a symmetric function of t1 through tn. Yeah. Other questions? Good? OK. So, in the future, we will often just compute this object, and that will give us the generating functional of the correlation functions. Then we can just obtain correlation functions by taking derivatives. So now let's look at an explicit example to illustrate how this works. Let's just consider a simple example, the harmonic oscillator-- almost always, the harmonic oscillator is a good example. Good? So, in this case, again, we look at this object. But now S is-- we always now take the integral from minus infinity to plus infinity, dt, of 1/2 x dot squared minus 1/2 omega 0 squared x squared. So let's take this to be my Lagrangian, with the frequency called omega 0. So this is just the standard harmonic oscillator.
So I take m equal to 1. And I consider this source J(t). We are interested in computing this object. Also, I should mention that, in practice, you are always interested in x(t) at a finite set of times. Say, when you calculate n-point functions-- suppose you want to calculate Gn-- you have n values of t, and outside those n values of t you can just take J to be zero. So we can always take J to go to 0 at plus and minus infinity; that helps your integral to converge. This is just a side remark. Good? So, now, to compute this object, we need to first understand what this S epsilon is. So, remember, for the harmonic oscillator, the Hamiltonian is p squared divided by 2m plus 1/2 omega 0 squared x squared. So here is the Lagrangian, and this is the Hamiltonian; they are related by the Legendre transform. So now, following the prescription, we take H to H times (1 minus i epsilon). Then this becomes p squared divided by 2m times (1 minus i epsilon), plus 1/2 omega 0 squared x squared times (1 minus i epsilon). So you multiply both terms by 1 minus i epsilon. And then, to obtain the corresponding S, you just do the Legendre transform back to a Lagrangian. Then you find that the L epsilon corresponding to the Legendre transform of this-- a trivial exercise, which you can do-- is 1/2 x dot squared times (1 plus i epsilon), minus-- this part essentially does not change-- 1/2 omega 0 squared x squared times (1 minus i epsilon). That's what you get. Essentially, when you do the Legendre transform to go from p back to x dot, the kinetic term goes from 1 minus i epsilon to 1 plus i epsilon, because you do an inversion. You can easily check this yourself. And so, now let's write this in a more convenient form.
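The "trivial exercise" of Legendre-transforming back can be checked with sympy: deforming H by (1 minus i epsilon) indeed gives a kinetic term with (1 plus i epsilon) and a potential term with (1 minus i epsilon), to first order in epsilon:

```python
import sympy as sp

# Check of the Legendre-transform claim: with m = 1, deform
# H -> H * (1 - i*eps), invert xdot = dH/dp to eliminate p, and
# compare L = p*xdot - H with the expected epsilon-deformed Lagrangian.
xdot, x, p, w0, eps = sp.symbols('xdot x p omega0 epsilon')

H_eps = (p**2 / 2) * (1 - sp.I * eps) \
      + sp.Rational(1, 2) * w0**2 * x**2 * (1 - sp.I * eps)

# invert xdot = dH/dp to express p in terms of xdot
p_of_xdot = sp.solve(sp.Eq(xdot, sp.diff(H_eps, p)), p)[0]

L_eps = sp.expand(p_of_xdot * xdot - H_eps.subs(p, p_of_xdot))

expected = sp.Rational(1, 2) * xdot**2 * (1 + sp.I * eps) \
         - sp.Rational(1, 2) * w0**2 * x**2 * (1 - sp.I * eps)

# the two agree to first order in the infinitesimal eps
residual = sp.simplify(sp.series(L_eps - expected, eps, 0, 2).removeO())
print(residual)   # 0
```

The inversion of (1 minus i epsilon) in the kinetic term is exactly where the sign of the i epsilon flips, as stated in the lecture.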
So to write it in a more convenient form, remember, we always treat this as two x's sandwiched by some differential operator. So we can do an integration by parts to write it as minus 1/2 times x times [(1 plus i epsilon) partial t squared plus omega 0 squared (1 minus i epsilon)] acting on x, plus a total derivative. The total derivative always vanishes, because we always impose boundary conditions such that at t equal to plus or minus infinity everything goes to 0. So now let's look at the epsilon dependence of this object. Omega 0 squared is just a positive number; multiplied by epsilon, it's still a positive infinitesimal, so we can still call it i epsilon. Now, the partial t squared term multiplying i epsilon: partial t squared is a negative definite operator, because, remember, whenever you do a Fourier transform, a single factor of partial t gives you i omega, so partial t squared gives you minus omega squared. So that term is also a negative number times i epsilon. That means we can write the whole thing just as minus 1/2 times x times (partial t squared plus omega 0 squared minus i epsilon) x. Good? Is this clear? Yeah, because any positive number multiplying epsilon still gives you an infinitesimal; it doesn't matter. So now we can write this S epsilon of x(t) in the following form: minus 1/2 times the integral dt dt prime, as we wrote before, of x(t) K(t, t prime) x(t prime), where K(t, t prime) is just given by delta(t minus t prime) times (partial t prime squared plus omega 0 squared minus i epsilon). So, again, we just introduce an extra t prime, and then I introduce a delta function, and now we have a matrix form. Now, again, this action has a matrix structure.
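The integration by parts that turns the action into the matrix form minus 1/2 x K x can be checked on a discrete time grid. Here epsilon is set to zero, since the i epsilon piece is just an infinitesimal constant shift of K, and the grid and test profile are arbitrary choices of mine:

```python
import numpy as np

# On a discrete grid with x vanishing at the endpoints, the action
# S = int dt [ (1/2) xdot^2 - (1/2) w0^2 x^2 ] should equal
# -(1/2) x.K.x with K = d^2/dt^2 + w0^2 (Dirichlet boundaries).
w0 = 1.3
N, dt = 1000, 0.02
t = np.arange(N) * dt
# smooth profile vanishing (with its derivative) at both ends
x = np.sin(np.pi * t / t[-1]) ** 2 * np.sin(3.0 * t)

xdot = np.gradient(x, dt)
S_direct = np.sum(0.5 * xdot**2 - 0.5 * w0**2 * x**2) * dt

# discrete second-derivative operator with Dirichlet boundaries
D2 = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
      + np.diag(np.ones(N - 1), -1)) / dt**2
K = D2 + w0**2 * np.eye(N)
S_matrix = -0.5 * x @ K @ x * dt

print(S_direct, S_matrix)   # agree up to discretization error
```

The total-derivative term drops out precisely because the profile vanishes at the endpoints, mirroring the boundary conditions imposed in the path integral.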
So now, this S depend on epsilon. Now S depend on epsilon. Good? OK? Good? So now we can evaluate-- now we can now ready to evaluate this path integral. Now we are ready to evaluate the path integral. So let's first look at Z0. And let's first look at Z0. Z0 is just the Gaussian integral we already said before. So the Z0 is the Gaussian integral. So I will be schematic. Yeah, this x dot K dot x. So this is a shorthand notation to denote this two integral. I think it's positive i. Oh, yeah. Yeah, minus sign. I have a minus sign here. So it's minus i. Yeah. Good? So, and this, as we said before, this is just given by some constant and determinant K. This is just some constant determinant K. Yes? AUDIENCE: Sorry. [INAUDIBLE] PROFESSOR: OK. AUDIENCE: Why did you have to go to the Hamiltonian to put that 1 minus i epsilon? PROFESSOR: Right. It's because that's our previous rule. Because we say, in order to derive this, we use the cheek to take the H, go to-- yeah, H 1 minus epsilon. Yeah. So we want to know how this translate into the behavior in the action. Other questions? Good? So this is just the same as we discussed before just given by some constant and determinant K. As we said before, that the C is typically divergent. Determinant the K, so typically divergent. But we will see, it doesn't matter. So now we will see, it doesn't-- so, previously, we said, this will not matter. But now we will see it explicitly. So now let's look at the ZJ. So ZJ is the same integral, x dot K dot x. But, now, with this additional term. So let me just write, again, in the simplified notation, as J dot x. So you view the integration as a huge sum of vector-- yeah, vector product. So if this is a finite dimensional integral, you say, I know how to do this. We know how to do this because this is just the Gaussian with a linear piece. So we can just write down the answer. 
So let's just-- yeah, the rule is that you just treat it as a finite dimensional integral and write the answer for the finite dimensional integral. And then you translate the language in terms of this functional case. So we can write it-- so, again, this would be C divided by delta K. Yeah. So let me just remind you. Maybe just let me just do a little bit slower. So let me just remind you the standard story for such a Gaussian integral. So if you have dx1, dxn, exponential minus 1/2 xi, Aij, xj plus Ji, xi. So if you have an integral like this, we know how to compute this integral. We can just compute-- include the xi into here by completing the square. And then, what you get is the following. After you complete the square, you just get the original Gaussian integral. And so, what you get is you get the 2 pi D over 2, or your previous case, delta a. And then, the results you get from the complete integral is Ji A minus 1 Ij, Jj. So when you complete the square, you get the additional term. That's what you get. And this is coming from doing the Gaussian integral. So now we just have an infinite dimensional version of this integral. And we can just write down the result immediately. We can just write down the result immediately. So we just copy that thing. So we have C. So this C will be the exact the same as that C, because this just comes from doing a Gaussian integral as if J is not there. So we have the same C. We have the same delta k. Then, according to the rule there, up to the i, which is you have to put in, then we get 1/2 i. Then you have-- then we should have J k minus 1 J. So this is essentially that. You take the inverse of A. So here we get that. And this, if we translate back into this kind of function language, so this just gives you C delta K exponential i divided by 2. Then you have dt, dt prime. Then you have Jt, K minus 1, t, again, t prime. So K minus 1 should be understood as the inverse of this K and Jt prime. 
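The completing-the-square formula can be confirmed with sympy in the one-dimensional case, together with the statement that two derivatives with respect to J pull out the inverse A minus 1 (here just 1 over a):

```python
import sympy as sp

# One-dimensional version of the Gaussian integral with a source:
# int dx exp(-a x^2/2 + J x) = sqrt(2 pi / a) * exp(J^2 / (2a)),
# the n = 1 case of the matrix formula quoted in the lecture.
x, J = sp.symbols('x J', real=True)
a = sp.symbols('a', positive=True)

lhs = sp.integrate(sp.exp(-a * x**2 / 2 + J * x), (x, -sp.oo, sp.oo))
rhs = sp.sqrt(2 * sp.pi / a) * sp.exp(J**2 / (2 * a))
assert sp.simplify(lhs - rhs) == 0

# two derivatives with respect to J at J = 0 pull out the "propagator" 1/a
two_point = sp.diff(sp.exp(J**2 / (2 * a)), J, 2).subs(J, 0)
assert sp.simplify(two_point - 1 / a) == 0

print("completing the square verified; two-point function = 1/a")
```

In the functional case, a becomes the kernel K, 1/a becomes K inverse, and the same two-derivative manipulation produces the two-point function discussed below.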
So the K minus 1 is defined as follows. So the K is-- so you just, again, is the function generalization of the matrix case. So you just have t prime, K, t, t prime, and K minus 1, t prime, t double prime should be equal to delta t minus t double prime. So that's how you define the K, K minus 1. And this is like a matrix product. Just now you treat the t prime-- yeah, t prime, you sum over that, and then, yeah. So this is just like you have kmn, k minus n, and k equal to delta mk. You just translate the n into the integral. And the t is corresponding to m. And the t double prime corresponding to K. And the delta function corresponding to that. So that's how we define the K minus 1. And so, this is the result for ZJ. So, now, the physical object is this object is the DJ divided by Z0. Because you get the expectation value, we always need to divide by Z0. So now, if we take the ratio, so now I can erase this. So if we take the ratio, ZJ divided by Z0, we find all these factor canceled. So this factor cancels with that factor. So it doesn't matter. So we just get exponential. So, let me, again, using this shorthand notation, i over 2, J K minus 1 J. So this is the physical quantity. And when we expand this in powers of J, then we get the correlation-- then the coefficient of J give you correlation functions. Or we can just take derivatives. Yes. AUDIENCE: Yeah, so last time I thought you said the C and the determinant of K can be infinite. So is it OK to divide infinity by infinity and just say it's 1 in this case? PROFESSOR: No, it's not one, because they are actually-- yeah, as you do in your Pset, say, if you have a free particle, that ratio is actually-- you can calculate to be a finite number. AUDIENCE: Oh. PROFESSOR: Even though their ratio is actually, both are infinite. But the ratio is actually a finite number. Yeah. Yeah, same thing with the harmonic oscillator. Yeah. Yeah, but the key thing is that we actually don't need to worry about them. 
They just cancel. Yeah. Other questions? So now, this is our final result for the harmonic oscillator. And except we still have to invert this K. We still have to invert this K. But, in fact-- but, before we do that, first we can see what is this-- whether there's any physical interpretation for this K minus 1. So let's just consider the following situation. So let's consider a two-point function. So, first, from here, you can immediately see, the one-point function is given by what, the vacuum one-point function of x? So can you see what is the vacuum one-point function for x without doing calculation? AUDIENCE: 0? PROFESSOR: Yes, 0. So the reason it's 0, it says because if you get one-point function, you take one derivative with J. So when you take one derivative is J, because of here, it's J square. You will bring down a factor of J. Then, when you set the J equal to 0, and then that will be 0. So the one-point function automatically is 0. And that's consistent with our expectation. In the harmonic oscillator, the one-point function of x is always 0 because x involve a or a dagger. When you sandwiched between two zeros, it's just 0. But now, so notice, the non-vanishing one is the two-point function. So now let's consider the two-point function. So two-point function by definition should be the Feynman function because this is a time-ordered product. So the two-point function, by definition, is the Feynman function. It's the G2. And so, this is given by just this expand-- yeah, just 1 over Z0 i square. You take ZJ, two derivatives. delta Jt, delta Jt prime, and then you take J equal to 0. So the-- yeah. So the two-point function is a Feynman propagator [INAUDIBLE].. So, here, we can just see what we get from here. So when you take two derivatives on this, you take two derivative on J again. So the first derivative on J you bring down a factor of K times J. And your second derivative, we can do two things. You can act on the exponential again. 
And then you can act on the exponential again, or you can act on the factor of J which you brought down the first time. But the second derivative has to act on that factor of J, because otherwise you have a free J left, and when you set J equal to 0 it will vanish. So both derivatives should act on the J's that come together. So, here, you then get, essentially, minus i K minus 1 of t and t prime. Just take these two derivatives, and you get K minus 1. So now we learn something nice: this K minus 1 is actually the Feynman propagator-- the harmonic oscillator version of the Feynman function we discussed before. Previously, we defined it for the field theory; this is the harmonic oscillator version. So we find that K minus 1 of t, t prime is just equal to i GF of t, t prime. So this is just given by GF. And then we learn that ZJ divided by Z0 is just equal to the exponential of minus 1/2 J GF J. So now everything is determined by this GF. So now we have a consistency check, because, previously, we discussed that GF should satisfy a certain differential equation. And here we have a differential equation for K, so K minus 1 also satisfies a differential equation. So let me call this equation star, star, and that equation star. From equations star and star, star, we find that K minus 1 should satisfy the following equation: (partial t squared plus omega 0 squared minus i epsilon) acting on K minus 1 should equal delta of t minus t prime. So let me just make sure I get the sign correct. Yeah, I think this is right.
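The equation for K minus 1 can be checked explicitly, assuming the standard oscillator Feynman function GF(t, t prime) = exp(minus i omega 0 |t minus t prime|) / (2 omega 0), whose explicit form is not written out in this part of the lecture:

```python
import sympy as sp

# With K = d^2/dt^2 + w0^2 and K^{-1} = i*GF, the statement
# K K^{-1} = delta is equivalent to (d^2/dt^2 + w0^2) GF = -i delta(t):
# GF solves the homogeneous equation away from t = 0, and its first
# derivative jumps by -i across t = 0.
t = sp.symbols('t', real=True)
w0 = sp.symbols('omega0', positive=True)

GF_plus = sp.exp(-sp.I * w0 * t) / (2 * w0)    # branch for t > 0
GF_minus = sp.exp(sp.I * w0 * t) / (2 * w0)    # branch for t < 0

# homogeneous equation away from t = 0
assert sp.simplify(sp.diff(GF_plus, t, 2) + w0**2 * GF_plus) == 0
assert sp.simplify(sp.diff(GF_minus, t, 2) + w0**2 * GF_minus) == 0

# the derivative jump at t = 0 reproduces the -i*delta(t) source
jump = (sp.diff(GF_plus, t) - sp.diff(GF_minus, t)).subs(t, 0)
assert sp.simplify(jump + sp.I) == 0   # jump equals -i

print("(d_t^2 + w0^2) GF = -i delta(t), so K * (i GF) = delta")
```

Multiplying by i, the source on the right becomes exactly delta(t minus t prime), which is the defining property of K minus 1.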
And now, if you plug in this expression, you find that this equation is actually exactly our definition of the Feynman propagator before. Here we don't have spatial derivatives. But if you look at it, in particular, this i epsilon is precisely the i epsilon we previously needed to define the Feynman propagator. And now we find that the i epsilon prescription, which we previously used as a trick to define the Feynman propagator, is actually recovered by this procedure-- by that procedure of taking H to H times (1 minus i epsilon). Everything is consistent. You precisely recover that prescription. Any questions on this? Yes. AUDIENCE: So I guess in that expression right there. PROFESSOR: Yeah. AUDIENCE: If J is a function, do you just integrate it over t and t prime? Exponential of negative 1/2 J dot GF dot J? PROFESSOR: Oh, you mean this expression? AUDIENCE: Yeah. PROFESSOR: Yeah. Let me write it explicitly, since this is a very important equation. So this is just the integral over dt and dt prime of J(t) GF(t, t prime) J(t prime). Yeah. Good. So this is all consistent. So this i epsilon prescription, which we did here, automatically recovers the i epsilon prescription in the definition of the Feynman function we defined before. So it's very nice. And, in particular, in momentum space, if you go to momentum space, you find GF(omega) is just equal to i over omega squared minus omega 0 squared plus i epsilon. So you see this is actually the previous one. If you compare with our previous expression for the Feynman function, when you set k to 0 and replace m squared by omega 0 squared, then that's exactly the one we derived before. Good. So now we can try to find the n-point functions. So now we can work out all the n-point functions.
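The claim that K minus 1 is i times the Feynman propagator can be verified numerically in this harmonic-oscillator setting. The sketch below is my own discretization, not from the lecture: it builds the kernel K(t, t') = (d^2/dt^2 + omega0^2 (1 - i eps)) delta(t - t') as a finite-difference matrix on a time grid with x(t) -> 0 at t = +-T, inverts it, and compares against i exp(-i w |t - t'|)/(2 w) with w = omega0 sqrt(1 - i eps), i.e. the Feynman propagator carrying the same i-epsilon. The grid spacing, epsilon, and T are arbitrary numerical choices.

```python
import numpy as np

omega0, eps = 1.0, 0.1
dt, T = 0.1, 100.0
t = np.arange(-T, T + dt/2, dt)
N = len(t)

# K(t,t') = (d^2/dt^2 + omega0^2 (1 - i eps)) delta(t - t'); with the
# discrete delta = identity/dt, the continuum K^{-1} is M^{-1}/dt where
diag = np.full(N, -2.0/dt**2 + omega0**2*(1 - 1j*eps))
off = np.full(N - 1, 1.0/dt**2)
M = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)  # x -> 0 at t = +-T
Kinv = np.linalg.inv(M)/dt

# analytic i G_F carrying the same i-epsilon:
w = omega0*np.sqrt(1 - 1j*eps)           # approx omega0 (1 - i eps/2)
i0 = N//2                                # the grid point t = 0
rel_err = 0.0
for tau in (0.0, 1.0, 3.0):
    j = i0 + int(round(tau/dt))
    exact = 1j*np.exp(-1j*w*abs(t[j] - t[i0]))/(2*w)
    rel_err = max(rel_err, abs(Kinv[i0, j] - exact)/abs(exact))
```

The small imaginary shift is exactly what makes the matrix invertible and what selects the decaying (Feynman) solution rather than boundary reflections.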
So now you can immediately conclude, from the way we did the one-point function, that for general odd n, Gn is always 0. So the reason is the following. When we carry out this procedure of taking derivatives: because the J is always paired in this exponential, each time you take one derivative, you bring down another J. And because, in the end, we set J equal to 0, you have to get rid of all these J's which you bring down from the exponential. And that means n has to be even. So if n is odd, then there's always one J left, and when you set J equal to 0, it will be 0. So this is also consistent with your experience from the harmonic oscillator, because, in the harmonic oscillator, if you have an odd number of x's, then you have an odd number of a's and a daggers. And if you have an odd number of a's and a daggers together, there's no way, when they're sandwiched between the vacua, that they can all annihilate each other, so you will always get 0. And so, here, we get it. So now, for the even n, there's also a simple answer. If I have an n-point function-- so let me just write down Gn, say equal to the vacuum expectation value of the time-ordered product of x hat t1 through x hat tn. So for even n, you see that, again, in order for this not to be 0, all the x's have to be paired. So, in this case, the answer is just a sum over all possible contractions between the x(ti)'s. So by contraction we mean: if I have x(ti) and x(tj), we say there's a contraction between them, and that's just defined to be GF of ti, tj. So you just pair all of them. So each pair is a contraction. You just sum over all possible contractions-- pairings of them-- and each contraction gives you a Feynman function. And so, in the early days of quantum field theory, when you didn't have the path integral, showing this was actually non-trivial.
Because imagine you do this time-ordered product. There are many, many pieces, because if you have an n-point function, then you have to write down all possible orderings between them. But, in the end, it's a very simple result: you just sum over all possible contractions, and each pair is time-ordered. So in the early days, without the path integral, this was actually a highly non-trivial result. And it was first proved by Wick, so this is called the Wick theorem. But now, we see that, if you know the path integral, it's a trivial consequence of the fact that the exponent of the generating functional is actually quadratic in J. So let me give you an example. Let's look at four-point functions. So if you look at four-point functions, I don't even have to write that thing down. Let's just draw four dots for the four points. And then I just sum over all pairings between them, and each pairing will give me a product of Feynman propagators. So I can pair 1 and 2, and 3 and 4. I can also pair 1 and 3, and 2 and 4. And I can also pair 1 with 4, and 2 with 3. And so, if I write it in terms of the expressions, then I have GF(t1, t2) times GF(t3, t4), plus GF(t1, t3) times GF(t2, t4), plus GF(t1, t4) times GF(t2, t3). So you just sum over all such pairings, and each pairing gives you a product of GF's. Yes. AUDIENCE: Sorry. I thought because of the time ordering, you can't choose your pairing. There's only one way to pair. PROFESSOR: What do you mean? AUDIENCE: Why is it OK to do different pairings even though things are time-ordered? PROFESSOR: Oh, what do you mean, you cannot do the pairing? AUDIENCE: The x's are time-ordered. PROFESSOR: Yeah. AUDIENCE: So wouldn't you have to pair them in that order? PROFESSOR: No. So that's the key. This is the consequence of the theorem.
If you want to just write down the orderings, then it's actually rather complicated. But, somehow, the magic is that once you write everything explicitly and work it all out, in the end you can group everything just in terms of products of the Feynman functions. Yes. AUDIENCE: So for the harmonic oscillator, we know that at a given point in time, the position is Gaussian. And so, that would mean that the n-point function for all the t's equal to each other should be non-zero only for n equal to 1 or 2 but not for n greater, right? Because only the first two moments of a Gaussian are non-zero. That seems inconsistent with the prescription over here. PROFESSOR: Sorry. Why are you saying that? AUDIENCE: At a given time, the-- PROFESSOR: No, but all these times are different. AUDIENCE: Right, but if I were to take the times to be equal. PROFESSOR: OK, yeah. AUDIENCE: Then, for example, the variance of the particle position would be your two-point function evaluated at t and-- PROFESSOR: Yeah. AUDIENCE: [INAUDIBLE] PROFESSOR: Yeah, this is just the x to the power n. AUDIENCE: Right. PROFESSOR: Yeah, the x to the power n is non-zero for even power. AUDIENCE: Yes, but then, so then, in this formula, that'd be GF of t comma t. PROFESSOR: Yeah. AUDIENCE: Which is non-zero. PROFESSOR: Yeah. AUDIENCE: But-- PROFESSOR: Yeah, but the GF is non-zero. AUDIENCE: Right, but then, what I'm saying is, the four-point function then would also give you something non-zero. PROFESSOR: Yeah, it is non-zero. It's all consistent. AUDIENCE: But the fourth moment of the Gaussian is 0. PROFESSOR: No. The fourth moment of a Gaussian is certainly non-zero. For a Gaussian you have x to the fourth-- that's certainly non-zero. AUDIENCE: But the mean is 0, so all cumulants higher than 2 are zero for a Gaussian? PROFESSOR: No, no, no. For a Gaussian, it's that all higher moments can be expressed in terms of sigma-- in terms of the two-point function.
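The point being made in this exchange can be checked directly: for a zero-mean Gaussian, odd moments vanish, the fourth moment is non-zero and equals the Wick sum of the three pairings, 3 sigma^4, and what vanishes is the fourth cumulant. A quick Monte Carlo sketch (the value of sigma is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 1.3
x = rng.normal(0.0, sigma, size=4_000_000)

m2 = np.mean(x**2)        # two-point function, ~ sigma^2 -- the "G_F(t, t)"
m3 = np.mean(x**3)        # odd moment: vanishes, no complete pairing exists
m4 = np.mean(x**4)        # Wick: 3 pairings of 4 points, each giving m2

# the fourth *moment* is 3 sigma^4, which is non-zero; what vanishes for
# a Gaussian is the fourth *cumulant*, m4 - 3 m2**2
```

So the equal-time four-point function is non-zero, exactly as the professor says, while the Gaussian property shows up as the vanishing of the connected (cumulant) part.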
Yeah, here it's just exactly what you wrote there. Everything can be expressed in terms of the two-point moment. So if you set all the t's to be equal, then each factor is the same GF, and, essentially, you just add them together-- you get three GF squared. Just do this exercise yourself, and you will see it's the same. Good. So now, with this preparation in quantum mechanics, we can immediately move to field theory. So now we can compute time-ordered functions in field theory. So, before we do that, any other questions? Good. So, again, to go to field theory, the only thing we need to do is just copy notations. You just need to change the notations-- replace the appropriate dynamical variables in quantum mechanics by the appropriate dynamical variables in field theory. And then that's it. So let's write it down. So now, in field theory, suppose we consider this n-point function. And now, x1 through xn denote spacetime points. So let me call the vacuum for the field theory omega-- the vacuum of the interacting theory. So now consider this quantity, phi x1 through phi xn-- let's look at this n-point function between the vacuum states. So now, again, if we call this thing X, we can immediately write down the answer in field theory: just D phi-- you replace the Dx by D phi-- then you, again, have this capital X, and then you have exponential i S of phi, divided by D phi with exponential i S of phi but without the X. And the boundary condition is that, when you do this integral, with t going from minus infinity to infinity, phi(t, x) should go to 0 for t going to plus or minus infinity. So that's the analog of the previous condition, simply x equal to 0. And remember, the x here is just a label.
And so, for each field variable we require that it goes to 0 at t equal to plus or minus infinity. And also, normally, in field theory, in order for the integral to have well-defined behavior, et cetera, we often just assume phi also goes to 0 at spatial infinity. This is often just for convenience. And physically, this also means that infinitely far away the field is not excited-- we are interested only in physical excitations in a finite region. Anyway, so this is the condition we impose when doing the path integral in the field theory case. So now, again, you can introduce a generating functional-- again, just copy the previous formula, changing notation. So now the generating functional ZJ is defined to be D phi, exponential i S of phi, and then, again, we introduce a J-- but now the source term J(x) phi(x) is integrated over all spacetime points. And then, similarly, ZJ divided by Z0-- the Z0, again, is just the integral without any J-- is equal to the omega expectation value of the time-ordered exponential of i J phi. So, again, just given by that. And, again, this time-ordered exponential should be understood as follows: you expand it in a power series, and when you expand it in a power series, you have powers of phi; then you just order those phis in terms of time ordering. And then, again, the integration over J can be pulled outside this expectation value. Good? And the Z0 is just the same as ZJ at J equal to 0, so without any J. So this immediately gives you a general prescription for calculating n-point functions in any scalar theory. So, here, I didn't even have to specify the precise form of the action-- it just carries through. This also applies to interacting theory. So this formula also applies to interacting theory. It's very general.
So this is the power, say, of this path integral formalism. Once you understand the quantum mechanics case, going to quantum field theory is automatic. Good. So now let's look at how to calculate this thing in field theory. First, before we look at the interacting case, let's just look at the free field theory. So the free field theory is almost identical to the harmonic oscillator case, because the free theory will also be a quadratic, Gaussian integral. So everything will be very similar to the harmonic oscillator case-- we just need to, again, replace some notations. So now, let's consider the free field case: L0 equal to minus 1/2 partial mu phi partial mu phi, minus 1/2 m squared phi squared, without any cubic or higher power terms. So, here, 0 means the free theory, because later we will do the interacting theory. So S0, in this case, again-- you can integrate by parts-- can be written as d4x, d4x prime, phi x, K(x, x prime), phi x prime. And now the K is given by (partial squared plus m squared minus i epsilon) times delta 4 of x minus x prime. Again, this i epsilon comes from that same prescription-- if you work it through, you just get the minus i epsilon. Good. So, again, in shorthand notation, this is S0 equals minus 1/2 phi dot K dot phi. The only difference is that the integration over dt becomes an integration over the full spacetime. Everything else is the same. And now, ZJ is given by D phi exponential of-- again, just a Gaussian integral-- so let me see whether I can squeeze in a Gaussian integral-- minus i over 2 phi dot K dot phi, plus i J dot phi. So we can, again, write this in the simplified notation with i J dot phi.
So, again, you just do the Gaussian integral. You get some constant, again involving the determinant of K, and then you get exponential of i over 2 J K minus 1 J. So everything is exactly the same. And the determinant part is the same as in Z0, so, again, they cancel. So, again, we find that, in this case, K minus 1 of x, x prime is equal to i GF of x, x prime. So this i epsilon prescription-- m squared goes to m squared minus i epsilon-- is, if you check the definition, also precisely what we did before for the Feynman function. So now, the final answer: ZJ divided by Z0-- I almost don't want to copy it; it's just exactly the same as this, you just replace the dt integrals by d4x integrals. So, let me just write it: exponential of minus 1/2 J GF J. Any questions on this? And, again, if you calculate n-point functions, you get the identical structure as here. The only difference is that you replace GF(t1, t2), et cetera, by GF(x1, x2), et cetera. Everything is just identical. So, to save time, I will not copy them again. So, do you have-- yeah. AUDIENCE: So in this particular theory, do we have the condition that phi goes to 0 when t goes to infinity? PROFESSOR: Sorry? AUDIENCE: In this particular theory, do we have the condition that phi goes to 0? PROFESSOR: Yeah, so that ensures that, when you do the integration by parts, the boundary terms are 0. AUDIENCE: Do we have the solution for phi? PROFESSOR: Sorry? AUDIENCE: We know the solution for phi in terms of x and t, right? And it's like a plane wave? PROFESSOR: No, this is the boundary condition in your path integral. We're not talking about solutions here-- here, I didn't write down any explicit solution for phi. Sorry-- what plane wave are we talking about?
AUDIENCE: Oh, I was thinking about the solution for the phi field theory that we did a few lectures ago. PROFESSOR: Right. AUDIENCE: So that [INAUDIBLE]? PROFESSOR: Yeah, no. That will also go to 0. But no-- that's an operator equation, with a and a dagger there. Here, it's just the field in your path integral-- here, you just integrate over all possible configurations. So here phi is just an ordinary function of spacetime, on which we impose the boundary condition in the path integral. Yeah. And, similarly, here, there's the Wick theorem-- everything just goes through. Good. Any other questions? So, for the last few minutes, we can venture a little bit into the interacting case. Was there any question? So now we can venture into interactions. So now we have our master formula, and we can treat what happens in the interacting case. So now we can go to interacting theory. So in the interacting theory, let's just consider, say, the case in which L is equal to L0 plus some polynomial. For example, the simplest case is just the lambda phi 4 we discussed before, plus maybe some higher power terms. But what we will do will not depend on the detailed form, so let me just write LI-- imagine you have some interacting terms. In this particular case, the LI is equal to that, but we can consider the more general case-- just some extra terms depending on phi. And, similarly, your Hamiltonian will also be the free theory Hamiltonian plus an interacting one. And for the interacting one-- let's, for simplicity, say this LI, as in this case, only depends on phi, and does not depend on, say, the time derivative of phi. If it does not depend on the time derivative of phi, then HI is essentially just minus the integral d3x of LI.
When you do the Legendre transform to go from, say, L to H, don't change this term if it does not contain time derivative. So, yeah. So, and then, there's a very simple relation between this interacting term in the Lagrangian and also in the Hamiltonian. And this is a free theory Hamiltonian. And now we will write our total action in terms of the free theory part and the interacting part. And the interacting part is just given by d4 x LI. It's also the same as minus dt HI. Good? So just to set it up. And now we want to calculate. Again, we want to calculate this n-point function. Which did I erase it? Again, we want to calculate this n-point function. So this is the object we are interested in. And, again, we can just consider generating functional. And we can consider generating functional. Yeah. Before, actually, we do that, let's just consider-- yeah, let's just consider this n-point function. Yeah, we actually have one minute. We cannot do really much. Let me just tell you the basic idea. And then we will elaborate next time. So, now, let's imagine we want to compute this object. Again, we just use that formula, this Gn is equal to D phi X exponential i S divided by D phi exponential i S. So now, what I will do is, I will-- now, my S-- so the idea is the following. Now, this part, integrals are not doable. Because once you have these non-polynomial terms, or non-quadratic terms, we don't know how to do the integral. We don't know how to do this integral even for one-dimensional integral. So not to mention the path integral. We also don't know how to do such integral for a harmonic oscillator. But, yeah, same thing. We don't know how to do it for field theory. So, as we said before, even though we don't know how to do this integral, we can treat this perturbatively. So we treat this as a main term. And then we treat the lambda small, then we try to expand the power series of lambda. And now we can write the path integral as the following. 
D phi, X, exponential of i S0 plus i SI, and then divided by D phi, exponential of i S0 plus i SI. And, now, what we are going to do is just expand the exponential of i SI in a power series. When we expand this in a power series, then, essentially, we are reducing it to path integrals of the free theory. So, essentially, upstairs and downstairs can be imagined as free-theory expectation values-- we can just view exponential i S0 as the free-theory weight. So the upstairs just becomes the free-theory vacuum expectation value of the time-ordered product of X with exponential i SI. And the downstairs becomes the free-theory vacuum expectation value of the time-ordered exponential of i SI. And now we can evaluate, in the free theory, such kinds of correlation functions. And we evaluate them by expanding the SI in a power series. So everything becomes just doing some Taylor series expansion inside the integral, and then it becomes very simple. You don't need any fancy stuff-- first-year calculus can do it. So when you expand that stuff, you still get something very complicated, and then you can use diagrammatic rules to simplify it. And those are called Feynman diagrams. So, next time, we will talk about Feynman diagrams to simplify such expansions in power series.
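The perturbative strategy just described can be illustrated in a zero-dimensional toy model, where the "path integral" collapses to a single ordinary integral (my own Euclidean stand-in, not from the lecture): expand exp(-lam x^4) under the Gaussian and integrate term by term using the Wick counting of pairings, which gives the Gaussian moments <x^(4n)> = (4n - 1)!! sqrt(2 pi).

```python
import math
import numpy as np

lam = 0.01
x = np.linspace(-10.0, 10.0, 20001)
dx = x[1] - x[0]

# "exact" answer: a zero-dimensional stand-in for Z with a phi^4 interaction
Z_exact = np.sum(np.exp(-x**2/2 - lam*x**4))*dx

def dfact(n):                     # double factorial n!!
    return 1 if n <= 0 else n*dfact(n - 2)

# perturbation theory: expand exp(-lam x^4) under the Gaussian integral,
# using the Gaussian moments <x^(4n)> = (4n - 1)!! sqrt(2 pi)  (Wick counting)
Z_pert = math.sqrt(2*math.pi)*sum(
    (-lam)**n/math.factorial(n)*dfact(4*n - 1) for n in range(4))

rel_err = abs(Z_pert - Z_exact)/Z_exact
```

The rapidly growing double factorials are the zero-dimensional shadow of the growing number of Feynman diagrams at each order; the series is asymptotic, so truncating at low order works only for small lambda.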
MIT_8323_Relativistic_Quantum_Field_Theory_I_Spring_2023 | Lecture_5_Complex_Scalar_Field_Theory_and_AntiParticle.txt | [SQUEAKING] [RUSTLING] [CLICKING] PROFESSOR: So last time, we more or less finished discussing quantization of the free scalar field theory. And then we showed that the excitations of that theory essentially give rise to relativistic particles. And so we also discussed various other things-- so structure of the Hilbert space, conserved quantities, et cetera. So at the end, we talked about how to understand the conserved charge at the quantum level. So classically-- so we have a, say, symmetric transformation-- means that if you take phi a to some phi a prime-- phi a prime, normally, we write it in the infinitesimal way-- introduce some parameter a-- some parameter epsilon. And I can also have multiple symmetries. So for each transformation-- say I have epsilon. So alpha labels different transformations, and then-- so this is the transformation, which can depend on phi a. Also depends on its derivatives, et cetera. And then if you have such a symmetric transformation we discussed before, then that generates a conserved current, and that is labeled by alpha. Alpha just denotes different conserved current. And then yeah, so it is conserved. So this is the classical statement. Oh, yeah, also Noether's theorem tells us how to write down such a current explicitly. So we can just write it as-- the current can be written explicitly. So k mu is the transformation of the Lagrangian density. Suppose under this transformation the Lagrangian density transform as, say, alpha partial mu, k mu alpha. So alpha, it doesn't matter. Alpha just label different symmetries-- the index. It doesn't matter whether it's up or down. And then the zeroth component of the charge-- zeroth component of the current when you integrate over all space then give rise to a conserved charge. So this is conserved charge times the-- yeah. 
So now, let's discuss what happens when you go to the quantum level. So when you quantize the theory, this just becomes a quantum operator-- the whole current becomes a quantum operator, and then Q is also a quantum operator. So, at the quantum level, because it's conserved, Q alpha becomes a time-independent operator. We saw this explicitly in the case of, say, the Hamiltonian and the momentum-- we showed explicitly that they actually don't depend on time. But at the quantum mechanical level, this Q can play an even more important role: in fact, it can be viewed as a generator of symmetries. So we will show the following statement: if you consider the commutator of Q alpha epsilon alpha with phi a-- so let's continue over to here; maybe just let me write it here-- that generates minus i epsilon alpha f a alpha. So the commutator of this Q alpha epsilon alpha with phi a essentially generates this infinitesimal transformation. So that's why we also call the Q alpha generators of symmetries. So this is the statement I'm going to show very soon. So before I do that, do you have any questions? Yes? AUDIENCE: In the conserved current equation, how can f have an alpha in the upper index when J has an alpha in the lower index-- is that meaningful? PROFESSOR: Yeah. Don't worry about the location of the alpha index. Alpha just labels the different currents-- here, we don't worry about its position. I just put it up or down for convenience. Other questions? Yes? AUDIENCE: Can you [INAUDIBLE] is that a sum over alpha? PROFESSOR: Sorry? AUDIENCE: In the proposed delta L equals epsilon alpha partial mu k mu alpha, is that a sum over all alpha? PROFESSOR: Yeah, that's a sum over all alphas. So any repeated indices are summed.
So if you only have one-- so for example, if you have one epsilon-- of course, there is no sum, but if you have two different charges, and then this is sum. Other questions? So we are going to show this statement. And so this statement is very easy to show in the general case. So let me call this equation star. So let's first consider the case-- first, let's consider the sub case that the-- let's first suppose f a alpha-- just this transformation of phi a does not contain the time derivative of phi a. And also the zeroth component of the k is equal to 0. Let's consider a first spec-- a more limited case. So this more limited case actually covers many, many examples. It actually covers a majority of examples. For example, if you look at the translation symmetry, which gives rise to the stress tensor-- so this will-- so this applies to the case I mu-- T mu i. So this corresponding to the spatial-- so this is the conserved current corresponding to the spatial. So this is the current for spatial translations. When we do spatial translations, then of course, this f given by the spatial derivatives, and then it does not involve time derivative. And in that case, the k also does not involve in-- k 0 component's 0. Yeah, just I try to remind myself of what you worked in your pset. And so for this example, i is the counterpart-- is the alpha here. So for each direction, you have a translation, and so this-- so the alpha-- of course, i is the alpha here and the mu is the J mu here. And also this applies to the case which-- for all internal symmetries. By internal symmetries, we mean the transformations don't involve space-time coordinates. For example, you did in your pset the complex scalar field which you can just rotate by phase. And that's called the internal transformation because that transformation does not involve spacetime coordinates. In contrast, when we find the stress tensor, we will do a spacetime translation, and that does involve spacetime. 
So this corresponding to the spacetime transformations-- spacetime symmetries. And this is-- the phase is the internal symmetries. So for all internal symmetries, you don't-- the transformation will not contain the time derivative. So in this case, then it's easy because in this case, then we just have the one-- then the zeroth component of J mu. J just given by-- the first term by definition is just the canonical momentum conjugate to the a. So the first factor, partial L partial partial mu phi a-- when you set mu equal to 0-- so that's just the derivative with respect to the time derivative of phi a, and that, by definition, is just the canonical momentum density conjugate to phi a. And then you have f alpha a-- f a alpha. Good? So let me just make one comment on this expression. So this expression, of course, classically, you can write whatever way you want. But quantum mechanically, there's a very important subtlety because this typically will involve phi, and then this is momentum, and they don't necessarily commute. So this is the field theory version of so-called operator ordering ambiguity, which is already in quantum mechanics. In quantum mechanics, when you go from classical-- even it's already in the non-relativistic quantum mechanics. When you go to classical mechanics to quantum mechanics, and then there's an issue how you order operators when you have x and p at the same time. So here, you also have potential operator ordering ambiguity. You have to pay attention. Often, such kind of ambiguities can be resolved by physical considerations. So we will see some examples in a little bit. So now, let's just imagine there's some ordering we have chosen, and now let's look at this commutator. So we can just write down the definition. So this is just integrating over all spatial direction epsilon alpha, and then we have pi a f a alpha phi a. 
Yeah, let me just-- sorry-- here, let me call phi b, because the phi a-- now, they are differ-- so these two indices are summed. So let me call them b, and then you have phi a here. So now, you can use commutation relation between the momentum density and the phi phi. So we discussed in general that the-- so we discussed the scalar case. But in general, you have this commutation relation of-- which is delta 3-- say if this is the x, and this is x prime, and then this is x minus x prime, and delta ab. So each one is only have a canonical conjugate momentum with its own momentum. If they have different fields, then of course yeah. So you have that. And now, if you use this-- so here is x. Yeah, let me call this prime and this to be x. Sorry, I should label more explicitly x here, and then here would be x. And so here is x, and here will be x prime. Sorry, I didn't leave myself enough space. Let me just rewrite it. So this gives me d3 x prime epsilon alpha. Then, I will have this pi a, then x prime f alpha a x prime, and then this commutator with-- yeah, not covariant. Sorry, alpha is upstairs. b and then phi x. And yeah, so we can evaluate this, say, for example, all at t equal to 0. So I suppress the t. And then yeah-- and now, you can just trivially use this commutation relation, and then this just gives you minus i epsilon alpha f alpha a fa alpha x-- evaluate at x. So that just confirms that expression. So this is not satisfied when you have time translation. So for time translation-- so first, any questions on this in this simple case? The idea is very simple. Essentially just the first term proportional to the momentum, and the momentum-- a conjugate to this, and then you just take that. Yes? AUDIENCE: Did we not get a term with the f alpha a and phi because of our restriction? PROFESSOR: That's right. Yeah, so if there's no partial zero-- if there's no time derivative here, it means there's no momentum here. 
If there's no momentum here, then this will commute with this one. Yeah, no matter if it's in here, it always commute. Yeah, a very good question. Yeah, I forgot to mention this point. Other the questions? OK, good. So now, the special case-- we said if you have time translation-- when you have time translation-- for example, one example is the-- yeah, the Hamiltonian density, H is equal to pi a partial T phi a minus L. So the L here-- for the time translation L is the analog of the k, and the partial T phi A is the F here, and then here is the pi a. So here you also have time derivative. And so this by itself is also a pi. And then L may also contain time derivative. So L is the k0 here. So in this case, the k0 is non-zero, which is equal to L. And so in this case, the conserved charge is just H. So in this case, you-- our general argument here don't apply because now, you have pi here also, and you may have a time derivative here also so the story is more complicated. But this case, we know trivially it's true because it's by definition H acting on any field. It's minus i partial t phi a. And this is exactly the time translation-- transformation of fields. So in this special case, this equation is also satisfied. Got any questions on this? So this is just follow from the Heisenberg equation itself. And now, you should just understand this equation. S-- so H now can be considered as a symmetry generator for time translation symmetry, and when we act on the field and generate the time translation. Time translation just infinitesimally just a time derivative on phi. Good? So these are infinitesimal transformations, but now starting from here, you can actually obtain finite transformations on the field also using Q. So you can generate finite transformations. So let's consider, say, U lambda alpha-- so lambda alpha are just some parameters. Let's consider this quantity. Now I exponentiate this Q. 
So Q is the operator, and lambda is just some numbers-- lambda is some constant parameters. And again, the alpha is summed. Yes? AUDIENCE: So I understand how this follows from the Heisenberg equation of motion, but here, how would you evaluate this commutator not knowing the form of L mu because you said it can depend on time derivatives as well. PROFESSOR: Yeah, here, just this general argument don't apply anymore. So in this case, you really have to-- you can check this explicitly-- you can just write down the explicit value-- explicit form of h, and then work it out explicitly. But you don't have to do it because we know this has to be true just by self-consistency because we solved the Heisenberg equa-- yeah, because when we quantize it, essentially, we are solving the Heisenberg equation. Yeah, but you can check it explicitly yourself that this is true. Good? So here-- so lambda alpha here just some collection of finite transformation parameters. So this can be used to generate the finite transformations. So if you act-- if u lambda alpha phi-- phi a and u lambda alpha dagger, and then you just get phi a prime. And now, this is a finite transformation-- a finite symmetry transformation of phi a. And when you expand-- with lambda, alpha is infinitesimal, and then you can just expand this exponential, then the leading term becomes this one-- the leading term becomes this one. Yeah, so when lambda alpha equal to epsilon alpha goes to 0, and this equation star, and then star, star reduces to star as the leading nontrivial order when you expand the exponential. So this can be done in general just-- so the reason this is true-- without even doing any calculation, you can just imagine you can build this U-- this finite transformation by infinite number of infinitesimal transformations. And the infinite-- so when you build them up, and then you will just generate the finite transformation. 
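The exponentiation being introduced can be written out; the sign in the exponent is a convention, chosen here so that expanding to first order in λ reproduces the starred infinitesimal relation above:

```latex
U(\lambda) = e^{\,i\,\lambda^\alpha Q_\alpha},
\qquad
U(\lambda)\,\phi_a(x)\,U^\dagger(\lambda)
  = \phi_a + i\,\lambda^\alpha\,[\,Q_\alpha,\ \phi_a\,] + \mathcal{O}(\lambda^2)
  = \phi_a + \lambda^\alpha f_\alpha^{\;a} + \mathcal{O}(\lambda^2)
```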
So you already worked out the example in your pset that if you consider e to the i H t minus i P i x i acting on phi 0, times e to the minus i H t plus i P i x i, then this just gives you phi t x. And so here, H t minus P i x i is our lambda alpha Q alpha here. So alpha here runs over the 0 and the i directions. So in the 0 direction, the t now is just some parameter, and the lambda alpha are equal to t and minus x i. And the Q alpha here are just H and P i. So this is just an example of that. And also you can consider the Lorentz transformation. So another example is a Lorentz transformation. So in the Lorentz transformation, the conserved charges are-- actually, there are six of them, M mu nu, with mu, nu going from 0 to 3. So there are all together six conserved charges associated with Lorentz transformations-- three rotations and three boosts. And then when you write the finite transformation, you will have six parameters, omega mu nu-- and because M mu nu is antisymmetric, which you worked out in your pset, this parameter should also be antisymmetric. And so these are finite angles and boosts. So essentially, they give the parameters for the finite rotation angles and the boosts. And so that generates a Lorentz transformation. And so for example, if you act u lambda on phi x, and u lambda dagger, what you get is phi prime x. And phi prime x is the same as phi of lambda inverse x. So here, lambda inverse x is the Lorentz transformation of x mu with parameters determined by omega mu nu.
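The pset example quoted here, in symbols, with λ^α = (t, −x^i) and Q_α = (H, P_i):

```latex
e^{\,i\,(Ht - P_i x^i)}\;\phi(0)\;e^{-\,i\,(Ht - P_i x^i)} = \phi(t, \vec{x})
```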
So this capital lambda is just the corresponding Lorentz transformation. Is this clear? Yes? AUDIENCE: So you assumed that you can represent, I guess, the operators that perform these transformations like this. But don't you have to think about if it follows in the algebra such that you can represent it like this, or like the group of transformations? PROFESSOR: Yeah, indeed. So here, it's not-- here, it's not said. I assume. Here, I'm just directly telling you the result. Just when you build-- yeah, so if you want to generate the-- yeah, I'm just directly-- already directly telling you the answer. AUDIENCE: So it's the composition of-- infinitesmal transformation? PROFESSOR: Exactly. If you do the composition of the infinitesimal transformation, you will get that. Yeah, the reason I write down this explicitly-- immediately, because you should have already seen this in quantum mechanics because that's how you do the angular momentum. In quantum mechanics, when you do the angular momentum, the way-- it's the same thing. Yeah, here, we just generalize it to general symmetries. Yes? AUDIENCE: So it was some similar question, but just like-- so this is like a general procedure if you have a symmetry to get the representation on your Hilbert space, just exponentiate it like this? But like in QM, we usually just postulate that there is a unitary operator, and then demand that the group law holds. So don't we need to show that those things gives-- that this preserves the group law, or is that not-- PROFESSOR: Yeah, so this is guaranteed. So yeah, I didn't go into that, but this is guaranteed. Essentially, that's how you-- yeah, this is how you build the algebra, because the algebra is also-- the finite transformation is also built up by infinitesimal transformations. And so essentially guaranteed-- yeah, so there is a theory of groups and the algebra behind this. Yes, I didn't go into that. 
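The Lorentz case being discussed can be sketched as follows; the factor of 1/2 and the sign in the exponent depend on the convention chosen for the generators M^{μν}:

```latex
U(\Lambda) = \exp\!\Big(\tfrac{i}{2}\,\omega_{\mu\nu}\,M^{\mu\nu}\Big),
\qquad \omega_{\mu\nu} = -\,\omega_{\nu\mu},
\qquad
U(\Lambda)\,\phi(x)\,U^\dagger(\Lambda) = \phi\big(\Lambda^{-1}x\big)
```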
So some of you may already seen a little bit of this when you talk about angular momentum. So in quantum mechanics, so it's exactly the same story there. Yes? AUDIENCE: So I know that rotations correspond to angular momentum being conserved, but what is the intuition for the thing that's preserved in boosts? PROFESSOR: Yeah, it's a-- yeah, indeed, there's no very intuitive way to think about it. Also it's not that useful. You can define a boost charge. So if you consider all the spatial components, and if you consider all the edge here, then these are the angular momentum operator, and this is your standard angular momentum operator, which you have seen. And if you look at 0 i, and then that's what sometimes we call boost charge. It generates the boost. But in terms of the conserved number, we don't use it in the essential way-- say, in other contexts. AUDIENCE: Thank you. PROFESSOR: Yes? AUDIENCE: But to that question, you can think of it like a center-of-mass velocity [INAUDIBLE]. PROFESSOR: Yeah, it just does not give you something additional. Just normally, when you deal with physical problems-- say angular momentum, conservation of momentum, conservation of energy, conservation-- it's already enough. AUDIENCE: [INAUDIBLE]. PROFESSOR: Yeah, you normally don't need to use a boost charge conservation to solve your problem. Yeah, so that's why we don't see this object very often, say when you discuss special relativity or other things. But here, it plays a very important role that in quantum field theory, this is the operator which generates the boost symmetry. If you want to do a boost in your Hilbert space, that's the operator to do it. Other questions? Good. Yeah, so in your pset-- yeah, another example is the phase rotation. Let me just very quickly write here. And so another example is the phase rotation if you have a complex scalar field. 
So there, we show there's a Noether charge corresponding to the phase rotation, there's a Noether current corresponding to rotating the phase to the field. That's a symmetry. And so that will generate the charge Q corresponding to the first two. And then if you have alpha Q-- and then when you act on phi, and then you can just check explicitly that will generate the phase rotation in phi. And we're generating phase rotation phi. Any questions on this? Good. Yeah, so let's conclude our discussion of the real scalar field. And now, we can move to a new part. So before we move to new part, your last chance to ask questions about this part. OK, great. So now, let's very quickly talk about-- so let's very quickly talk about complex scalar field. And you can see the complex scalar field, we get something new. We get the concept of antiparticle. So now, let's consider the following Lagrangian density. Again, we only consider the quadratic-- the simplest-- only quadratic Lagrangian, but now the phi is complex. Phi is complex. So complex is also the same as 2 real. So you can decompose phi into its real part and imaginary part. And if you plug them in here, and then you find you just get two separate pieces, and one is the real scalar field for the real part and the one is the free scalar field for the imaginary part, because each one-- yeah, if you write phi equal to a plus ib, and here, it just become a squared plus b squared, here, it's also a plus b squared. But there's a reason we write-- yeah, so essentially, we just get two copies of our previous theory. We just get two copies of previous theory. So you say, oh, then it's trivial. Why should we just consider this-- we just get two copies of previous theory? But this advantage-- but there's some concept-- there's something conceptually new when we write in this form. there's advantage to write in this form rather than write as two separate identical scalar field theory. 
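The quadratic Lagrangian being written down, and its decomposition into two real scalars. Here the signature is taken as (−,+,+,+), consistent with the on-shell condition p² = −m² quoted later in the lecture, and the 1/√2 in the decomposition is a normalization choice (the board simply wrote phi = a + ib); this form makes the phase symmetry manifest:

```latex
\mathcal{L} = -\,\partial_\mu\phi^{*}\,\partial^{\mu}\phi \;-\; m^2\,\phi^{*}\phi,
\qquad
\phi = \tfrac{1}{\sqrt{2}}\,(\phi_1 + i\,\phi_2)
\;\Longrightarrow\;
\mathcal{L} = \sum_{j=1,2}\Big(-\tfrac{1}{2}\,\partial_\mu\phi_j\,\partial^{\mu}\phi_j - \tfrac{1}{2}\,m^2\,\phi_j^2\Big),
\qquad
\phi \to e^{\,i\alpha}\,\phi,\quad \phi^{*} \to e^{-\,i\alpha}\,\phi^{*}
```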
It's precisely in this form-- this phase symmetry is manifest. If I just write it as two separate real scalar field, then this phase symmetry is not manifest. And so here, in this form, there's a phase symmetry manifest. So phi goes to exponential i alpha phi. And so you see this-- you immediately see that symmetry. So this-- when you write it in terms of real and imaginary part, this corresponding to a rotation between the real and imaginary part. And yeah-- good? And the equation of motion is the same because it's just the-- because you can easily find the equation of motion. It's the same. And yeah, because just two copies of your previous theory. Of course, the equation of motion should be the same. And the Lagrangian density-- the momentum density for phi then becomes partial 0 phi star, and the partial phi star becomes partial 0 phi. So we treat-- we can treat phi and the phi star as two independent fields. And then the canonical momentum-- say if you take derivative with partial 0 phi, then you get the partial 0 phi star and vice versa. So the complex conjugate momentum like that. So we can now write down-- again, we can just write down the most general solutions. So the basis of solutions is the same. Same set of solutions as before. We just have u k x equal to minus i omega k t plus i k x, and then you have u k x star. Because it's the same equation, so the basis of the equation, of course, is also the same as before. Yes? AUDIENCE: So if the field is an operator, doesn't it-- if it's an observable, doesn't it have to be Hermitian? PROFESSOR: No, the fields-- it depends. So yeah, indeed if it's something directly observable, it has to be Hermitian, but that can have two observables. For example, I can combine them into a real variable. Yeah, just not necessarily the phi itself have to be directly observable. It's just a field. Yeah, the field itself don't have to always be observable by itself. 
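The canonical momenta and the basis solutions just listed, with the plane-wave normalization factors suppressed as on the board:

```latex
\pi = \frac{\partial \mathcal{L}}{\partial(\partial_0\phi)} = \partial_0\phi^{*},
\qquad
\pi^{*} = \partial_0\phi,
\qquad
u_k(x) = e^{-\,i\,\omega_k t + i\,\vec{k}\cdot\vec{x}},
\quad \omega_k = \sqrt{\vec{k}^2 + m^2}
```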
AUDIENCE: So then the conjugate momentum also is not-- PROFESSOR: Yeah. AUDIENCE: The momentum is not our three momentum? PROFESSOR: Yeah, this is not the momentum. This is just the canonical momentum for the canonical quantization. Yeah, this is not the spacetime momentum. AUDIENCE: Then is the adjoint of the field operator the same thing as the star? PROFESSOR: Yeah, here, we are talking about the-- here, we are talking about the classical theory. And then when we go to the quantum, indeed, we will replace the star by dagger. So now, we can just write down the most general solutions to this equation. Now, I think we can just write down the most general solution to this equation. So we just-- so there's one slight difference from before-- almost the same-- identical as before except with one difference. u k-- but here, previously, we have a k dagger. We have a k star, say, classically. Let me just first write classically. So previously, we have this. Previously, why we have a k star here when we have a real scalar field? It's because the field has to be real, and then we have always add to its complex conjugate. But now, the difference is that now phi is a complex, and the phi is complex. And then we no longer have the real condition. That means here, I can choose another arbitrary constant here. So this is the arbitrary constant. So by convention, let me just put a star here. It doesn't matter. It's just the name you call it. So now, the most general solution-- now, you have two sets. You have a, which is complex number. Then you have another complex number, which is b. Now, you have two sets of complex numbers. And it makes sense because when you have a complex variable, you double the degrees of freedom, and your integration constant also doubled. So b k independent of a. So now, you have two sets of variables. So you can get the phi star just by taking the complex conjugate of it-- just by taking the complex conjugate of it. 
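The most general classical solution being written down, with integration measure and normalizations suppressed; the b's are independent of the a's precisely because phi is no longer required to be real:

```latex
\phi(x) = \int \mathrm{d}^3k\;\Big[\,a_k\,u_k(x) + b_k^{*}\,u_k^{*}(x)\,\Big],
\qquad
\phi^{*}(x) = \int \mathrm{d}^3k\;\Big[\,b_k\,u_k(x) + a_k^{*}\,u_k^{*}(x)\,\Big]
```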
And then the full set of integration constant just-- you have a k-- you have a k star, you have b k, and you have b k star. So this is the classical story. So now, for quantum, we just do exactly as we did before. You just promoted all those into operating equations. So the-- so you just call this dagger. Just call this dag-- just view every one of them as operators. And now, this just becomes constant operators. And now, here, you also just call them to be dagger. And the-- yeah. So now, equal time commutation-- still, we have to impose the equal time commutation relation. So now, we have to impose the equal time commutation relations again. So this all should be simple. So we should have phi, phi dagger. They can be-- they're considered as independent operators, so they should commute with each other. They're all field variables, and the phi actually commutes with itself. So this is all evaluated at different spatial-- yeah, just to save time, I will not write this expression, but all this should be evaluated at the same time by different spatial locations such as all these equal to 0. Except the only thing which is non-zero is the phi with its own canonical momentum. So everything else will be 0. The only thing non-zero is phi, which is canonical momentum. Again, it should be given by i delta 3 x minus x prime. And then the similar thing for the phi dagger. So again, I save effort. So that's what you should impose. So now, if you plug those things in, then you can find the commutation relation between a and a dagger. Again, you find-- and similarly, with b, you just get two sides of-- two infinite family of harmonic oscillators. Previously, we have one family, and now you get two families, and all other commutators vanish. So this part is boring. You can almost guess everything without doing any calculations. You essentially guessed everything without doing any calculation. So now, again, you can write the Hamiltonian in terms of a k and b k. 
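The equal-time commutation relations being imposed, and the two families of oscillators they produce (all fields at equal time; the normalization of the delta function in the oscillator algebra depends on the measure chosen in the mode expansion):

```latex
[\,\phi(\vec{x}),\,\pi(\vec{x}\,')\,] = i\,\delta^3(\vec{x}-\vec{x}\,'),
\qquad
[\,\phi^\dagger(\vec{x}),\,\pi^\dagger(\vec{x}\,')\,] = i\,\delta^3(\vec{x}-\vec{x}\,'),
\qquad \text{all others} = 0
\;\Longrightarrow\;
[\,a_k,\,a_{k'}^\dagger\,] \propto \delta^3(\vec{k}-\vec{k}'),
\quad
[\,b_k,\,b_{k'}^\dagger\,] \propto \delta^3(\vec{k}-\vec{k}'),
\quad \text{all others} = 0
```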
Just will be as you would guess them to be. Just the harmonic oscillator corresponding to a k, harmonic oscillator corresponding to b k. And then the ground state-- again, it's the state annihilated by both of them. So this is the vacuum state. And now, you can act a k dagger on 0 and b k dagger on 0, et cetera. So now, we have two kinds of particles. So now, we have two kinds of particles generated by a k dagger and b k dagger now. So now you have two kinds of particles respectively, and both of them have the on-shell condition. Both particles have the same mass. They satisfy the P squared equal to m squared -- minus m squared. So because they all come from the same equation. So now, the question is, how do we tell them apart? Now, we have two types of particles. They have the same mass, and we-- yeah, just by definition, they don't have any-- they have same mass. They also have the same spin because they're all spin 0 scalar particles. There's no directions. And yeah, the question is, how do we tell them apart? So here is this U(1) symmetry becomes useful. Here is this U(1) symmetry becomes useful. So this U(1) so this phase rotation is a symmetry. And mathematically, this is called the U(1) transformation. So this is normally called U(1) symmetry. U(1) is just a mathematical term for phase rotation-- mathematically, it's a U(1) group. It's a mathematical term for-- so the way we take them apart is by looking at their conserved-- look at their quantum numbers. If we look at two particles, then how do we tell them apart? We look at their quantum numbers. And they're quantum numbers, so far, they look almost the same because their mass is the same. And by definition, they are scalar particles, and so they have 0-- they're spin 0-- they don't have spin. 
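The resulting Hamiltonian and spectrum, schematically (ordering constants dropped and measure suppressed, as in the lecture):

```latex
H = \int \mathrm{d}^3k\;\omega_k\Big(a_k^\dagger a_k + b_k^\dagger b_k\Big),
\qquad
a_k\,|0\rangle = b_k\,|0\rangle = 0,
\qquad
|k\rangle_a = a_k^\dagger|0\rangle,\quad |k\rangle_b = b_k^\dagger|0\rangle
```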
And so the only other conserved number-- yeah, if you want to talk about a quantum number, the quantum number has got to be conserved, because otherwise, if it changes with time, it doesn't make sense to use that number to label a particle. But we get one more conserved number corresponding to this symmetry. So from Noether's theorem, this U(1) symmetry tells us there's a conserved charge. So if you work out Noether's theorem, then you find the corresponding conserved charge, which you should have already worked out. So let me just write it classically. So I just write them as phi star. So classically, they have the following form. So at the quantum level, these become operators. But now, when we go to the quantum level, there's an operator ordering ambiguity. You see, here, the phi doesn't commute with its own conjugate momentum. And here, we have phi multiplying its conjugate momentum. And if we change the order, then that will result in a delta function. So at the quantum level, different orderings differ by some infinite constant. If you change the ordering, then you get a delta function. But the delta function, because these two are evaluated at the same x, you get the delta function evaluated at 0. So essentially, you get infinite constants. But we can fix this ambiguity by requiring your vacuum state-- your lowest energy state-- to have charge 0, because by definition, when you're at the lowest energy, there's nothing there. It's a vacuum. So that uniquely fixes the ordering. So by requiring this equal to 0, you can show that the Q has the following form.
So you may be able to guess the answer: a k dagger a k minus b k dagger b k. That's what you get. So you plug those expressions in. The time derivative gives you pi. You just plug those expressions in, and again, because this is conserved, you find all the time dependence cancels, et cetera. And then you get these two expressions up to some constant. And then this condition requires that that constant must be 0, when you write it in this form. You see this annihilates the vacuum because the a is on the right-hand side of a dagger. So this automatically annihilates the vacuum. So essentially, this is just the occupation number for a k and this is the occupation number for b k. So the occupation number, I just write as N. Yes? AUDIENCE: I still don't see how this fixes anything, because if you switch them, you still get the same issue-- PROFESSOR: Huh? AUDIENCE: For this line of argument. If you permute them, you still get the infinite constant, so I'm confused why this is considered a [INAUDIBLE]. PROFESSOR: No, if you permute them the other way, then you no longer annihilate the vacuum, and that constant remains. So only this form, with a sitting on the right-hand side of a dagger, will annihilate the vacuum. AUDIENCE: Right, but-- OK. I guess because earlier, the line of argument was, if you permute them, you would get a delta function, because you would get the same thing here, right? If you permute them, do you also-- PROFESSOR: No, no, no. Here, there is ambiguity. Here, I don't know how I should order them-- whether I should put phi to the right of pi or pi to the right of phi. Here, there's no prescription.
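The final form of the charge, with the ordering fixed by demanding that the vacuum have charge zero (measure suppressed):

```latex
Q = \int \mathrm{d}^3k\;\Big(a_k^\dagger a_k - b_k^\dagger b_k\Big) = N_a - N_b,
\qquad
Q\,|0\rangle = 0
```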
But if I impose this condition, then that fully-- then that tells you when I impose this condition, then Q must be of this form. Of course, when you permute them, it's the same thing. When you permute them, you don't change the operator. You don't change the operator. Yeah, but if you do, say-- let's do this order. If you do this order, what you get is this expression with an infinite constant. And then this condition tells you somehow you have to change the order here so that constant is 0. Yeah, when you write it in this form-- when you write it in a and b, and that constant has to be 0. Yes? AUDIENCE: So does the ordering ambiguity have any physical consequences, because besides just things that can be subtracted off? PROFESSOR: Yeah, it's-- yeah, this is a good question. Which-- it's hard to say. So most of the time, you can fix them by some physical invariant. By phys-- yeah, as here, you can fix it based on physical requirement, and then you don't have to worry about it anymore. But the ordering ambiguity requires you try to find such a physical requirement. Yeah, because otherwise, what ordering would you use? And the process of finding that requirement by itself is understanding the physics. Yes? AUDIENCE: Should we have done that with the-- when you've got the infinite energy in the previous lecture, because you know how it required the ground state to have 0 energy and then get some different ordering. PROFESSOR: Right. No, in that case, we don't have ordering ambiguity. So in that case, because we just have pi square itself plus phi squared, and plus phi squared. There's no ordering ambiguity here. And so whatever you get is whatever you get. And if you get infinite constant, then you get infinite constant. You don't have freedom. Other questions? Yes? AUDIENCE: So I don't also see how phi and phi dagger-- it's clear that they commute. 
How do you know that there's no coupling between the fields-- or when there is coupling, how do you represent it in-- PROFESSOR: Oh, this is a definition. You say where this come from? AUDIENCE: Yeah. Like, why do we know for sure that it's 0? PROFESSOR: Yeah, this is just the definition of-- this is the same. This may have nothing to do with field theory. Only this equation have to do with field theory. This have nothing to do with field theory. This is just quantum mechanics that different degrees-- different degrees freedom, they commute with each other. Yeah, just x1 commute with x2 in quantum mechanics. Yeah, phi-- here, just phi and phi dagger just is analog of x1 and x2 there. Other questions? AUDIENCE: Can every operator commutes do we still have quantum mechanics? PROFESSOR: No, it's different from that. Here, it just means that the-- here, you should view them as the variables in the configuration space. And just the-- in quantum mechanics, the variables in configuration space, they always commute with each other. Good. So now, the key thing is that here is the minus sign. So that means when you look at the commutator between Q and a, you get the a. And if you look at the commutator a with b k, you actually get minus b k. You get-- yeah, let me just do the dagger because this is used to create particle. So if you just calculate the commutator, you just find that. So that means if you look at the states defined by one particle state corresponding to a and one particle state corresponding to b, then both are eigenvectors of Q, but here with eigenvalue 1, but this one is eigenvalue minus 1. So it means they actually have opposite charge. So k and-- oh, by the way-- so let me explain a little bit of this notation. This k does not mean the magnitude of k. So this k means the-- because normally, our convention is that the four vector-- say the x mu we always just write it as x. The p mu we just write as p. So here, you should view this as a four vector. 
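The commutators stated at the end of this exchange, and the resulting charge assignments for the two one-particle states:

```latex
[\,Q,\ a_k^\dagger\,] = +\,a_k^\dagger,
\qquad
[\,Q,\ b_k^\dagger\,] = -\,b_k^\dagger
\;\Longrightarrow\;
Q\,a_k^\dagger|0\rangle = +\,a_k^\dagger|0\rangle,
\qquad
Q\,b_k^\dagger|0\rangle = -\,b_k^\dagger|0\rangle
```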
It's a four vector. It's just a shorthand notation. So the k and the k bar states then have opposite charge. They have the same mass, the same spin 0, but opposite charge. So by using this Q, we can now distinguish these two particles. So normally, because in this case all quantum numbers are the same except they have opposite charge, we call them particle and antiparticle. So this one we say creates a particle, and here, we say they create an antiparticle. So here, let me just make a remark without proving it. In any relativistic quantum field theory, you can show that any particle has an antiparticle. In the case of the real scalar field, you say we only have one particle. In that case, the particle is its own antiparticle. Good. Do you have any questions? Yes? AUDIENCE: Can we derive the fact that Q is conserved just from the number of a and the number of b being conserved separately? Because I'm having trouble seeing why conservation of Q is a non-trivial statement. PROFESSOR: Yeah, the conservation of Q follows from the symmetry. AUDIENCE: But couldn't I just-- like, the number of a particles and the number of b particles are kind of separately conserved. So couldn't I just say that Q was conserved from the very beginning? PROFESSOR: No, actually, the number of a particles and the number of b particles are not separately conserved. In principle, they can annihilate. Yeah, here, in the free theory they cannot annihilate. But if I consider a slightly more complicated theory-- say, if they are allowed to interact-- then they can actually annihilate and disappear. But in that case, this formalism still works. This symmetry is still there. So that's why the symmetry is powerful. It also applies to the case with interactions. Yeah, indeed, what you said is correct in the free theory. Yes?
AUDIENCE: So does this mean particles and antiparticles-- like, the difference between them is conserved, so they work in pairs? PROFESSOR: Yeah, essentially, their property-- it means their property almost identical except they have opposite charge. Yeah, they have essentially-- just like a real world, we have electron, we have matter, we have anti-matter. They have essentially identical properties. And if the world is made of anti-matter, everything still will behave the same, and yeah-- except just the charge is different. AUDIENCE: And the difference between the number of antiparticles and particles is conserved? PROFESSOR: It's conserved. Yeah, it's conserved. Ryan. AUDIENCE: So I guess this is the electric charge, or is this some other charge? PROFESSOR: I think some other charge. So in the case we will go to when we talk about electrons-- so electrons involving a more complicated formalism. We have to introduce particles with spin. And in that case, we also have electron and anti-electron. And in that case, it's-- the story is parallel. AUDIENCE: So this does not necessarily involve E&M or things reppelling. PROFESSOR: No. Yeah, but this is the analog of that. So the charge here, they can be some other kind of charge. Does not have to be -- yeah. AUDIENCE: So how can you get something that's physical that's conserved from a symmetry that's not really physical? Like, this is not like you can translate in space or translate in time. This is-- I don't understand. I don't know how to think about it. PROFESSOR: Yeah, that's why we call it internal symmetry. So for things that we don't have good intuition about, you just invent the new name for it-- [LAUGHTER] --and once you get used to this new name, you say, oh, I understand it. Yeah, it is more or less getting used to it. And indeed, at first sight, it's not very intuitive because it's not something you can-- yeah, from your normal experiences, et cetera. 
But yeah, this is the-- but this is-- at the fundamental level, it's not that different from energy conservation. It's just some symmetry, and then leads to some conserved quantities. Yes? AUDIENCE: If we have another conserved charge, like Q and now electric charge and-- which ones do we call, I guess, particle/antiparticle pairs? Does that make sense? PROFESSOR: Yeah, that's right. They can, in principle, have multiple charges, but the particle and antiparticle, they're always opposite. Yeah, in terms of charges they're always opposite-- whatever charges you have. AUDIENCE: What if there's two different charges that each one can have, like a set of four things? Is it a different language? PROFESSOR: Yeah, I think-- yeah, you can have them-- but for particle and antiparticle, they're always opposite. Yeah, they can have multiple charges, but they're always opposite. Other questions? Good. So this concludes our discussion of the complex field. So we have a few minutes-- then let's do to the next topic. So hopefully-- so now, let's talk about the concept called the propagators. So again, let's recall in non-relativistic quantum mechanics-- So this position vector plays a key role. So this is the eigenvector of this x, which is the position operator. So again, I will-- yeah, let me just put a hat here to emphasize this is the operator. And so this is the eigenvector of x. And because this plays a key role-- because we need to use this to define the wave function. And then the amplitude of the wave function-- the square gives us the probability of a particle at location x. So this is a crucial quantity. So this language-- normally, we talk about the Schrodinger picture. You start with psi x at t equal to 0, and then you try to find-- yeah. Or let me say, you start at psi x-- say, t prime-- at some t prime, you want to find psi x at some t. So this is the question of quantum mechanics. 
So if you know the wave function at the initial time and you want to find the wave function at some future time. So now let's imagine we consider this question-- Heisenberg picture. So in the Schrödinger picture, there's just one eigenvector-- yeah, just one set of eigenvectors for x for different eigenvalues. But for the Heisenberg picture now, your position operator now depends on time. So the position operator at different times, they are different. So now, if you want to talk about the position eigenvector, now you need the label. So now, the position eigenvector-- it's a t label. This t does not mean that this vector evolves with time. It's just this is the label. This means that this is the eigenvector of position vector at time t. This just tells you this is the eigenvector at time t. And now, the question of-- this question of-- this is the question of quantum mechanics-- then can be reformulated into a different equation. So in this story, the question of quantum mechanics becomes the following-- what is the value of this object? Suppose you are in the eigenstate-- you are the eigenvector-- you are in the position eigenvector-- you are in the position eigenstate at time t prime with the value x prime. And what's the amplitude for you to go to location x at time t? And I remember, this does not involve time evolution. This is just-- because one is labeled in different eigenvector at different time. So if you know this object, this is called-- this will be normally denoted G. G-- this is called the propagator. This is called the propagator. So if you know the propagator of a quantum system, then you have full knowledge of that system, so you already solved that system. Why? Because say let's imagine we want to find psi t x wave function at time t, and then by definition, this is given by this. So this is the Heisenberg picture definition. It's the psi overlap with the position vector at time t. And now, this is-- now, we can insert a complete set of state. 
I just insert a complete set here-- this is just the identity. And then this object is just G. So you just get G x t x prime t prime, times psi t prime x prime. And this last factor is just the wave function at t prime. So now, given the initial wave function, if you know the propagator, you just need to do an integral. Then you find the wave function here. So the full knowledge of your quantum mechanical system essentially is encoded in this propagator. So yeah, now we want to ask, what is the analog of this object in relativistic quantum field theory? In quantum field theory, how do we define a propagator, and what is the analog of the position eigenstate? And we will discuss that next time. |
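The relations just described can be written out compactly (my own transcription of the board expressions, in standard notation):

```latex
% Propagator in the Heisenberg picture, with |x,t> the eigenvector of X(t):
G(x,t;x',t') = \langle x,t \mid x',t' \rangle .

% Inserting the completeness relation \int dx'\, |x',t'\rangle\langle x',t'| = 1:
\psi(t,x) = \langle x,t \mid \psi \rangle
          = \int dx'\, \langle x,t \mid x',t' \rangle \langle x',t' \mid \psi \rangle
          = \int dx'\, G(x,t;x',t')\, \psi(t',x') .
```

So knowing G turns time evolution into a single integral against the initial wave function, which is the sense in which the propagator encodes the full dynamics.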
MIT_8323_Relativistic_Quantum_Field_Theory_I_Spring_2023 | Lecture_25_Elementary_Processes_in_QED_II.txt | [SQUEAKING] [RUSTLING] [CLICKING] PROFESSOR: So let's start. So let me just clarify one question, which was asked last time. So when we calculate the total cross section for, say, this e plus e minus to mu plus mu minus, then we find that there's a funny fact-- yeah, let me just write down-- there's a factor like the square root of E squared minus m prime squared, divided by the square root of E squared minus m squared-- with m prime the muon mass and m the electron mass-- and then times something else. And so there's a funny feature that this factor seems to blow up when E is equal to m. So if you decrease E to the value of m, then this seemingly blows up, OK. But of course this never happens in the real situation, because the muon mass is much larger than the electron mass. So the upstairs will go to 0 before this blows up. So that's why people never cared about it, OK, including myself. But you can ask the question-- suppose the electron were more massive than the muon. Then you would reach the zero of the denominator before you reach the zero of the numerator, and then you would see something blow up. OK, then it's curious. There's a very simple mathematical reason for this. And the reason is that the cross section is defined-- remember, the cross section is defined with a 1 over the flux. And the flux is the density times the velocity. The density is the same-- it's just one particle per volume. And then the velocity goes to 0. This blow-up comes precisely when the velocity goes to 0, because when the velocity becomes very, very small, then your flux becomes very small. Then this becomes big. So mathematically, that's the reason why this becomes very big, OK, when the velocity goes to 0. But still it is a little bit funny, I should admit. So when the velocity is equal to 0, of course, it's unphysical, because when the velocity is 0, they just never scatter, OK.
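In formulas, the flux argument just described reads (my paraphrase of the board):

```latex
\sigma \;\propto\; \frac{1}{F}, \qquad
F = \rho\, v, \qquad
v = \frac{|\vec p\,|}{E} = \frac{\sqrt{E^2 - m^2}}{E} \;\longrightarrow\; 0
\quad \text{as } E \to m,
```

so the 1/F in the definition of the cross section diverges as the incoming velocity goes to zero, even though nothing physical is blowing up.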
So this divergence is never an issue. But you can ask the question-- why, physically, when you decrease v, should the cross section become bigger and bigger? Mathematically it's due to the flux, just due to the way the cross section is defined. But I also mentioned before that the cross section is supposed to measure the effective area of the interaction. And then why should that depend on the velocity, OK? And so for this aspect I don't have a very good explanation-- but this is the mathematical reason for it. OK, good. Yeah, so this just clarifies the question from last time. OK, so now let's talk about crossing symmetry. So let's consider two processes. One is e plus e minus goes to mu plus mu minus, as we just discussed, with the Feynman diagram going like this. So this is the-- OK, so this is the e minus, e plus. This is mu minus, mu plus, OK. So there's also another process. Let's consider another process, which is e minus with mu minus going to e minus plus mu minus. And the Feynman diagram for this is given by this. OK, so the e minus and mu minus are in the initial state now. And this is e minus. And this is mu minus. OK, again, only one diagram contributes. OK. So if you compare a and b, you can see that the diagram for b is essentially the diagram for a if you view it sideways. OK, if you view it sideways-- and then you see this goes to that. And then this becomes the final state. And the incoming e plus essentially becomes an outgoing e minus. And this just becomes that, OK. So the difference between these two is that the e plus in the initial state of a goes to the e minus in the final state of b. Similarly, the mu plus in the final state of a becomes the mu minus in the initial state of b. Yeah, so essentially you take the e plus going to the other side, and it becomes e minus, and then take the mu plus going to this side, and it becomes mu minus, OK.
So essentially you just exchange that. So now let's label the quantum numbers. Let's call this p1, r1. Call this p2, r2. And call this one k1, s1. So these are the momenta and polarizations. And this is k2, s2. And now similarly, I label here p1, r1. Label this one p2, r2 bar. So this is for the antiparticle. And so this one would be k1, s1. And then this will be k2, s2 bar, OK. So now if we look at this map-- so it's like, in the process a, we take this initial state, p2, r2 bar, and replace it by minus k2, s2. Minus k2 means that this e plus now becomes the final state. So from the momentum direction, this one was going in, and this one will be going out. So we need to change the sign. OK, we need to change the sign. And similarly, the k2, s2 bar then goes to minus p2, r2. If you make this replacement, then we get the process b, OK. You get the process b. So you just make the replacement, and then the process a will go to the process b. So suppose you forget about those polarizations. Suppose we were talking about scalar particles-- so there's no polarization. Then for scalars, this is trivial-- we can just do the replacement. You can trivially see that the amplitude for the process a, with p1, minus k2 and k1, minus p2, will be just equal to the amplitude for the process b with p1, p2, k1, k2. OK, so we just rename the momenta of your process a. And then you will just get essentially the amplitude for b, because the only thing you need to do is exchange the names of the momenta, OK. So then they will trivially be the same. But for fermions, we should worry about-- also look at the wave functions associated with the external legs. OK, so in a-- let me just write down those wave functions explicitly. So in a, the p2, r2 bar is associated with the wave function v bar r2, p2. OK, so this is for the e plus.
And the k2, s2 bar is related to v s2, k2. OK, so this is related to the mu plus, according to our previous rule. But in b, the wave functions for these two-- so for the k2, s2, the corresponding one is u s2, k2. This corresponds to the e minus. And for the p2, r2, the corresponding one is u r2, p2, for the mu minus. OK. So now you see, even when you make the label replacement-- suppose you make the label replacement from here, replacing this by that-- you're not going to change the wave function from v to u. OK, so the wave function actually is different. So this does not work if you have fermions, because the wave function changes. It changes from v to u in this case, and here again also v to u, OK. But actually this is not a problem if we consider the unpolarized spin sum-- it actually still works, OK, for unpolarized. So if you have an unpolarized situation-- remember, we need to sum over all the spins and consider M squared, summed over all the spins, OK. So we will involve this amplitude with itself. For example, when you sum over all the r2s, then we will have something like this. So, for example, when we sum over r2, we will have a combination like v, r2, p2-- that's the calculation we did before-- times v bar, r2, p2, OK. And then this one will give you just minus i p2 slash plus m of the electron, OK. And now you can just do the replacement. Let's just do what was written-- replace p2 by minus k2. So if you do the replacement, then that goes to minus i k2 slash plus still the electron mass, OK. And then this is the same as minus the sum over s2 of u, s2, k2, times u bar, s2, k2, OK. So despite the wave functions being different, after you do the spin sum, you actually get the same answer-- this one is the same as minus that one.
OK, so in this case, after you do the spin sum, each replacement of momentum just gives us a minus sign, OK, when you calculate this spin-summed M squared. But since we need to replace two of them, two minus signs give a positive sign. So we conclude that the sum over spins of Ma squared, evaluated at p1, minus k2 and k1, minus p2, is the same as the sum over spins of the amplitude for b squared, evaluated at p1, p2, k1, k2, OK. So there's a simple relation between the amplitudes squared when you do the spin sum, and we just need to rename the momenta, OK. Rename the momenta. Good? Any questions on this? So for the unpolarized amplitude, we still have this very nice relation between these two processes, OK. If you calculated one of them, then you don't need to calculate the other. You can just immediately get the answer-- you just change a couple of momenta. But if you consider the polarized amplitude, the story is not as simple. Now you do have a problem, because now the wave functions are different, OK. But nevertheless, you can choose a different basis, OK. By choosing an appropriate basis, you can still directly relate the amplitudes, OK. OK, so, yeah, I will not go into that. But it's possible-- say, you choose a basis of v2 and u2, and then somehow you can relate them to each other, OK. So this relation between the amplitudes of a and b is called crossing symmetry. Yeah, let me just call this star. So star is called crossing symmetry. Calling it a symmetry is actually a misnomer, because this is not a symmetry, OK. This is just a relation between the amplitudes of different processes. OK, so this is not a symmetry. So the "symmetry" should be put in quotes, OK. So this just expresses the relation of one amplitude to another-- yeah, just a relation between the amplitude of one process and the amplitude of another process.
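In one common mostly-plus convention (Srednicki's; the factors of i on the board may differ from this), the sign bookkeeping just described can be summarized as:

```latex
% spin sums for u and v spinors:
\sum_{s} u_s(p)\,\bar u_s(p) = -\slashed p + m, \qquad
\sum_{s} v_s(p)\,\bar v_s(p) = -\slashed p - m,

% so each momentum replacement p \to -k costs one minus sign:
\sum_{r_2} v_{r_2}(p_2)\,\bar v_{r_2}(p_2)\Big|_{p_2 \to -k_2}
  = \slashed k_2 - m
  = -\sum_{s_2} u_{s_2}(k_2)\,\bar u_{s_2}(k_2),

% and with two such replacements, (-1)^2 = +1:
\sum_{\rm spins} \big|\mathcal M_a(p_1,-k_2;k_1,-p_2)\big|^2
  = \sum_{\rm spins} \big|\mathcal M_b(p_1,p_2;k_1,k_2)\big|^2 .
```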
And the relation comes from a very simple fact. OK, so if we consider this fermionic field, psi-- psi can play two roles. Remember, psi is a plus b dagger, OK. So it can either annihilate an initial particle-- that's the a part-- or create a final antiparticle-- that's the b dagger part. OK, so let me just elaborate. So if you have a particle in the initial state, then a can annihilate that particle. But the b dagger part can instead create an antiparticle in the final state. Yeah, so in such a process, X plus a goes to Y-- the X and Y are some combinations of particles-- and then this just relates to X goes to Y plus a bar, OK. So here X and Y are just some collections of particles, and a is just some particle, OK. OK, so just to confirm that. Good. Any questions on this? Yes? AUDIENCE: [INAUDIBLE] e minus or mu? PROFESSOR: Yeah, just say you have one field for e. You have one field for mu. Yes? AUDIENCE: [INAUDIBLE] PROFESSOR: Sorry? AUDIENCE: [INAUDIBLE] PROFESSOR: Sorry, say it again. AUDIENCE: The process, like for Y, can we think of it as-- first, X to Y and a bar, then a bar, like, annihilates or does something like that? PROFESSOR: Yeah, heuristically you may-- but it's simpler than that. Yeah, it's simpler than that. Yes? AUDIENCE: Just regarding energy conservation. So if you can keep doing that, can't we just say that out of nothing comes Y plus a bar plus X bar? PROFESSOR: Yeah, so that's why you have to change the momentum. So some momentum for this process corresponds to some momentum for that process. Yeah, we have to switch the initial and final momentum. Yeah, it's not to say these two processes are the same.
It's just that this process, for one set of momenta, has the same amplitude as the other process for some other set of momenta. Yeah, there's a relation between them. Other questions? OK, so now let's consider another important process, called Compton scattering. So Compton scattering played a very important role in the early days of physics-- it is one of the early experiments showing that the microscopic world is governed by quantum mechanics rather than classical mechanics. So now let's consider how to calculate the physics for this process. So Compton scattering is the process where a photon hits an electron, and then you get another photon, and you get another electron, OK. And so we can draw the Feynman diagrams for it. So one Feynman diagram is the following. Let's just imagine you have a fermionic trajectory. So you can imagine at some point there's a photon coming in. OK, so this is the electron line. For the electron line, the arrow is the charge line. For the photon line, the arrow should be understood just as the momentum, OK. And then you emit another photon as a final state, OK. So this is the simplest Feynman diagram for this process. But actually there are two diagrams, because for this electron line, you can also first emit a photon and then absorb one. You can also have that, OK. So you have two. And these two diagrams are not the same, OK. So now let me put some labels on the diagram. So, again, the fermion is p1, r1. So electron, initial state. And let's put its final state to be k1, s1. And the photon, let's put its initial state to be p2, alpha. So alpha now is a polarization label for the photon-- call it alpha 1. And then this one is called k2, alpha 2. And then label this index to be nu and this index to be mu, OK, because the photon carries a vector index, because the polarization carries a vector index. So there's a mu here.
So I always imagine the momentum is coming in for the initial state, and the momentum comes out for the final state, OK. And for this diagram, it's the same thing. So I have p1-- so I label the p1, r1. So this is the p2, alpha 1. And so this is k1, s1. And this is k2, alpha 2. The only difference is that the roles of mu and nu are switched. So for the outgoing leg it's mu. So mu now is here. And this one is nu here, OK. So nu is associated with the polarization of the incoming photon, and mu is the polarization associated with the outgoing photon. So these are my Feynman diagrams. So this process, compared to the e plus e minus one we considered, has some new elements. So that's why this is a good example to look at. There are two new elements. First, in this example, this propagator is a fermionic one. So now we have an intermediate fermionic propagator, OK. And the momentum for this one-- so let's call it q1, equal to k1 plus k2. So this is p1 plus p2. But this one will have a different momentum. So here let's call it q2. So what's the momentum for this one, if I draw the momentum going up? Yeah, so let me put it here. So what would be the momentum of this one? Let's call it q2. Can you read the momentum of this intermediate line from the diagram? Yeah, it's p1 minus k2, because we have p1 coming in and then k2 coming out. So that's the momentum, OK. And in that one, you have p1, and then you have the p2 coming in, and so this is p1 plus p2. OK, so the first new element is that now we have a fermionic propagator as an intermediate state. And the second new element is that now we have photons in the external state. OK, so these are the new elements compared to this e plus e minus example. But still, using our previous rules, we can immediately write down the amplitude, OK. I think I may not have enough space. Yeah, let me try-- so let me start from here. So the amplitude, again, is i times the-- this i is not important, but nevertheless, let me just write it down.
So the amplitude is given by-- so we follow-- you see, we follow the fermionic line. So here there's one fermionic line, OK. And so we should follow that fermionic line. And then we also have the photon polarizations. OK, so let's first write down the photon polarizations. So for this photon final state, we just have epsilon, mu, alpha 2, star, of k2, OK. So, remember, for the photon in the final state, we need to put the star. And also, remember, the mu is associated with the photon in the final state. And then we have epsilon nu for the initial state of the photon, alpha 1, of p2. OK, so this is the photon polarization factor. So these two factors are the same for both diagrams-- for both diagrams we are not changing the external photon states, OK. So these two factors are the same, OK. And now the rest-- so now let's look at this diagram and follow this fermionic line. OK, so we start here and then go backwards. So we first have u bar, s1, k1, for this electron in the final state. And then we have this vertex, which should be minus ie gamma mu. All this ordering is important, because they are all matrices, OK. They all have spinor indices. So this is a matrix. And then we have this fermionic propagator, which is 1 over i q1 slash plus m, minus i epsilon, OK. So I did the final state, the intermediate propagator, and then I have another vertex. And now it's nu. And then I have the last factor, u r1, p1, for the initial electron. OK, so that's that diagram. And now for this diagram, we do the same thing. So the final state, again, is the same-- it's the u bar, s1, k1. But now here you have minus ie gamma nu. So now the order changed, OK. And then you have this fermionic propagator, which is 1 over i q2 slash plus m, minus i epsilon. And then you have minus ie gamma mu from here. And then you have the initial state, u r1, p1, OK. Yes?
AUDIENCE: Where did you get these two Feynman diagrams? By doing [INAUDIBLE] which one gets absorbed or-- but if we were working with photons, it doesn't matter. It seems like the real difference is not the kinematic difference of which one gets absorbed. It's the fact that we have this vector that describes the polarization of the photons, right? PROFESSOR: No, these are just two inequivalent Feynman diagrams. AUDIENCE: But that's because of mu and nu, because they don't-- when you were writing down the Feynman diagram. PROFESSOR: No, no, no. Even if they were not vectors, they would still be inequivalent diagrams, because the momentum here is different. The momentum here is different. So they're still inequivalent diagrams. OK? So these are the amplitudes. So again, let's look at the unpolarized cross section. So let's look at the unpolarized situation. OK, so, again, we need to average over the initial spins and sum over the final spins. The difference now is that we also need to average and sum over the photon polarization indices, alpha 1 and alpha 2. So alpha 1 and alpha 2-- they only have two polarizations. So, again, when you average over alpha 1, you get a factor of 1/2, OK. So still we get just a factor of one quarter times the sum over all the spins of M squared. So this is just the same as the sum over, say, alpha 1, alpha 2, r1, s1, of M squared. And each photon index takes values 1 and 2, OK, because the photon also only has two polarizations. So now, I will not do the complete calculation of this. OK, so you just square it-- you use the trick, and you just square it, and then you just try to calculate it. And to calculate this guy, we need to use some more tricks, OK. So I will just explain what are the new tricks needed to compute this guy. We will not actually do the full calculation-- I'll just do the general setup.
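Putting the pieces just listed together, the two-diagram amplitude has the schematic form below (my own compilation of the board expressions; the i and sign conventions follow the propagator as quoted in this lecture):

```latex
i\mathcal M =
\epsilon^{\alpha_2 *}_{\mu}(k_2)\,\epsilon^{\alpha_1}_{\nu}(p_2)\;
\bar u_{s_1}(k_1)\Big[
(-ie\gamma^{\mu})\,\frac{1}{i\slashed q_1 + m - i\epsilon}\,(-ie\gamma^{\nu})
\;+\;
(-ie\gamma^{\nu})\,\frac{1}{i\slashed q_2 + m - i\epsilon}\,(-ie\gamma^{\mu})
\Big]\, u_{r_1}(p_1),

\qquad q_1 = p_1 + p_2, \qquad q_2 = p_1 - k_2 .
```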
So the new elements we need-- so there are a few new tricks needed in order to do the calculation. So the first trick is how you treat this fermionic propagator. So we can just rewrite it-- yeah, this is the inverse of a matrix. Say we have a propagator like this: 1 over i k slash plus m, minus i epsilon. We will not worry about the epsilon, because the denominator never hits zero here. So this is the same as minus i k slash plus m, divided by k squared plus m squared, OK. It's just because of the familiar relation we used before, that i k slash plus m times i k slash minus m is equal to minus k squared minus m squared. So the inverse of this matrix is just given by that matrix, OK. So then you can use this to simplify that expression a little bit, OK. So this is the first trick. And the second trick is used when doing the spin sum. So for fermions, for the spinors, we use the same trick as before, the one we used for the last example. But for photons, there are some new elements, OK. So now let's discuss how we treat the photons. And the way we treat the photons actually involves some important physics. So let me explain a little bit how we do that. OK. So now, if you look at the structure of this amplitude-- let's focus on one of the photons. It doesn't matter which one-- say, let's look at this one, the initial photon. The whole amplitude is a scalar, so the amplitude has the structure epsilon nu alpha times M nu. OK, so this is the polarization, and the rest I just call M nu, because the index has to be contracted, OK. So now I do the spin sum.
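This propagator inversion is easy to check numerically. The sketch below (my own, not from the lecture) builds one explicit representation of gamma matrices obeying the mostly-plus Clifford algebra {γ^μ, γ^ν} = 2η^{μν} with η = diag(−1,+1,+1,+1), and verifies both the product identity and the inverse for a generic momentum:

```python
import numpy as np

# Standard Dirac-basis gamma matrices (mostly-minus algebra), then multiplied
# by i to realize the mostly-plus algebra used in lecture. The particular
# representation is an arbitrary choice made only for this check.
I2, Z2 = np.eye(2), np.zeros((2, 2))
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]
g = [np.block([[I2, Z2], [Z2, -I2]]).astype(complex)]       # gamma^0
g += [np.block([[Z2, s], [-s, Z2]]) for s in sig]           # gamma^1,2,3
gam = [1j * gmu for gmu in g]                               # mostly-plus gammas

eta = np.diag([-1.0, 1.0, 1.0, 1.0])
m = 0.7
k = np.array([1.3, 0.4, -0.2, 0.9])       # generic (off-shell) momentum k^mu
k2 = k @ eta @ k                           # k^2 = eta_{mu nu} k^mu k^nu
kslash = sum((eta @ k)[mu] * gam[mu] for mu in range(4))   # k_mu gamma^mu
I4 = np.eye(4)

# (i kslash + m)(i kslash - m) = -(k^2 + m^2) * identity
assert np.allclose((1j * kslash + m * I4) @ (1j * kslash - m * I4),
                   -(k2 + m**2) * I4)
# hence 1/(i kslash + m) = (-i kslash + m)/(k^2 + m^2)
assert np.allclose(np.linalg.inv(1j * kslash + m * I4),
                   (-1j * kslash + m * I4) / (k2 + m**2))
print("propagator inversion checks out")
```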
So if I now do the spin sum over alpha-- I just have the sum over alpha equal to 1, 2, the spin sum relevant for this photon. And then I have epsilon mu alpha star, epsilon nu alpha, and then I have M mu star and M nu. OK, I just take the square of it. So now, to do the spin sum, we look at this object-- the sum over alpha of epsilon mu alpha star times epsilon nu alpha. So do you remember what the sum of this object is? Yeah, exactly, the transverse projector. OK, because the physical photon polarization is projected to the transverse space, OK. So this gives you the transverse projector. So the sum over alpha equal to 1, 2 of epsilon mu alpha star, epsilon nu alpha gives you the transverse projector. But the transverse projector is a little bit awkward to work with, OK. So the claim is that I can actually replace the sum over alpha equal to 1, 2 by the sum over alpha for all polarizations, OK. So now, when I sum over all polarizations-- if I sum over all alpha, then what is this? Do you remember what this is? You just get eta mu nu. OK, and this one is much simpler. So now the claim is that we can just simply replace this sum by eta mu nu. OK, we can replace this by eta mu nu. Yeah, so this is the claim. OK, so this amounts to the following. So this is equivalent to the following. If you look at the difference between the sum over alpha equal to 1, 2 and the sum over all alpha, the difference is just the sum over alpha equal to 0 and 3. So, in other words, we claim that the sum over alpha equal to 0 and 3 of epsilon mu alpha star, epsilon nu alpha, M mu star, M nu is 0, OK. So if I write this down explicitly, this corresponds to: epsilon mu 0 M mu squared is equal to epsilon mu 3 M mu squared, OK.
So this claim, you see, is equivalent to that claim. OK, so these two will cancel each other. The equality here is because of the signature-- for the zeroth component, there's a minus sign, OK. So it tells you that the sum of these two will actually cancel each other, OK. So now let me-- yeah, now let me try to prove this fact. OK, it turns out this fact is actually very important. It contains very important physics, OK. So do you have any questions before I do that? OK, good. So now let me try to show this is true, OK. So let me just remind you of a convention: epsilon mu 0 is equal to just 1 and then 0, and epsilon mu 3 is equal to 0 and then the unit vector in the direction of k. OK, it's in the direction of k. And the epsilon mu 1, 2 will be orthogonal to both of them, OK. So now, to see this equation-- let me call this equation star, star. So let's consider just a general physical process. We can actually make a general statement, OK, not just restricted to the particular example we have here. So let's just consider some general process. So you have a bunch of initial states and some final states, OK. But imagine one of the initial states is a photon-- it's the one we are interested in here, with polarization k, alpha. And so now the amplitude just becomes epsilon mu alpha times M mu, and this M mu is the same as the-- you have some final state, and then you have some initial state, but within the initial state, there's a k, alpha state, OK. And the k, alpha state corresponds to the photon. But now, remember-- which we discussed in your homework, OK, so that's the purpose of putting it in your homework-- the state k, alpha, which is defined by a transverse polarization, is only a representative in an equivalence class of states. And within this equivalence class, they are related by what are called null states, OK. So we can shift it by a null state, and the physics will be the same.
So, now, remember, the null state-- it's like a gauge transformation. So shifting by the null state corresponds to-- in this process it corresponds to changing your polarization to that of a null state. And for a null state, the feature is that its polarization is proportional to the momentum. OK, so that's how we-- remember, we showed that-- because this is a gauge transformation. OK, this is like a gauge transformation. And we also discussed that the reason this is an equivalence class is that when you shift by a null state, the overlap of the null state with any state is 0, OK-- with any state is 0. And so you can shift by a null state. So from the fact that you can shift this by a null state and your physics is the same-- so now you see, this equation here implies that M mu k mu must be zero, OK. And this is a very important identity. This is called the Ward identity. This is a very important feature, which can simplify your calculation a lot, OK. So the M mu must satisfy the feature that when it contracts with k mu, you get 0. So now, remember the k mu-- this is an on-shell external state, so it's null. And if I take a factor omega out, this is just given by minus 1 and then the unit vector k hat-- and this is a fact I think you also used in your pset-- and this thing is just the same as the difference between the third polarization vector and the zeroth one from what we wrote here. I think I get the sign-- yeah, the sign does not matter very much here. For my signs this would be minus epsilon 0 plus epsilon 3. Yeah, that's right. And then this equation just implies that epsilon mu 0 M mu is equal to epsilon mu 3 M mu. So that tells you that epsilon mu 0 M mu squared is equal to epsilon mu 3 M mu squared. OK, so this is the one we wanted to prove. So this simplifies life a lot.
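This replacement can be sanity-checked numerically. The sketch below (my own construction, not from the lecture) builds a null photon momentum along z, a random amplitude vector obeying the Ward identity, and verifies that the transverse-only polarization sum equals the full eta contraction in the mostly-plus metric:

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])   # mostly-plus metric
omega = 2.0
k = np.array([omega, 0.0, 0.0, omega])  # null photon momentum along z

eps = {0: np.array([1, 0, 0, 0], dtype=complex),  # "timelike" polarization
       1: np.array([0, 1, 0, 0], dtype=complex),  # transverse
       2: np.array([0, 0, 1, 0], dtype=complex),  # transverse
       3: np.array([0, 0, 0, 1], dtype=complex)}  # longitudinal (along k-hat)

# Build a random complex M^mu obeying the Ward identity k_mu M^mu = 0.
# Here k_mu M^mu = -omega M^0 + omega M^3, so set M^0 = M^3.
rng = np.random.default_rng(0)
M = rng.normal(size=4) + 1j * rng.normal(size=4)
M[0] = M[3]
assert abs(k @ eta @ M) < 1e-12          # Ward identity holds

# sum over the two transverse polarizations of |eps . M|^2 ...
transverse = sum(abs(np.conj(eps[a]) @ eta @ M) ** 2 for a in (1, 2))
# ... equals the full contraction eta_{mu nu} M^{mu*} M^{nu}
full = np.conj(M) @ eta @ M
assert np.isclose(transverse, full.real)
print("transverse sum equals eta contraction")
```

The 0 and 3 contributions cancel against each other because of the minus sign in the time component of the metric, exactly as argued on the board.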
So now we have shown that we can actually make this replacement, because the 0 and the 3 components added together become actually a null vector, and that does not contribute. So now, for our amplitude, we have two external photons. So we can write it as-- OK, so for simplicity, back to the Compton story. The amplitude then has the form: epsilon mu, alpha 2, star, and epsilon nu, alpha 1, and then this multiplies some T mu nu. OK, so I call the rest T mu nu. So now, when I do the spin sum of M squared, I can just use this twice, once for each epsilon. Let me just write one more step. I will get the sum over alpha 1, alpha 2 equal to 1 and 2 of epsilon mu, alpha 2, star, epsilon lambda, alpha 2, epsilon nu, alpha 1, epsilon rho, alpha 1, star, and then I have T mu nu, T lambda rho star. OK, just the square of this guy. And now I can replace this by eta mu lambda, and I can replace this by eta nu rho. And then this just becomes eta mu lambda, eta nu rho, T mu nu, T lambda rho star, OK. So this just makes life much easier. Yeah, so essentially these are the two important tricks one needs for the photons in this Compton case, compared with the earlier story. And this part will be just the spin sum for the fermions, and for that we can just use the same trick as we did last time, OK. So we will not repeat that, OK. So now let me just write down the final answer. So before writing down the final answer, let me just define the frame. So for Compton scattering, it's often convenient to consider the rest frame of the electron. Not of the photon, because there's no rest frame for a photon, OK. So it's often convenient to consider the rest frame of the electron. So let's just consider the picture.
When you consider the rest frame of the electron-- so this is often roughly the way the experiment is also done. You can essentially consider, say, electrons in matter, which don't have much velocity, and then you have a photon come in. So imagine you have an electron here, and you have a photon come in. So let's call the incoming axis the z-axis. OK, and then after scattering, the photon, say, will be scattered into this direction. So let's call this angle theta. OK. So in this setup, the momentum of the electron, p1, is just m and then 0-- at rest, just the mass in the time component. And the momentum of the incoming photon, p2, would be just omega, 0, 0, omega. OK, so it only has momentum in the z direction. Then the final momentum of the electron is k1. So this can be something-- OK, so let's not worry about it. Then for the photon-- the final photon momentum-- let's call it k2. So k2 we can parameterize by omega prime, and then omega prime times the unit vector in the spatial direction for k2-- let's call it n. OK, so this is the direction of n, which is at angle theta with respect to the z direction. And then the k1 can just be obtained by momentum conservation from these quantities, OK. So k1 would be just p1 plus p2 minus k2. And from the fact that the final electron is on shell-- k1 squared equal to minus m squared-- then p1 plus p2 minus k2 should satisfy the constraint that its square is equal to minus m squared, OK. And from this equation, you can actually solve for omega prime in terms of omega. OK, so we will not write down this equation explicitly-- you can easily check it yourself. So you find the omega prime is equal to omega, divided by 1 plus omega over m times 1 minus cosine theta.
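The on-shell constraint and the resulting formula for omega prime can be verified numerically. The sketch below is my own check (the numerical values are arbitrary test inputs, not from the lecture), working in the mostly-plus metric used in this course:

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])   # mostly-plus metric
m, omega, theta = 0.511, 0.3, 1.1       # electron mass (MeV-ish), arbitrary photon

# Compton frequency shift, as quoted in lecture:
omega_p = omega / (1 + (omega / m) * (1 - np.cos(theta)))

p1 = np.array([m, 0.0, 0.0, 0.0])                       # electron at rest
p2 = np.array([omega, 0.0, 0.0, omega])                 # photon along z
k2 = omega_p * np.array([1, np.sin(theta), 0, np.cos(theta)])  # scattered photon
k1 = p1 + p2 - k2                        # outgoing electron, by momentum conservation

# The final electron stays on shell: (p1 + p2 - k2)^2 = -m^2
assert np.isclose(k1 @ eta @ k1, -m**2)
print("Compton kinematics consistent; omega' =", omega_p)
```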
OK, so this is the final frequency of the photon, expressed in terms of the initial frequency of the photon, the mass of the electron, and the scattering angle theta. So now let me just write down the final answer with this setup. You find that in the rest frame of the electron, the differential cross section can be written-- again, the phi direction is symmetric, so we only worry about the theta direction-- as d sigma over d cosine theta equal to pi alpha squared, with alpha being the fine structure constant, divided by m squared, times (omega prime over omega) squared, times (omega prime over omega, plus omega over omega prime, minus sine squared theta). OK, so this is the final answer for the Compton scattering cross section. This formula actually contains a lot of physics, so let me just describe some of the physics here. Let's first consider the regime where the photons have very low energy-- just low-energy photons shining on the electron. Suppose the photon frequency is much smaller than the electron mass. Now if you look at this formula: if the initial photon energy is much, much smaller than the mass, then the factor (omega over m)(1 minus cosine theta) is approximately 0, and you find in this regime that omega prime is approximately equal to omega. So in this regime the frequency actually does not change when the photon scatters off the electron, OK. And with omega prime equal to omega, the factor (omega prime over omega) squared becomes 1, and omega prime over omega plus omega over omega prime minus sine squared theta becomes 2 minus sine squared theta, which equals 1 plus cosine squared theta. And then you find that d sigma over d cosine theta is just equal to pi alpha squared divided by m squared, times 1 plus cosine squared theta. OK, so this is a famous formula, known for more than 100 years.
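The final formula and its low-energy limit can be sketched in a few lines; the function names and the sample angle below are my own choices for illustration:

```python
import numpy as np

ALPHA = 1.0 / 137.035999   # fine structure constant

def omega_prime(omega, m, theta):
    return omega / (1.0 + (omega / m) * (1.0 - np.cos(theta)))

def dsigma_dcos(omega, m, theta):
    """Compton dsigma/dcos(theta) in the electron rest frame (hbar = c = 1)."""
    r = omega_prime(omega, m, theta) / omega
    return np.pi * ALPHA**2 / m**2 * r**2 * (r + 1.0 / r - np.sin(theta)**2)

def thomson(m, theta):
    """Low-energy limit: pi alpha^2 / m^2 * (1 + cos^2 theta)."""
    return np.pi * ALPHA**2 / m**2 * (1.0 + np.cos(theta)**2)

m, theta = 1.0, 1.1   # hypothetical sample point
print(dsigma_dcos(1e-6 * m, m, theta) / thomson(m, theta))  # -> ~1 (Thomson limit)
print(dsigma_dcos(10.0 * m, m, theta) / thomson(m, theta))  # suppressed at high energy
```

The first ratio tending to 1 is the statement that the quantum result reduces to the classical cross section when omega is much smaller than m.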
Because this is the classical result-- this is called the Thomson cross section, derived from classical electrodynamics. So this is a result already known in the 19th century from classical electrodynamics. So how did people derive this formula in classical electrodynamics? In classical electrodynamics, you do it this way. Light is believed to be a wave, an electromagnetic wave, OK. So essentially it's an electromagnetic wave: the electric field has some polarization and some magnitude, and it's a plane wave. OK. So from classical electrodynamics, the light is just a wave. When you shine the light on a material, the light will scatter, and the physical description of the scattering is the following. This is an oscillating electric field. So if you have a charged particle like an electron, it will oscillate, because it has a charge sitting in such an E field. So you can approximate the electron in the matter just by a forced harmonic oscillator. OK, it's a forced oscillation. So you should have learned in 8.03 that under a forced oscillation, the electron will have an acceleration essentially proportional to the driving field, going like the exponential of i omega t plus i k x. And then, from classical electrodynamics, because this is accelerated motion-- this is an accelerated charge-- an accelerated charge in classical electrodynamics will emit electromagnetic waves. So this will radiate. So this will lead to radiation with frequency omega, essentially controlled by this acceleration, OK. So this is the classical result in classical electrodynamics: such an oscillator will emit light with frequency omega.
So that's the scattering process from the classical point of view: drive the oscillator, and the oscillator will emit, OK. And if you go through this calculation, you precisely find this formula. So in this story, regarding the value of omega prime: the emitted photon always has omega prime equal to omega, because it's a driven oscillation, OK. So classically, you always have elastic scattering-- the photon frequency does not change-- and the cross section, let me call it equation 1, holds. So the classical, very robust prediction is that you always have elastic scattering when you shine light on an electron, and you have this Thomson cross section. But then, in the early part of the 20th century, when Compton did this experiment, he observed that actually the frequency can be smaller, OK. And indeed, in this formula, generically omega prime is smaller than omega. In quantum mechanics, we see that omega prime is generically smaller than omega, because the denominator is 1 plus a positive factor, so it is greater than 1, OK. And in particular, this deviation becomes more obvious when omega becomes comparable to m. Yeah, so this is the quantum mechanics: you will always have inelastic scattering. This makes sense just from momentum conservation-- if a photon hits the electron, the electron needs to move. The electron recoils, so its energy increases, and then omega prime of course has to be smaller than omega, OK. So in the rest frame, the electron always gains energy. That's why in quantum mechanics you always have inelastic scattering, OK.
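The two quantum statements here-- omega prime is strictly below omega for any nonzero angle, and the deviation grows with omega over m-- can be checked directly. A standard consequence of the same formula, included in this sketch (hbar = c = 1), is that the wavelength shift 2 pi / omega prime minus 2 pi / omega depends only on the angle, not on omega:

```python
import numpy as np

def omega_prime(omega, m, theta):
    return omega / (1.0 + (omega / m) * (1.0 - np.cos(theta)))

m = 1.0
thetas = np.linspace(1e-3, np.pi, 500)
for omega in (0.01, 1.0, 100.0):           # soft to hard photons (units of m)
    w = omega_prime(omega, m, thetas)
    assert np.all(w < omega)               # always inelastic for theta > 0
    # the wavelength shift 2*pi/w - 2*pi/omega = (2*pi/m)(1 - cos theta)
    # depends only on the angle, not on omega:
    shift = 2.0 * np.pi / w - 2.0 * np.pi / omega
    assert np.allclose(shift, (2.0 * np.pi / m) * (1.0 - np.cos(thetas)))
print("omega' < omega for all theta > 0; the shift depends only on theta")
```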
And then equation 1 no longer holds, OK. So in the early days, this just could not be explained using classical electrodynamics. OK, so this is decisive evidence that light actually behaves very differently quantum mechanically from a classical wave. In fact, this simple fact can be explained just by treating the photon as a particle. So this is decisive support that the photon actually behaves like a particle; it leads to the particle picture of the photon, OK. OK, so we are out of time soon, so let me just mention two small remarks on this. One remark is that if omega is much, much greater than m, then omega prime divided by omega becomes very small-- it actually becomes much, much smaller than 1, except for theta equal to zero, OK. So if the initial photon energy is very big, then actually most of the energy will go to the electron, and the final frequency becomes much, much smaller than the initial frequency. This is one remark. And another remark: this is in the electron rest frame, but in a different frame you will actually get a very different picture. So now let's imagine a frame-- in this frame the electron just sits here, and the photon comes in along the z direction. Now imagine we boost the system in the negative z direction. The result of this boost is that the photon energy becomes very small, and the electron energy becomes bigger and bigger as you increase the boost, OK, because the boost is opposite to the direction of the photon: the more you boost, the smaller the photon energy becomes. So now imagine you go to a frame in which the photon has a very small energy compared to the electron. Then in that frame you have a very fast electron hitting a very low energy photon.
OK, so this is called inverse Compton scattering. In this case, the initial energy in the electron can be transferred into the photon. Yeah, you have a very fast, very high energy electron hit a low energy photon. Then you just give a big kick to the photon, and the photon can come out with very high energy. So this is a very simple effect, but it can have very important astrophysical applications. An important application of this is called the Sunyaev-Zel'dovich effect. So essentially the observation is the following. In the universe, we have the cosmic microwave background radiation. In the microwave background radiation, the photons have very low energy, because the temperature is very low. But in a galaxy cluster, some of the electrons can have very high energy. And when they scatter off the microwave background photons, they can give those microwave photons a big kick, and then the photons will get a very high energy. So then, by looking at the photon spectrum in the sky, you look for this kind of hotspot. And this is a way to detect galaxy clusters: you can just use this inverse Compton scattering to detect the location of a galaxy cluster. So it's a very cool application.
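The boost argument behind inverse Compton scattering can be made concrete with an explicit Lorentz boost along z (an illustrative sketch with made-up numbers):

```python
import numpy as np

def boost_z(p, u):
    """Actively boost a four-vector (E, px, py, pz) along z with velocity u."""
    g = 1.0 / np.sqrt(1.0 - u**2)
    E, pz = p[0], p[3]
    return np.array([g * (E + u * pz), p[1], p[2], g * (pz + u * E)])

m, omega = 1.0, 0.5                          # hypothetical values
electron = np.array([m, 0.0, 0.0, 0.0])      # electron at rest
photon = np.array([omega, 0.0, 0.0, omega])  # photon moving in +z

u = -0.99                                    # boost the system toward -z
e2 = boost_z(electron, u)                    # fast electron moving in -z
ph2 = boost_z(photon, u)                     # soft photon still moving in +z

print(e2[0], ph2[0])  # electron energy grows; photon energy shrinks
```

In the boosted frame the electron carries almost all the energy, so the collision transfers energy from the electron to the photon, just as described.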
MIT 8.323 Relativistic Quantum Field Theory I (Spring 2023), Lecture 21: Quantum Maxwell Theory (continued)

[SQUEAKING] [RUSTLING] [CLICKING] PROFESSOR: Last time, we started talking about quantizing the Maxwell theory in the Lorenz gauge. So in the Lorenz gauge, we consider the following action. OK. So we showed earlier that the Lorenz gauge can be ensured just by the equation of motion. From here, the equation of motion will lead to partial square of (partial mu A mu) equal to 0. So this ensures partial mu A mu equal to 0; you just have to make sure your boundary conditions are such that partial mu A mu equal to 0. OK? And in particular, the action is particularly simple for xi equal to 1, in which case we just have-- actually we just have [INAUDIBLE]. OK. So it is as if you just have 4 decoupled massless scalars, OK? And the equation of motion is actually very simple. So we can just proceed: copy our earlier result for the massless scalar, treat each A mu as a massless scalar, and then we can just do it. For example, the canonical momentum conjugate to A mu would just be A mu dot. OK. So the canonical quantization condition, then, is given by-- and we can also just straightforwardly write down the operator expansion for A mu. OK. So we have four of them. So again, instead of writing them as 4 massless scalars, we will-- as we did before, for the Coulomb gauge case-- introduce a polarization vector. There are four components, so there are four possible polarizations. OK. So we just get that. So this epsilon mu alpha for alpha equal to 0, 1, 2, 3-- these are four polarization vectors, OK? You can just choose, say, 1, 0, 0, 0; 0, 1, 0, 0; et cetera.
And then this is just like four decoupled scalars, OK? But writing it this way, with a polarization vector, allows us to write them in a more general way. So we normally pick, for example, the 0-th polarization vector to be along the time direction, OK? Just along the time direction. And then the number 3 we introduce to be parallel to the momentum, to the spatial momentum. So the k mu here, because this is massless, is such that the 0-th component is just equal to the magnitude of k. And then we take epsilon mu 1 and epsilon mu 2 to be orthogonal to the momentum-- these are called the transverse polarizations. We also take them to be orthogonal to the 0-th one; that means the time components of 1 and 2 will be 0, and they should also be orthogonal to the spatial momentum. So we take them all to be orthogonal to each other: epsilon alpha dot epsilon beta equal to eta alpha beta. This is the orthonormality condition, OK? We take this basis to be orthonormal. And this basis is also complete, meaning that if I sum eta alpha beta epsilon mu alpha epsilon nu beta over alpha and beta, I actually get back eta mu nu. OK? So this is called completeness. They form a complete basis, so any Lorentz vector can be expanded in terms of them. So if we take, say, k along the z-direction, then the simplest choice would be the one we just wrote down. OK, so this is the simplest, but you can consider more general polarizations. And so now, if you plug in this expansion, as we did for the scalar, back into that commutation relation, then you will get the commutation relations for the a's. So this is straightforward. So if we call this equation star-- star star, and this equation star.
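For k along z, the simple choice of the four polarization vectors just described can be checked for orthonormality and completeness (a sketch of my own, using the mostly-plus metric):

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])   # mostly-plus metric

# For k along z, the four polarization vectors from the lecture:
eps = np.array([
    [1.0, 0.0, 0.0, 0.0],   # alpha = 0: along the time direction
    [0.0, 1.0, 0.0, 0.0],   # alpha = 1: transverse
    [0.0, 0.0, 1.0, 0.0],   # alpha = 2: transverse
    [0.0, 0.0, 0.0, 1.0],   # alpha = 3: along the spatial momentum
])

# Orthonormality: eps^alpha . eps^beta = eta^{alpha beta}
gram = eps @ eta @ eps.T
assert np.allclose(gram, eta)

# Completeness: sum_{alpha,beta} eta_{alpha beta} eps^alpha_mu eps^beta_nu = eta_{mu nu}
complete = eps.T @ eta @ eps
assert np.allclose(complete, eta)
print("orthonormal and complete")
```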
So plug star star into star, and then we get that the commutator of a k alpha with a k prime beta dagger should be equal to eta alpha beta times (2 pi) cubed, delta cubed of (k minus k prime), OK, and the rest of the commutators are all 0. And then you can define the vacuum: define the state 0, annihilated by all the a k alpha, for any alpha and k. And I will introduce the Hilbert space-- I call it H big-- to be the collection of states psi built by acting with a k alpha dagger on 0, OK? So you can just act with an arbitrary number of them on 0, and then you get your Hilbert space. So we call this our big Hilbert space, and you will see the reason we do that-- later, we will see why we call this H big. So this is seemingly all fine. But of course, there are problems. This cannot be right, because the Maxwell theory, we said, only has two transverse degrees of freedom. But here we have four-- we have four massless degrees of freedom. We must have too many. And indeed, you see there are problems. So the zeroth-order problem is that we have four massless degrees of freedom, two too many, OK? We know that we should only have two. And second, you see that this eta alpha beta is problematic. So here we have eta alpha beta in the commutator, and eta mu nu in the completeness relation-- this is all just from Lorentz covariance, OK? But this eta alpha beta is problematic, because if you look at the commutator for a k 0-- the 0-th polarization-- then you have the commutator of a k 0 with a k prime 0 dagger equal to minus (2 pi) cubed delta cubed of (k minus k prime). OK, you have a minus sign. So this is problematic, because if we create particles by acting with a k alpha dagger on 0, this creates a particle with polarization epsilon mu alpha.
So now, because of this minus sign, you see that if we look at the overlap between particles with the 0 polarization-- if we look at the inner product of k 0 with some k prime 0-- then this will be proportional to minus (2 pi) cubed delta cubed of (k minus k prime). So this is smaller than 0: this state actually has negative norm. But this is actually a telling sign. This is another way to see that we have too many degrees of freedom-- it means not all the degrees of freedom here can be physical. It tells you there are some degrees of freedom here that must be unphysical, because the states they create have negative norm, so they cannot really correspond to genuine physical states. But those problems should not worry us, because so far what we have done is to quantize this theory, and this theory is not the Maxwell theory. To get the Maxwell theory, we have to do two more steps. One step is to impose conditions that make sure partial mu A mu is equal to 0, because we haven't fixed the gauge. The second, we mentioned before. So there are two more things we need to do. First, we need to ensure the gauge condition partial mu A mu equal to 0. Second, as we mentioned before, after fixing the Lorenz gauge there are still residual gauge degrees of freedom left, and we have to fix the residual gauge freedom within the Lorenz gauge, OK? When you do this, then you will get a physical Hilbert space. And this physical Hilbert space, you will see, will indeed contain only the two transverse massless degrees of freedom, and the two extra degrees of freedom here are gotten rid of. In the past, I described those procedures in detail in class.
But actually, that didn't have a very good effect. So later, I put it in your pset. Actually, I think it worked better, because it forces you to go through it and think through how to do these two steps. Those steps are not difficult technically, but you need to think carefully. So let me just make some comments on this step. Recall that in the Coulomb gauge, we imposed this condition-- we essentially solved this condition. Classically, we imposed it essentially as part of the equation of motion, and we solved for the A that satisfies it, in terms of the two transverse components of A, OK? And then quantum mechanically, because we imposed it as part of the equation of motion classically, it becomes an operator equation, which we impose on operators. But in the Lorenz gauge, we cannot impose partial mu A mu equal to 0 as an operator equation, because the equation of motion already almost implies it: the equation of motion gives partial square of (partial mu A mu) equal to 0, and we just have to impose some boundary condition to ensure that, indeed, this equation only has solutions corresponding to partial mu A mu equal to 0. So you only need to impose it as a boundary condition. Quantum mechanically, that implies we cannot impose this as an operator equation. Indeed, you will show in your pset that it would be inconsistent to impose this as an operator equation. So what does it mean, at the quantum level, to classically impose the boundary condition? A classically imposed boundary condition limits the possible configurations you can have; at the quantum-mechanical level, this corresponds to a restriction on the states.
So at the quantum level, it turns out that the right thing to do is to impose this kind of condition on the states-- require your states to be annihilated by something like this. But it turns out the story is more subtle than that. If you just require the states to be annihilated by this thing, it actually does not work, and so you have to do something a little bit more subtle. The fun will be in your pset; you will go through that. So do you have any questions? Yes? AUDIENCE: Why is it too much to just say that is 0? I guess it makes sense that you can just fix the boundary conditions, but-- PROFESSOR: Yeah, so classically, it does not matter. But quantum mechanically, you will show in your pset that if you impose this as an operator equation, that's incompatible with the canonical quantization condition. AUDIENCE: Yeah. PROFESSOR: Yeah, here I just motivated that classically, in order to ensure this, you only need to impose the boundary condition, not the equation of motion. So quantum mechanically, we just-- yeah. Other questions? Yes? AUDIENCE: Is there a reason why the subspace with positive norm is closed-- a closed Hilbert space? PROFESSOR: Yeah, you will see. You will see what's happening. So it turns out-- yeah, let me just say some words. It turns out that when you do step one, you will eliminate the negative norm states. AUDIENCE: OK. PROFESSOR: So you find that once you properly impose this kind of condition on the states, you actually eliminate the negative norm states. But then you find that, after you eliminate the negative norm states, in this Hilbert space there are still states of zero norm. And then when you fix the second thing-- the residual gauge freedom-- you eliminate those zero-norm states.
And then you get your physical Hilbert space. Because, remember, in the Coulomb gauge, everything is just like a harmonic oscillator: there's no zero-norm state, since you fixed the gauge completely. But here, because you didn't fix the gauge completely, even after you eliminate the negative norm states, you still have zero-norm states, and they correspond to the remaining gauge freedom. Once you get rid of them, then you have your-- yeah. Good. Any questions on this? Yes? AUDIENCE: Instead of doing Maxwell theory, what if we were just trying to do four massless scalars? PROFESSOR: Yeah. AUDIENCE: If you try to do it with these vectors, you'd still get these negative norms. What's going on there? PROFESSOR: Good, good, good. That's a very good question, which I was waiting for you to ask. So we can certainly consider four massless scalar fields, and for four massless scalar fields there should not be a problem-- why should there be negative norm or zero-norm states? But the key is that here the A mu are contracted using the Lorentzian metric. So if you look at this Lagrangian, the 0-th component actually has the opposite sign to a standard massless scalar field. So this is not your ordinary set of massless scalar fields; one of them actually has the opposite sign in your Lagrangian. But you say, oh, can we just change the sign for that component? You cannot, because we have Lorentz symmetry. The Lorentz symmetry forces you, somehow, when you reduce the Maxwell theory to these four massless scalars, to have one of them with the wrong sign. And that wrong sign is related to the minus sign here: the same kind of Lorentz covariance tells you that here it is eta alpha beta rather than delta alpha beta. Good? OK. So this concludes our discussion of the canonical quantization of the Maxwell theory.
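A toy, single-mode version of this norm structure can be sketched numerically. The constraint below (c 0 tied to c 3, for k along z) is my paraphrase of the Gupta-Bleuler-type condition described above, with sign conventions chosen for illustration; either relative sign gives the same norms:

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])   # mostly-plus metric

def norm(c):
    """Norm of the one-particle state c_alpha a^dagger_alpha |0>: <psi|psi> = c* . eta . c."""
    return float(np.real(np.conj(c) @ eta @ c))

# An unconstrained timelike excitation has negative norm:
print(norm(np.array([1.0, 0.0, 0.0, 0.0])))   # -1.0

# A Gupta-Bleuler-type constraint for k along z ties the timelike and
# longitudinal components together (here c_0 = c_3; the relative sign is
# convention dependent, but the norm comes out the same either way):
def constrained(c0, c1, c2):
    return np.array([c0, c1, c2, c0])

print(norm(constrained(5.0, 1.0, 2.0)))   # 5.0 = |c1|^2 + |c2|^2 >= 0
print(norm(constrained(7.0, 0.0, 0.0)))   # 0.0: a pure-gauge, zero-norm state
```

After the constraint, all norms are nonnegative, and the leftover zero-norm states with only the c 0 component are exactly the remaining gauge freedom that the second step removes.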
So here I will quickly describe how to do it with the path integral. I will do it very quickly, because many of the elements are familiar from before, and I will just point out the things which are new for the Maxwell case, OK? We are already familiar with how to do the path integral, and the Maxwell theory is a free theory, so in principle we know how to do it. So here, let's go back without fixing the gauge first, OK? This is now starting completely from the original Maxwell Lagrangian, L equal to minus 1/4 F mu nu F mu nu. So let's just go back to this theory and try to see how the path integral works. Remember, this is a free theory, so we can just write down the generating functional for this theory, and then we can, in principle, calculate all possible correlation functions. A free theory is supposed to be very simple. So we just write down the generating functional: we integrate over all A mu, and then we have i S-- this is the Maxwell action-- and then we add the source term. So this J mu has nothing to do with the electromagnetic current; this is just the J mu in the generating functional, used to take derivatives. It is just an external source. So now this is a Gaussian integral, so in principle we can just do it. Because this is quadratic, and we are now familiar with how to do quadratic integrals, we can just do it. And to do it, we always write the action in the matrix form, A mu K mu nu A nu, schematically. This K mu nu can just be read off from here by integration by parts, et cetera. So let me just write down the answer. Let me call it K0 mu nu. It is given by (partial square eta mu nu minus partial mu partial nu), times the delta function delta 4 of (x minus y), where the partials act only on x.
So now this is just a Gaussian integral, and in principle we can directly write down the answer-- actually, let me keep this here. In principle, we can just directly write down the answer: Z of J equal to some number, some infinite number which we never care about, times the exponential of i times the integral d4x d4y of J mu of x, (K0 inverse) mu nu of (x minus y), J nu of y, OK? So this is almost exactly the same as what we did before. This K0 is like the kernel for your Gaussian integral: you just take the inverse of it and contract it with the source. The only difference from the scalar case, or the spinor case, is that besides the functional dependence on x and y, now you also have a matrix in the Lorentz indices mu nu. Everything else is the same as before. So it looks like we have already solved this theory, OK? But you should feel uneasy, because when we quantized it using the canonical method, we did have to go through some trouble. So how come, somehow, the path integral just does it immediately? Somehow the trouble has to be conserved, no matter what method you use. Indeed, can someone guess what the potential trouble here would be? Yes? AUDIENCE: Are you overcounting? Because when you integrate over DA, you could have two A's that differ only by a gauge transformation. PROFESSOR: Yeah, right. AUDIENCE: Yeah, so then you're integrating over too many. PROFESSOR: Yeah, indeed. That's the correct statement. But if I just do it straightforwardly, somehow, then what's wrong with that? What you said is conceptually correct-- conceptually, you should suspect there's something wrong here. And whenever you conceptually suspect something is wrong, you should look for some mathematical problem, because the conceptual mistake will always reflect itself in some mathematical difficulty. Yeah. Yes?
AUDIENCE: So you have to integrate to measure [INAUDIBLE] PROFESSOR: Yeah. Yeah, you're right. It's just that you integrate over more things. You integrate over more things, and then you just get more infinite constants, but we don't care. AUDIENCE: Maybe a sign will be off, because the sign was off earlier, right? So when you do the eigenvalues of the determinant or whatever, maybe it messes that up. PROFESSOR: Yeah. Yeah, but we have i, so the sign doesn't matter. Yes? AUDIENCE: Does the K end up not being invertible or something? PROFESSOR: Yeah, exactly. So the key is that the K is actually not invertible. It turns out K0 is actually not invertible. The reason is simple: it's because of the gauge freedom in the original theory. By gauge freedom, we mean that when you make a transformation by some arbitrary function, your action does not change. That means this kernel must be invariant under some transformation. Invariant under some transformation means it must have a zero eigenvector, and when it has a zero eigenvector, it must not be invertible, OK? So now let's just see it explicitly. To see it explicitly, it's easier to work in momentum space. In momentum space, we can just directly write down K0 mu nu: it's just equal to k mu k nu minus k square eta mu nu. So now it's obvious this has a zero eigenvector, because this is precisely minus k square times the transverse projector. So P T mu nu is the projector onto the directions perpendicular to k mu. It satisfies P T mu nu equal to eta mu nu minus k mu k nu divided by k square, and it satisfies the property that P T mu nu k nu is always equal to 0, OK?
You can easily see it: if you contract P T mu nu with k nu, the eta term gives you k mu, and in the second term, k nu contracted with k nu gives k square, which cancels the k square in the denominator and leaves k mu; so the two terms cancel. So this is the projector onto the transverse space. Of course it has zero eigenvectors. So we see that this is not invertible in momentum space, and then it will not be invertible in coordinate space either. And now, in coordinate space, with this understanding, we can immediately see that the eigenvector of zero eigenvalue precisely corresponds to a gauge transformation. Because in coordinate space, k mu translates into partial mu. So that means that, for any function lambda, K0 mu nu acting on partial nu lambda must be 0. And you can check it explicitly: if you contract with partial nu lambda, then the partial square term gives you a partial mu of partial square lambda, and the partial mu partial nu term gives exactly the same thing, so they cancel. And of course, this is precisely the gauge transformation. So the gauge transformation is precisely the zero eigenvector of this kernel-- the eigenvector of zero eigenvalue. That's why it is not invertible. So the way to fix this is simple. Remember this picture: this is the space of all configurations of A mu, and the A mu along those trajectories are equivalent. To quantize it, we need to restrict to the physical configurations corresponding to a section of it. So what this says is that K0 is not invertible in this full space, because you have zero eigenvectors along the gauge directions. But if we restrict to a cross-section, then this zero eigenvector no longer exists.
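The non-invertibility of K0 and its zero mode along k can be verified numerically for a generic momentum (a sketch; the specific k is arbitrary):

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])

k = np.array([1.3, 0.4, -0.2, 0.7])   # a generic (off-shell) momentum, k^mu
k_low = eta @ k                        # k_mu
k2 = k @ k_low                         # k^2 (nonzero for this k)

# Momentum-space kernel K0_{mu nu} = k_mu k_nu - k^2 eta_{mu nu}
K0 = np.outer(k_low, k_low) - k2 * eta

# Gauge directions are zero modes: K0_{mu nu} k^nu = 0, so K0 is not invertible.
assert np.allclose(K0 @ k, 0.0)
assert np.linalg.matrix_rank(K0) == 3

# K0 = -k^2 P^T with the transverse projector P^T = eta - k k / k^2,
# which indeed squares to itself (indices raised with eta):
PT = eta - np.outer(k_low, k_low) / k2
assert np.allclose(PT @ eta @ PT, PT)   # eta here numerically equals its own inverse
print("rank of K0:", np.linalg.matrix_rank(K0))
```

The rank coming out as 3 instead of 4 is the momentum-space statement that one direction, the gauge direction along k, has been projected out.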
And then it will be invertible. OK, so once we fix the gauge, the kernel should be invertible: K0 mu nu restricted to a section should be invertible. So we need to fix the gauge. We will do this for the Lorenz gauge; you can do similar things for the Coulomb gauge. The reason we do the Lorenz gauge is that later, when we introduce interactions, and when we discuss QED, we will always work in the Lorenz gauge. So here I will elaborate on the Lorenz gauge. Yes? AUDIENCE: Can you explain what you mean by, when you take the cross-section, the eigenvectors here don't exist anymore? PROFESSOR: Oh, yeah. So this is a zero eigenvector of K0, right? And partial mu lambda just parameterizes this gauge direction. But if you restrict to a section, then you are not allowed to move away from it, and then this direction is just gone, so you no longer have that zero eigenvector. Yeah. Yes? AUDIENCE: Should there be a convolution in the equation on the bottom left? PROFESSOR: Yeah. Yeah, good. Indeed, you have to integrate over y. Thank you. Yeah, if I write this precisely, you have to integrate over y; that's the proper way to write it. Good. AUDIENCE: Why does this issue only happen for gauge symmetries? Like, why don't the other symmetries need the same treatment? PROFESSOR: Yeah. Because here you have local symmetries, and a local symmetry tells you there are really some redundant degrees of freedom. This lambda of x-- all possible choices of lambda-- really corresponds to some trajectory in your configuration space. But a global symmetry is independent of the coordinates.
So from this point of view, a global symmetry just relates the configuration at this point to the configuration at that point. A global symmetry, because it's independent of spacetime, doesn't change your number of degrees of freedom. It doesn't correspond to a trajectory in your configuration space. Good. Other questions? OK, very good. So now we will fix the gauge. In the path integral, fixing the gauge, in principle, is straightforward. We will consider the Lorenz gauge. You can do a similar thing with the Coulomb gauge; the Lorenz gauge is just the one we normally work with more. So how do we fix the gauge? In the path integral, we do it easily. We have this integral, and then we insert the delta function corresponding to the Lorenz gauge. This should be considered as a functional delta function: it is a delta function at one point, and then you take the product over all points. So this is the delta function in the functional space. And then we will have i S-- this S Maxwell-- and then you can have the J dot A term, so you can consider the generating functional. So this will restrict you to some cross-section. OK. So let me make a remark here. Here, you actually have to be a little bit more careful. What we really want to do is to insert here delta of A mu minus A mu gauge-fixed-- we want just to restrict A mu to some cross-section. But for the Lorenz gauge, we normally cannot solve this condition, and that's why we do it this way, because we don't know how to write this A mu gauge-fixed explicitly. OK? But going from here to here, there is actually a non-trivial Jacobian, because the argument of this delta function is a non-trivial function of A mu in the functional space. So in principle, there is a Jacobian. Let me just write this explicitly.
So delta of partial mu A mu and delta of A mu minus A mu gauge-fixed are related by some Jacobian. OK? But now, notice that this function is a linear function of A. So remember, when you evaluate the Jacobian, you have to take the derivative. Just remember this formula: delta of f of x is equal to 1 over the absolute value of f prime at x0, times delta of x minus x0, where x0 is a solution of f of x equal to 0. OK? So this is the Jacobian when you convert the delta function. If you have multiple variables, then this becomes the determinant of the Jacobian matrix. So of course, you take the derivative of this function. But this thing is linear in A mu, so when you take the derivative, you get something which is independent of A mu in the function space. Something independent of A mu is just some constant, so this just gives you some infinite constant. And we always throw away infinite constants anyway, so we don't care about it. So we can just write it like this. So this is just a remark. So now we have to evaluate this guy. It turns out that this guy is still not easy to do, for the same reason: we don't know how to solve this condition for A mu gauge-fixed explicitly. So now I'm going to use some tricks. I will use two tricks to convert this into something manageable. Applying these tricks to the Maxwell theory is like killing a little bird with a big cannon. It works here, but their real, genuine use is actually in non-Abelian gauge theory. When you quantize Yang-Mills theory, they become essential, because it would be very complicated to quantize it using other methods.
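The delta-function Jacobian formula quoted above can be checked symbolically. A quick SymPy sketch, for the linear case relevant here (my own example function):

```python
import sympy as sp

x = sp.symbols('x', real=True)

# delta(f(x)) = delta(x - x0) / |f'(x0)|.  For f(x) = 2x - 3,
# x0 = 3/2 and the Jacobian factor is 1/|f'| = 1/2, so
# integral of delta(2x - 3) * x^2 should give (3/2)^2 * 1/2 = 9/8.
lhs = sp.integrate(sp.DiracDelta(2 * x - 3) * x**2, (x, -sp.oo, sp.oo))
assert sp.simplify(lhs - sp.Rational(9, 8)) == 0
```

Because f is linear, the Jacobian factor 1/|f'| is a constant independent of x, which is exactly the point made in the lecture: for a linear gauge condition the functional Jacobian is a field-independent (infinite) constant and can be dropped.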
But nevertheless, let me just tell you these two tricks, how to treat this. OK? So this method is called the Faddeev-Popov method, and it was invented by two Russians-- former Soviets-- Faddeev and Popov, in the '60s. Actually, there is a story behind it. When they invented this method, they actually wanted to quantize not this Maxwell theory, but non-Abelian Yang-Mills theory, which later would become the standard model of particle physics-- electroweak theory, QCD, et cetera. But when they worked on Yang-Mills theory, it was some small corner of mathematical physics. Nobody cared. So when they invented this method, nobody really paid attention. But then, in 1971, 't Hooft, who was a 21-year-old graduate student, used it to quantize non-Abelian gauge theory coupled to the Higgs. Before that, people thought that theory was inconsistent. But he used this path integral to quantize that theory. And so that was a triumph, for which 't Hooft got a Nobel Prize a number of years ago. And when 't Hooft came out with this method, nobody could understand it in the US. People like Weinberg, who wrote down the electroweak theory which 't Hooft quantized-- he couldn't understand it. There was only one person in the US who could understand it. He was an assistant professor at Stony Brook. And at Stony Brook, there was a big shot, the Nobel Prize winner C.N. Yang, who invented this Yang-Mills theory. So he asked Yang, oh, what should I do for my research-- asking a Nobel Prize winner for advice. And Yang said, maybe you can look at this Faddeev-Popov stuff. And it turned out later that only he could understand 't Hooft's paper. Of course, he also did some other good stuff. But immediately, he became the person everybody went to, because he really understood the story.
Yeah, anyway. So there's actually a very interesting story behind this. So first, we have to modify this Lorenz gauge condition a little bit. Rather than imposing partial mu A mu equal to 0, let's try to impose the gauge partial mu A mu equal to some arbitrary function B of x. Because partial mu A mu equal to 0 gives you some section here, and if I choose some arbitrary function B of x, it just gives you some other section. It's still a section, so which B of x you choose doesn't matter. So you can just replace this delta function of partial mu A mu by delta of partial mu A mu minus B of x. So this is the first non-trivial step. OK? This will give you some other cross-section. So now, since the B of x does not matter, we can actually integrate over B of x. So the next step is that you replace Z by the integral over B of x, with a Gaussian weight in B squared, of the original Z of J. Z cannot depend on B, because which B we use in this condition does not matter. So I can just integrate over B with any measure I want. The only thing this does is give you an infinite constant-- again, we don't care about infinite constants-- since this thing is not supposed to depend on B. But this is actually very useful, because when we do this, the path integral becomes the following: DB, DA mu, then we have delta of partial mu A mu minus B of x, and then we have exponential of i S, then minus i over 2 xi, B squared, and then i J dot A. So now you see the benefit: I have an integration over B, but B appears very simply in the delta function. So I can evaluate the B integral using the delta function-- I can just straightforwardly do the B integral. So now I get a new action:
i S xi plus i J dot A. And this S xi is just given by your original Maxwell action minus 1 over 2 xi, times partial mu A mu squared. OK? So this is the reason we put B squared there: when we evaluate the B integral, we get something quadratic in partial mu A mu. And now you see this action is precisely the action we had there-- precisely the gauge-fixed action given there. So we have also derived it using the path integral, using this trick. So now we just have this, and we can straightforwardly calculate this Gaussian integral. One second. By the way, the reason I said the story here is a little bit like killing a bird with a cannon is that, for the Maxwell case, this Jacobian is actually trivial-- it's a constant. But if you do it for a non-Abelian gauge theory, say for the electroweak theory or for QCD, this Jacobian is highly non-trivial. And part of this trick is also how to treat that Jacobian, which we don't have to do here. So here we have a much simpler story. So now we can just look at this path integral, where S xi is now invertible-- we already know from here that this is invertible, at least for xi equal to 1, given by this. So you can actually now invert. So now we can write this S xi as 1/2, integral d4x d4y, A mu of x, K xi mu nu of x minus y, A nu of y. And this K xi mu nu is just whatever you get from that action. And this K mu nu is now invertible. And then you find this thing just becomes some constant, times the exponential of i, integral d4x d4y, J mu of x, K xi inverse mu nu of x minus y, J nu of y. OK? And again, this inverse gives you the Feynman function. So the K xi inverse mu nu-- yeah, we're not going into detail here. The story is straightforward.
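Written out (conventions for xi vary; here I use the 1/(2 xi) normalization of the Gaussian weight, which is the one that reproduces the xi P_L form of the propagator quoted in a moment), the two steps combine as:

```latex
\begin{aligned}
Z[J] &\propto \int \mathcal{D}B\,\mathcal{D}A_\mu\;
\delta\big(\partial_\mu A^\mu - B\big)\,
e^{\,iS_{\text{Maxwell}}[A]\,-\,\frac{i}{2\xi}\int d^4x\, B^2\,+\,i\int d^4x\, J^\mu A_\mu}\\[2pt]
&= \int \mathcal{D}A_\mu\;
e^{\,iS_\xi[A]\,+\,i\int d^4x\, J^\mu A_\mu},
\qquad
S_\xi[A] \;=\; S_{\text{Maxwell}}[A]\;-\;\frac{1}{2\xi}\int d^4x\,\big(\partial_\mu A^\mu\big)^2 .
\end{aligned}
```

The delta function simply substitutes B = partial mu A mu into the Gaussian weight, which is why the B integral is trivial.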
So this should correspond to the Feynman function for the two gauge fields. So now let me just write down the answer, and I urge you to check it yourself explicitly. You can easily write down what K mu nu is here, and then you can work out what K inverse is. So let me just write down the answer, which we will actually use later for Feynman diagram calculations. You can show this is given in momentum space by minus i over k squared plus i epsilon-- remember, now it's massless-- times eta mu nu plus xi minus 1 times k mu k nu divided by k squared. So you can work out that the inverse is given by this. And this can also be written as minus i over k squared plus i epsilon, times the transverse projector P mu nu T, plus xi times the longitudinal projector P mu nu L. So P mu nu L is just defined by k mu k nu divided by k squared. So the answer is very simple. So now let me just make some remarks. First, if you look at xi equal to 1, then this second term vanishes, and you just have eta mu nu divided by k squared plus i epsilon. So that's exactly what we expect from here: you essentially have the massless 1 over k squared propagator for a massless particle, and then you have eta mu nu from the Lorentz structure. So for xi equal to 1, indeed we recover that. The second comment is that when xi is equal to 0, this is just proportional to the transverse projector. But remember, the original Maxwell kernel was proportional to the transverse projector, and that's not invertible. So here, we're actually doing things in a different order: we first take a nonzero xi, we invert it, and then you can actually set xi equal to 0. OK, that still works.
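The quoted inverse can be verified with SymPy. Here is a sketch under my assumed conventions: gauge-fixing term minus (1/2 xi)(partial dot A)^2, so the momentum-space kernel is K^{mu nu} = -k^2 eta^{mu nu} + (1 - 1/xi) k^mu k^nu, and the claimed inverse (dropping the overall -i and the i epsilon) should multiply it to the identity:

```python
import sympy as sp

k0, k1, k2, k3, xi = sp.symbols('k0 k1 k2 k3 xi')
eta = sp.diag(1, -1, -1, -1)
kup = sp.Matrix([k0, k1, k2, k3])      # k^mu
kdn = eta * kup                        # k_mu
ksq = (kup.T * kdn)[0]                 # k^2

# Gauge-fixed kernel with indices up (assumed -(1/2 xi)(d.A)^2 convention)
K = -ksq * eta.inv() + (1 - 1 / xi) * (kup * kup.T)

# Claimed inverse with indices down:
# -(1/k^2) [ eta_{mu nu} + (xi - 1) k_mu k_nu / k^2 ]
Kinv = -(eta + (xi - 1) * (kdn * kdn.T) / ksq) / ksq

# K^{mu nu} (Kinv)_{nu lambda} = delta^{mu}_{lambda}
assert sp.simplify(K * Kinv - sp.eye(4)) == sp.zeros(4, 4)
```

Note how the xi-dependent pieces cancel: that cancellation is exactly the statement that eta = P_T + P_L, so the inverse can equally be written as -(1/k^2)(P_T + xi P_L).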
And then xi equal to 0 still works, because we have already fixed the gauge. So this is the second remark. And then Wick's theorem for A mu just immediately follows from here. Just as in the case of scalars and fermions, it immediately follows: you just contract, and you get these Feynman functions between them. Yes? AUDIENCE: How difficult would it be to show that the path integral approach and the canonical approach are equivalent for the vector? PROFESSOR: Oh, no, it's not difficult. This is one point I'm going to make a little bit later-- in a few minutes. Yeah, they are completely equivalent. AUDIENCE: It just seems a little not obvious at first glance, from the gauge fixing. PROFESSOR: Right, right. Yeah. So for what we want to do, it's actually a little bit simpler, for reasons I'm going to mention. Other questions? Yes? AUDIENCE: [INAUDIBLE] PROFESSOR: Yeah, it's because a projector is just a very simple way to organize it. By definition, this is symmetric in mu nu, because under time ordering, you can just exchange them. That means that in terms of Lorentz indices, you must be able to expand it in symmetric tensors built from k mu, because k mu is the only vector here. So you have to build this symmetric tensor from just eta mu nu and k mu, and the transverse and longitudinal projectors are exactly the tensors you can build from eta mu nu and k mu. Yes? AUDIENCE: Until we fix the gauge-- we still have this parameter xi, so we still have a family of actions. Is that the residual gauge freedom? PROFESSOR: No. No, that's not the residual gauge freedom. You can show that the physics is independent of xi. Yeah, you can take xi to be any value. OK, good. Other questions? Yes? AUDIENCE: [INAUDIBLE] PROFESSOR: Good, good. Yeah, so that's the comment I'm going to make later.
Yeah, that's coming. That's a good question, which I'm going to address later. Let me just make one comment first, and then it will be clear. So now, for the various physical processes which we will consider later, we are interested in the following thing. We can just generalize immediately: consider an interacting theory of A mu, psi, and phi. So A mu will be the Maxwell field, psi will be some Dirac field, and phi will be some scalar field-- or even multiples of them. For any such theory, we know how to compute the vacuum correlation functions. What we did before immediately carries over: the path integral over all the fields, with the operator insertions, weighted by exponential i S, divided by the path integral weighted by exponential i S. Again, I always use omega to denote the vacuum of the interacting theory. And then, again, you can rewrite this in terms of the free theory: this becomes, in the free theory, 0, time-ordered insertions times exponential of i S I, 0, divided by 0, time-ordered exponential of i S I, 0, where S I is the interacting part. So now, again, for any interacting theory, we can just use the same procedure, and then we can just do Feynman diagrams, and the Feynman rules just follow. So now, as you already asked, this sounds a little bit too simple, because we just look at this action, but we haven't done what we promised you would do in your Pset. Because by doing this, it seems you really have not completely imposed partial mu A mu equal to 0 yet. So remember, we said that when you have this action, that will lead to this kind of equation, and you still have to impose conditions on the states to impose partial mu A mu equal to 0. But now it seems like we are not doing anything like that. And also, it seems like we are not fixing the residual gauge freedom. Why don't we do it?
So we don't have to do it because, here, we are only interested in calculating vacuum correlation functions, and the vacuum is automatically a physical state-- as you will see in your Pset, the vacuum is automatically a physical state. And in the vacuum, all those unphysical degrees of freedom just automatically decouple. You don't have to worry about them. But if you're interested in the excited states, then you have to go through exactly the same kind of procedure to get rid of them-- to do a more complicated thing, as you will do in your Pset. But here, since we are computing just a special class of physical quantities, doing this is already enough. You don't have to worry about the other subtle stuff. OK, so I think we are done with quantizing the Maxwell theory. So now we have all our elements. Yes? AUDIENCE: So when you have multiple internal vertices, like in the first interacting theory, when we had just expanded the [INAUDIBLE]. PROFESSOR: You do the same thing, right? You just expand them. AUDIENCE: --each coupling, or like-- PROFESSOR: Just expand S I. Whatever is in S I, you just treat them together. AUDIENCE: I guess, I think, like, earlier in order of whatever [INAUDIBLE] whatever. PROFESSOR: Yeah. AUDIENCE: But now is it order of which-- PROFESSOR: Yeah, yeah. Indeed, normally we assume all the couplings in S I are of the same order, and then you just expand S I. But indeed, there can be special situations: you may want to expand in some couplings and not expand in some other couplings. That depends on the specific situation. Other questions? OK, good. So let's conclude our discussion of quantizing Maxwell's theory. So we have quantized the photon-- we have quantized Maxwell's theory using the Coulomb gauge. There we see the two photon degrees of freedom.
And then we also did canonical quantization in the Lorenz gauge, which you will finish in your Pset. And we also discussed how to treat it in the path integral, so that we can calculate these kinds of quantities. And so now, the next goal-- now that we have all the tools we need-- is that we can tackle QED. So now, finally, we can tackle QED. This is the theory of the photon and the electron, and we can consider physical processes in this theory, in which A mu and psi are now interacting with each other. So we will start doing that. I think we are out-- yeah, we only have two minutes left, so maybe we will finish a little bit early today. OK, yeah. So that's all for today.
MIT_8323_Relativistic_Quantum_Field_Theory_I_Spring_2023
Lecture_6_Propagators_and_Green_Functions.txt

PROFESSOR: So at the end of the last lecture, we started talking about this propagator. In non-relativistic quantum mechanics, in the Heisenberg picture, we can introduce the position eigenvector corresponding to the eigenvalue x of the position operator at time t. And from this object, you can construct a propagator. So this object is powerful if you know how to compute it, because if you know the wave function at t prime, then by integrating it against this object, you can find your wave function at a later time. So this is a powerful object in non-relativistic quantum mechanics. Now we can ask what the analogous objects are-- whether there exist analogous objects in field theory. But in QFT, there is no natural way to define a localized state, so there's no counterpart of this object. One reason is that there is no position operator anymore. Remember, now x is actually a label-- no longer an operator. Phi is the operator; x is just a label. So in quantum field theory, there's no position operator, and then there is no natural way to define the eigenvector associated with a localized state. And there's also a fundamental barrier-- a fundamental obstruction from Lorentz symmetry. You can show that a localized state in space is not Lorentz covariant. So suppose-- yeah, you can do this by proof by contradiction. Suppose there exist such states-- say, some states x, t.
So by definition, a localized state should satisfy: if you evaluate the overlap of two of them at the same time, that should be proportional to a delta function, because that's by definition what we mean by a localized state-- localized at a point. And that's the property satisfied by this: if you set t prime equal to t, then this is a delta function. But now, suppose you can define such a state. Then this notion cannot be covariantly defined, because this is not a covariant object-- the three-dimensional spatial delta function does not transform nicely under Lorentz transformations. So suppose U lambda is the unitary operator generating a Lorentz transformation, which we discussed last time. Then you can show that U lambda acting on the state x, t is not the same as the state lambda x, lambda t. Lambda x and lambda t denote the Lorentz transformation of this vector-- here, I'm using a shorthand notation: lambda acting on x means the Lorentz transformation acting on x, and likewise on t. And you will actually check this explicitly in your pset: this does not transform covariantly. So this cannot be a covariant concept. That means you can at most define it in some frame, but if you change your frame, then this state is no longer localized. So this concept is not compatible with Lorentz symmetry. So the closest analog you can define in field theory-- the closest analog in QFT-- is this object. So let's define an object x-- again, whenever I write something like x, you should view this as a shorthand for the full vector, including both space and time-- defined by phi acting on the vacuum. Suppose we have a real scalar field theory.
So for example, let's just consider a real scalar field theory. We can define an object like this: this is the closest we can get to a position eigenstate. And its conjugate is given by phi as well-- phi is hermitian, because the field is real, so it's a hermitian operator. Now, by definition, because phi is a scalar field, it transforms nicely under Lorentz transformations, and the vacuum is invariant under Lorentz transformations, so this object actually transforms nicely: under a Lorentz transformation, x goes to lambda x. But you can also ask: what is the overlap between two of these quote-unquote position eigenstates? So now let's consider the overlap between two such states. From this definition, it is given by 0, phi x, phi x prime, 0-- the expectation value of phi x times phi x prime between the vacuum states, just from the definition. So the reason we say this is the closest analog is that even though this state is Lorentz covariant, it is not localized. If you want a state to be localized, this has to be proportional to a delta function when you take them at the same time. But this is an object we can compute explicitly, because we have already solved for phi completely in terms of a and a dagger; you can just plug in here to compute this object explicitly, and we will do it towards the end of today's lecture. Here, let me just tell you the result. You can see that this guy is actually non-zero for t equal to t prime and x not equal to x prime. That means this cannot be a delta function. If it were a localized state, then evaluating them at the same time should give something proportional to a delta function, but this is not the case.
So this object is not quite localized, but we will see that in some sense it is approximately localized, and we will see in what sense later, when we compute this object explicitly. For right now, let's just state the conclusion: this is not a localized state. Good. So heuristically-- let me just make a side remark here. As we said, this is not a position eigenstate. But if we treat it as some kind of approximation to a position eigenstate, then we can talk about the wave function of a particle. So suppose we are given a single-particle state psi-- psi would be some kind of superposition, which we wrote down last time, some arbitrary superposition of single-particle states, integrated over the momentum. Then we can define its wave function. So I define this quote-unquote wave function psi of x to be the overlap between psi and this x object. And from that definition, this is given by 0, phi x, psi. So this object is the closest analog of a wave function you can define for a particle in field theory. And now, you can check-- it's a very simple exercise for you to check yourself-- if I just take psi to be the pure momentum state k-- remember, this notation: the state k is square root of 2 omega k times a k dagger acting on the vacuum-- then this becomes 0, phi x, k. Again, you know explicitly how phi is expressed in terms of a and a dagger, and you know how this state is obtained from the vacuum by acting with a dagger, so you can just compute this object. It takes a couple of minutes-- and you can also essentially guess the answer. This just gives you a plane wave, with the energy given by omega k.
So you just get the plane wave function, so in this sense, it's like a wave function-- it makes sense to call it the wave function. Yes? STUDENT: So in this state here, we call this the wave function. If we take it and multiply it by its own complex conjugate, can we interpret it as a probability? PROFESSOR: Yeah, this is the comment I'm going to make now. This is a good question. So psi of x, in general, cannot be interpreted as giving the probability of the particle being at x. The reason we cannot do that is that x is not a genuine position eigenstate. And also, in quantum field theory, you cannot, in general, exactly separate single-particle states from multi-particle states. In free theory, you can, because there is no interaction, but in general, you cannot. So the reason this is a side remark is that this is often used by people, and they just call it a wave function. But you should keep in mind what the content of this object is: it's not something you can rigorously define, both mathematically and physically, as a wave function. Any questions on this? STUDENT: So could you explain how you got e to the negative i? PROFESSOR: Oh, you just calculate. But it's very easy to see how you get it, because remember, phi x can be expanded in terms of a times u plus a dagger times u star. And the state k is obtained by a dagger acting on 0. So essentially, the a part gives you this object-- the part proportional to u-- and that's how you get it. Good? So when we compute this object explicitly, you will get a more precise sense of what I mean when I say this is an approximation. So let me defer that a little bit, because I want to talk about something else first. Good. Other questions?
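One quick consistency check on the plane-wave claim: e^{-i omega_k t + i k.x} solves the free Klein-Gordon equation precisely when omega_k = sqrt(k^2 + m^2). A SymPy sketch (symbol names are mine):

```python
import sympy as sp

t, x, y, z, m = sp.symbols('t x y z m', real=True)
kx, ky, kz = sp.symbols('kx ky kz', real=True)
omega = sp.sqrt(kx**2 + ky**2 + kz**2 + m**2)   # on-shell frequency

# Plane wave psi(t, x) = exp(-i omega t + i k.x)
psi = sp.exp(-sp.I * omega * t + sp.I * (kx * x + ky * y + kz * z))

# Klein-Gordon operator: (d_t^2 - laplacian + m^2) psi
kg = (sp.diff(psi, t, 2)
      - (sp.diff(psi, x, 2) + sp.diff(psi, y, 2) + sp.diff(psi, z, 2))
      + m**2 * psi)
assert sp.simplify(kg) == 0
```

Off shell (omega not tied to k and m), the same expression would not vanish, which is why the overlap with the one-particle state forces the energy to be omega k.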
So this is actually a very important object. Forget about this definition-- let me just write this object again. So this G plus-- this object 0, phi x, phi x prime, 0-- we motivated it as the overlap between these two approximately localized states. But in quantum field theory, this is a very important object in its own right, and we normally assign the notation G plus to it. As for the applications of this object-- we will not talk too much about them-- but in condensed matter, for example, if phi describes some continuum description of a spin system, then this would measure the correlation between the spins at different locations and different times. So this is what people call a correlation function. And so in condensed matter, this actually plays a very important role: it measures the correlation between different physical observables. And here-- for x and x prime at non-equal times, in general, phi x and phi x prime do not commute. Remember, our canonical commutation relation says that phi x and phi x prime commute when they are at equal time. When they are not at equal time, in general, they don't have to commute. So the ordering here is actually important-- the ordering of the phi's in G plus is important. So I can consider objects with a different ordering: if I put the phi x prime first, I can also consider that object. This is, in general, a different function, because they don't commute, and we call it G minus. So this is G minus of x, x prime, and in that notation it is also just G plus of x prime, x. And so in general, G plus and G minus are not the same.
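To make the ordering issue concrete, here is a toy numerical check (entirely my own construction, not from the lecture) using a single harmonic-oscillator mode, phi(t) = (a e^{-i omega t} + a-dagger e^{i omega t}) / sqrt(2 omega), for which the two orderings give G_+(t, t') = e^{-i omega (t - t')} / (2 omega) and G_-(t, t') = G_+(t', t):

```python
import numpy as np

omega = 1.3
t, tp = 0.7, 0.2   # two unequal times

def G_plus(t, tp):
    # <0| phi(t) phi(t') |0> for a single mode of frequency omega
    return np.exp(-1j * omega * (t - tp)) / (2 * omega)

def G_minus(t, tp):
    # <0| phi(t') phi(t) |0>: same expression with the times swapped
    return G_plus(tp, t)

# Unequal times: the two orderings differ, so phi(t) and phi(t') do not commute
assert not np.isclose(G_plus(t, tp), G_minus(t, tp))

# G_minus is the complex conjugate of G_plus (hermiticity of phi)
assert np.isclose(G_minus(t, tp), np.conj(G_plus(t, tp)))

# Equal times: the two orderings agree, consistent with the canonical
# commutation relation at equal time
assert np.isclose(G_plus(t, t), G_minus(t, t))
```

The field-theory Wightman functions behave the same way mode by mode; the only extra ingredient is the momentum integral over all modes.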
You can also define some other functions. Since phi x and phi x prime don't commute, we can consider more general orderings-- superpositions of these two. So you can define the so-called retarded function, which is defined to be theta of t minus t prime, times the commutator of phi x and phi x prime. So this is another object you can define, which is the difference between the two, but it is only non-zero when t is greater than t prime; when t is smaller than t prime, it's 0. So this is another object we often use. It is often written as theta of t minus t prime times Delta of x, x prime, where we call this commutator object Delta of x, x prime. Sometimes, people also use the following object: G A, equal to minus theta of t prime minus t, times Delta of x, x prime-- so it is only non-zero when t prime is greater than t. So this first one is called the retarded function, and this one is called the advanced function. And finally, you can define an object called G F, which is built from both orderings: when t is greater than t prime, you take G plus, and when t prime is greater than t, you take G minus. Again, this G F is a function of x and x prime. So because the fields don't commute, these are the various objects you can define. At the moment, they may seem not very intuitive to you-- why should we worry about those objects? Later, you will see that some of these objects play a very important role. For now, they're just definitions, which we will use later. Yes? STUDENT: So considering spins on a lattice, would x and x prime be the positions of the spins on the lattice? So why would it physically differ which one I write first? It seems arbitrary to switch them, right? PROFESSOR: Yeah, if you do them at equal time, then it doesn't matter.
But remember, x and x prime, they don't have the same time, and if they're not at the same time-- then actually, sometimes, the ordering matters. Yeah, whether you do this measurement first, and then you wait a while to do that measurement, or you do that measurement first and then-- yeah, then that can be different. Yeah? STUDENT: So G minus is x' inner product with x? PROFESSOR: Oh, yeah. STUDENT: G plus just uh, wouldn't it just be-- PROFESSOR: Yeah, just they're switched. Yeah, remember this order is important. So here, I have x first and x prime second. So this order is different-- is important. STUDENT: But is G minus then just the complex conjugate of G plus? PROFESSOR: Yeah, it is the complex conjugate of G plus. Other questions? Yeah? STUDENT: Yeah, so like the retarded and advanced ones aren't like Lorentz covariant because of the theta function? PROFESSOR: Yeah, we will talk about that. Yeah, they are actually. Yes? STUDENT: Can you go back and explain more about why the non-equal time non-equal position operators don't commute? PROFESSOR: Yeah, it just says our canonical-- yeah, our canonical commutation relation only requires them to commute at equal time. And when they are not at equal time, each of them will evolve. And remember, the phi itself is-- phi itself depends on time. Yeah, so let me just-- if I just write the-- yeah, let me just write the-- so this is the sum over k, say a k u k plus a k dagger u k star. So this is my expression for phi. So the canonical commutation relation only requires that they are equal when t equal to t prime. But when t is not equal to t prime, I just have some general expression. I do the commutator-- it does not guarantee you will get 0. Yeah, you just get-- yeah, you have different terms. They may not cancel each other. So at t equal to 0, we guarantee that they cancel each other, but when t is not equal to 0, then they're not guaranteed to cancel each other. But we will figure out when they cancel and when they will not cancel.
Yeah, we will talk about that more explicitly. Yes? STUDENT: Is it correct that if the points are spacelike separated then we can rotate so that t-- to a frame where t--? PROFESSOR: Yeah, we will talk about it. Yeah-- STUDENT: [INAUDIBLE] PROFESSOR: Yeah, we will talk about that. Good? So let me just emphasize again, this is something in principle you can define. And it's phys-- some of its meaning will be clear later. And in the next pset-- not in this pset-- in the next pset, you will see an example in which this thing actually plays a very important role. So this is the analog of the retarded function in E and M. Do you still remember the retarded function in E and M? And yeah, so this is the quantum version of this retarded function in E and M. And you will see an application of this in pset 4. And later, we will use this thing all the time. So this G F, we will use it all the time when we consider interactions, and so that's why we want to introduce it here. And this G plus, G minus are called the Wightman functions. So sometimes, called the Wightman function. The G F is called the Feynman function. So Feynman was the first person who introduced it. And yeah, so G plus, G minus were introduced by other people, but Wightman made many important uses of them in the early days of quantum field theory, so they're called the Wightman functions. So now, let's just study a little bit the properties of those functions. So you can also think of these as essentially the simplest observables of the system. And in particular, if you think of them as correlation functions, then that just tells you the correlation between the phi at different points. So this is, in some sense, also the simplest observable in your theory. Good? Do you have other questions? Yes? STUDENT: So this is the correlation for the vacuum state, right? PROFESSOR: Yeah, this is the correlation in the vacuum state. STUDENT: So you have a different state?
PROFESSOR: Yeah, if you're not in a vacuum state, of course, you get a different function. STUDENT: So is there a way to measure for other states? PROFESSOR: Yeah, you can compute. So in this theory, we can compute in any state. But we often just probe in the vacuum state. Other questions? Yeah, for example, in the lab, if you can prepare whatever state you prepare, you calculate this object in that state, but the vacuum is the simplest one. It's the simplest. Yes? STUDENT: [INAUDIBLE] we can, we can define a useful localized state in space and [INAUDIBLE] other [? state ?] defined is also [INAUDIBLE], spacetime is useful [INAUDIBLE]? PROFESSOR: No, it's not quite in spacetime. We will see in what sense it's approximately local in space. Good. Now, let's just discuss the properties of those objects. And so the first thing is that-- remember, phi satisfies the following equation. So this is the operator equation satisfied by phi. And so if you act this on the-- so suppose this is-- so here, you have two arg-- here, you have two arguments, x and x prime. So here, when we write a differential operator, we always refer to x. It's always the derivative with respect to x. So phi must satisfy this equation. So if you act this on the G plus and the G minus, you immediately get 0. So you find-- because of the operator equation, the G plus/minus of x and x prime immediately give you 0, just because phi satisfies this equation. Oh, sorry, it should be a minus sign, I think. And similarly with this-- similarly with delta-- so the delta is the difference between the two. It's the commutator. So the delta is essentially the difference between G plus and G minus, so the same thing holds for delta. So this is-- the first property is that they satisfy the same equation as the classical equation for phi. So the G plus and delta, they satisfy it. And we will talk-- and then you can now also look at the G F and the G R.
So now, if you act this object on G R, G A, or G F, the story is a little bit tricky, because the partial square contains time derivatives. And when you act with the time derivatives-- the time derivatives do act on those theta functions. And when you act with a time derivative on a theta function, you will get some delta functions, et cetera. And so the story is a little bit more intricate. But if you just carry it through-- it just carries through-- then you can show. And so this is a simple exercise you should try to show yourself-- we'll not do it here-- that you find, when you act this on the G R, A, and F-- any of them-- you get the same equation. The right-hand side becomes non-zero-- it actually gives you a delta function. So this is a four-dimensional delta function. So in both spatial location and in time. So heuristically, you can understand that this delta function in time comes from taking the derivative of the theta functions, and the delta function in the spatial direction can come out in the following way. So here, you have two-- so here, you have two time derivatives in the partial square. So imagine you have one time derivative acting on the theta function, which then gives you a delta function in time. And then if you have one time derivative acting on this phi, turning it into pi, then the equal time commutator gives you a delta function in the spatial direction, and so that's how you get that. Yeah, but it's easy to check yourself. So this is the first property-- that they satisfy those nice equations. The second property is that phi x and phi x prime, even though in general they don't commute, they actually commute for space-like separations. So that means that if you look at the commutator of phi x and phi x prime, this is actually equal to 0 for x minus x prime squared greater than 0, for space-like separations.
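The two properties just stated can be summarized as follows. The overall signs depend on conventions; with the mostly-plus signature implicit in the lecture (so the field equation is the Klein-Gordon equation with a minus sign on m squared, and the momentum-space equation quoted later reads (k^2 + m^2) G(k) = -i), the coefficient of the delta function works out to i:

```latex
(\partial_x^2 - m^2)\, G_\pm(x,x') = 0, \qquad
(\partial_x^2 - m^2)\, \Delta(x,x') = 0,
```
```latex
(\partial_x^2 - m^2)\, G_{R,A,F}(x,x') = i\,\delta^{(4)}(x-x'),
```

where the derivatives act on x, the homogeneous equations follow because phi itself obeys the operator equation, and the delta-function source for G R, G A, G F comes from the time derivatives hitting the theta functions, as described above.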
So this can be checked by explicit computation. So we know the mode expansion of phi. You can just do the commutator-- yeah, you can just check it. But you can also-- but there's also a simple argument to make it. So you can do-- you can check by explicit computation. So here, I give you an alternative argument. Here, I give you an alternative derivation. So yeah, it's very simple, but let me just divide it into three small steps. So essentially, by definition-- so let's just consider-- yeah, we consider the commutator, which here we called delta. So the delta x plus/minus-- oh, x and x prime, which is the commutator. Yeah, let me define-- yeah, sorry-- yeah, so delta here is the expectation value in 0, so let me call this one delta hat. So the delta hat is defined to be the commutator. So the first claim is that this is actually a C number. So this is a C number, and the reason is very simple. So just some constant-- even if it's non-zero, it can, at the most, be a constant. Be some-- yeah, C number means that it's not an operator. So the reason this is a C number is because if you look at the phi, it's linear in a and a dagger. If you look at the commutator between phi and phi itself, you always just get commutators between a and a dagger. a and a dagger just give you a constant, and so this just-- you get some C number. So in fact-- so this delta hat is actually the same as delta because, if it's a C number, taking this expectation value doesn't do anything. So the second step is that now let's consider the Lorentz transformation-- the operator which generates the Lorentz transformation, which is given by-- remember, we discussed last time. So this is the generator. So this is the unitary-- so this is the operator of the Lorentz transformation. Let's look at this object. And then by definition-- so this operator acting on phi x, with its dagger on the other side, will give you phi prime x.
And phi prime x transforms as phi under the Lorentz transformation, and this is the same as just doing an inverse Lorentz transform on the coordinates. So this is the-- so by the way, just-- do you feel comfortable about this equation? Do you know where this equation comes from? So you're all comfortable with this? OK, good. So by definition, U acts on phi as this. So now, let's just act both-- act U and U lambda dagger on both sides of this equation. And then we get-- so on the right-hand side-- so this is a unitary operator-- because the M is Hermitian, this is a unitary operator. So the right-hand side, you just go back to this C number. It does not change. So the equation does not change. And the left-hand side-- so the delta is the same. So you find that the-- yeah, so you find the delta x, x prime, which is equal to this C number, is the same as U lambda times the phi x, phi x prime commutator times U lambda dagger. And that gives me phi of lambda minus 1 x commutator with phi of lambda minus 1 x prime. So that's the same as delta of lambda x and lambda x prime. Good? So now, we can look at the last step. So you see, when we make a Lorentz transformation on delta, actually nothing changes, and the value is the same, because the commutator is a C number. And then-- now remember, for any space-like separated x and x prime, we can always find some lambda-- some Lorentz transformation-- such that lambda x and lambda x prime are at equal time. They have the same time. We can always make such a transformation. Yeah, and let me call it something else-- t tilde. So your original x's-- you can always transform them to equal time. And then according to this one, this will be identically 0. And then we find delta x and x prime equal to 0, because now, this is evaluated at equal time. So this is very intuitive. When you make a Lorentz transformation, you can-- yeah-- but this is a precise proof of the statement. Any questions on this? Yes?
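The step used in this proof — that any spacelike-separated pair of events can be boosted to equal time — is easy to check numerically. Here is a minimal sketch in 1+1 dimensions, assuming numpy; the function name `boost` and the specific events are just for illustration:

```python
import numpy as np

def boost(v):
    """Lorentz boost matrix acting on (t, x) for velocity v (units with c = 1)."""
    g = 1.0 / np.sqrt(1.0 - v**2)
    return np.array([[g, -g * v],
                     [-g * v, g]])

# A spacelike separation: (t, x) = (1, 2), so (x - x')^2 = -t^2 + x^2 = 3 > 0
# in the mostly-plus signature used in the lecture.
t, x = 1.0, 2.0

# Choosing v = t/x (which is < 1 exactly because the separation is spacelike)
# sends the time component of the separation to zero.
v = t / x
tp, xp = boost(v) @ np.array([t, x])

print(tp)              # ~ 0: the two events are now at equal time
print(xp**2 - tp**2)   # the invariant interval is unchanged
```

For a timelike separation (|t| > |x|) the required v would exceed 1, so no such frame exists — which is why the argument only kills the commutator outside the light cone.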
STUDENT: What's the x on [INAUDIBLE]? PROFESSOR: Sorry? STUDENT: [INAUDIBLE] PROFESSOR: Yeah, for x minus x prime squared greater than 0. So this is the four-distance squared. You're asking about this notation? Yeah, this is just-- it's the four-distance squared-- the separation between them. Yes? STUDENT: I have a question on the first property. So that equation up in the corner-- it says that G is like the Green's function for your Klein-Gordon operator? PROFESSOR: Yeah. STUDENT: So why does that-- is there an intuition for why the correlator in the vacuum state gives you the Green's function? Because then you could use this to get any classical solution, right? PROFESSOR: Yeah, so the intuition is that essentially-- yeah, first is that this kind of thing is simple enough. Essentially, you can just see it by observation, but also in a more complicated theory. And you can actually derive it-- the retarded Green's function always has this form. Yeah, that's through a more elaborate derivation. STUDENT: When you're talking about classical field theory, usually solving for the Green's function is kind of hard and you have to expand-- like, here, we quantize it and then got out-- PROFESSOR: Yeah, this is different. So classically, we just define those functions using those equations. So here, we just say-- in quantum field theory, the object can be written this way. Yeah, those objects can be written this way. And this definition just follows from the standard-- yeah, if you do quantum mechanics, a retarded function can always be written like this. Yeah, this is also just quantum mechanics, but it requires a little bit of calculation to see that. Other questions? Yes? STUDENT: Should it be lambda minus 1? PROFESSOR: Oh, yeah. Sorry. Yeah, it's lambda minus 1. Yeah, so here also it should be lambda minus 1. Other questions? OK, good.
Yeah, so this also shows-- yeah, so when we talked about the canonical quantization-- so we mentioned that we need to impose this equal time commutator to be 0. That's our canonical commutation. Yeah, we impose the canonical commutation relation at a single time. And some of you were asking-- does that actually break Lorentz symmetry, because we have to choose a single time? But now, you can see actually this does not break Lorentz symmetry, because they're only space-like separated, and the commutator is always the same. So no matter what frame you choose-- in any frame, if it's equal time, it's always space-like separated in some other frame. Good? So that means-- so an immediate conclusion-- it means that for space-like separated x and x prime, G plus equal to G minus is equal to G F. So the time ordering does not matter, so the G plus-- then you can do G minus. And then the G F is equal to the sum of them. And then when-- here, it's for t greater than t prime. Here, it's for t prime greater than t. And then these theta functions just add up to 1, because these two become the same. And the G R and the G A equal to delta equal to 0 for the space-like separated case. So those functions are pretty simple. So the-- yeah. So now-- so the last property-- now, you can show, due to the spacetime translation symmetry and the Lorentz symmetry of the vacuum-- of the vacuum state-- so you can show-- I think this will be-- this is a pretty simple argument, but I will leave it for you to show yourself. You can show that any of those functions only depends on the difference between them-- and in particular, only depends on the four-distance between them. And this G here can be any of the G R, G A, G plus/minus, G F, et cetera. So all of them have very nice properties. Even though naively they depend on two arguments. So x has four components. x prime has four components.
So naively, this is a function of eight variables, but this tells you, once you use all the symmetries, this is actually a function of only a single variable. This is a very powerful statement. So they have very-- yeah, so this answers one of the questions some-- one of you asked earlier-- that despite the theta functions, you can show that they still have very nice properties under Lorentz transformations. You can show they have nice properties under Lorentz transformations. Good. Other questions? Yes? STUDENT: [INAUDIBLE] PROFESSOR: Yeah, so this is very easy. When you have a translation symmetry, then the reference point you choose is not-- you can choose any point to be the reference point. And so you can just choose x, and then only the separation between x and x prime matters, because you can just choose x prime, for example, to be at the origin-- and then only the separation. And then the Lorentz symmetry tells you that only the distance matters. It doesn't matter the direction. Yes? STUDENT: Does the commutator have this property too, or is it just G? PROFESSOR: Yeah, the commutator also has this property. Yeah, the delta also has this property. Yes? STUDENT: Is this only true when x and x' are spacelike separated? PROFESSOR: No, this is true in general. This is always true, but it's only in the vacuum state. So the key is that you have to use that the vacuum state is invariant under those transformations. Yeah, it's a-- once you understand how to do it, it's a very simple argument, so that's why I will leave it as one of the exercises. Other questions? OK, good. So now, let's compute this object G plus. We have been talking about it. And now, finally, after discussing all these general properties-- so let's compute them explicitly. So let's just first compute the G plus. Did I just erase the definition of G plus? Yeah. So this is easy to do. So those properties you can just get by symmetries.
Of course, you can also get them by doing explicit calculations, but you can get them without doing any explicit calculation, just based on symmetries. So here, when we calculate this object explicitly, we will see it satisfies those properties. So we just put in the expansion of phi in phi x and phi x prime. And then-- yeah, then we have the 0, and then we have a k u k evaluated at x plus a k dagger u k star evaluated at x, times a k prime u k prime evaluated at x prime plus a k prime dagger u k prime star at x prime, and then acting on 0. So this d k comes with the 2 omega k, and this essentially gives you phi. And the other one gives you the phi-- so this gives you the phi x, and this one gives you the phi x prime. I just calculated this. So now, this is easy to do. This has now just become an exercise in harmonic oscillators. So only this term multiplying this term is non-zero. The rest of the terms are 0, because this a will annihilate the vacuum, and these two a daggers together will annihilate that vacuum. So the only term not vanishing is this term combined with this term. So you just get-- and there's a delta function when you evaluate them between the vacua. And so you find-- and then you have the t minus t prime. So this is the object. So we can do this integral further, and we can also rewrite this in a slightly different form. So you can also write this as an integral-- so here, omega k is on shell, so I can also write it in a more general form-- in a form like this-- 2 pi theta k 0 delta k squared plus m squared, exponential i k dot x minus x prime. So in this form, the k is unconstrained, but I have a delta function to enforce it on shell. And then I put the theta k 0 to make it positive. And then you can see that this will lead to that. You can see that. And similarly, you can find-- similarly, you can find the integral expressions for G R, G A, and G F. I will not give the expressions here. Yeah, we will-- yeah, you can easily find them yourself. So let me just make one remark.
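The result of this computation, written out in both forms just described:

```latex
G_+(x,x') = \int \frac{d^3k}{(2\pi)^3\, 2\omega_k}\;
   e^{\,i\vec k\cdot(\vec x-\vec x')}\, e^{-i\omega_k (t-t')}
 = \int \frac{d^4k}{(2\pi)^4}\; 2\pi\,\theta(k^0)\,\delta(k^2+m^2)\;
   e^{\,i k\cdot(x-x')},
\qquad \omega_k = \sqrt{\vec k^{\,2}+m^2}\,.
```

In the covariant form the delta function puts k on shell and the theta function selects the positive-frequency branch k^0 = +omega_k, which reproduces the first form after doing the k^0 integral.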
So you can now actually calculate this object. So now, let's calculate this object. Remember earlier, we said this object is the overlap between x and x prime. So let's see in what sense this is like a delta function-- in what sense it's like a delta function when we set them to be at equal time. So now, note-- so let's consider the equal time t equal to t prime. So for t equal to t prime-- so the integral becomes the following-- G plus x and x prime just becomes this. And this integral clearly is non-zero. So that's why we said earlier, this is not a localized state. So this is the overlap of this x and x prime. So this is non-zero for x not equal to x prime, and so this is not a localized state. Yeah, so this is not proportional to a delta function, because of this factor here. If you didn't have this factor, then it would be a delta function, but with this one, it's not. But nevertheless, you can calculate this integral exactly. And also, just based on the-- based on the spherical symmetry-- rotational symmetry-- you can immediately see that it only depends on the distance between them. So this G plus will only be a function of r, and r is the distance between them-- yeah, just based on symmetry, you can easily convince yourself. And this only depends on the distance between them just from the symmetry of the integral. Yes? STUDENT: So when we added the 2 omega k [INAUDIBLE]? PROFESSOR: Yeah, it's for the-- for example, for getting the commutator of a, a dagger equal to 1. Yeah, because that's the normalization for the a dagger. Yeah, that's a good question. I forgot to mention that. Other questions? So this object at t equal to 0 only depends on the distance between them, because this is a spherical integral-- because this one only depends on the magnitude of k. And then this is just a standard Fourier transform. So now, you can just evaluate this integral explicitly. I urge you to do it yourself.
And then you can show this is actually proportional-- you can actually evaluate this using Bessel functions. So this gives you a modified Bessel function-- K0 of m r. So do any of you-- are any of you an expert on Bessel functions? So do you know the behavior of this function when r is large? STUDENT: Goes to 0. PROFESSOR: Yeah, it goes to 0. That's a good intuition, but it goes to 0 in what way? Actually, exponentially. So it actually goes as exponential minus m r for r large-- for m r greater than 1. Now, you can see in what sense this is an approximately localized state. You see, this is not 0, but this is pretty small at large distance. So at distance r much greater than 1 over m-- so when r is much greater than 1 over m-- that object is very small, so this exponential is small. So this tells you that even though this is not a perfectly localized state, this is localized to a distance of 1 over m. So if these two points are separated by more than 1 over m, then the overlap becomes very small. And so this is-- yeah, so sometimes, we call this xi. And in condensed-matter language, when you interpret this as a correlation function-- the correlation between the phi x and the phi x prime-- this is also called the correlation length. So when the phi-- when the distance is beyond 1 over m-- or beyond this xi-- then the phi's are no longer correlated. So in this sense, this is an approximately localized object, and it's essentially the Compton wavelength of the particle. It's the Compton wavelength of the particle. And once you go outside the Compton wavelength of the particle, then the overlap becomes very small. So this makes perfect physical sense. It makes perfect physical sense. There was a hand? STUDENT: Yeah, I had a question, but then I realized that my thinking was wrong. PROFESSOR: OK, good. Yeah, so this is just [INAUDIBLE] of what we said earlier. So you can similarly get this kind of integral expression for those things.
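A note on the K0 quoted here: the equal-time integral gives exactly (1/2pi) K0(mr) in 1+1 dimensions, via the identity that the cosine transform of 1/sqrt(k^2+m^2) is K0(mr); in 3+1 dimensions the analogous integral gives (m/4pi^2 r) K1(mr), with the same e^{-mr} falloff, so the correlation-length conclusion is unchanged. A quick numerical check of the 1+1-dimensional statement, assuming scipy (the function name is just for illustration):

```python
import numpy as np
from scipy import integrate, special

m = 1.0  # mass; sets the correlation length xi = 1/m

def G_plus_equal_time(r, m=1.0):
    """Equal-time Wightman function in 1+1 dimensions:
    (1/2pi) Int dk e^{ikr}/(2 w_k) = (1/2pi) K0(m r)."""
    # QAWF (weight='cos' with infinite upper limit) handles the slowly
    # converging oscillatory Fourier integral.
    val, _ = integrate.quad(lambda k: 1.0 / (2.0 * np.sqrt(k**2 + m**2)),
                            0, np.inf, weight='cos', wvar=r)
    return val / np.pi  # two symmetric halves of the k-integral, over 2*pi

r = 1.5
print(G_plus_equal_time(r), special.k0(m * r) / (2 * np.pi))  # should agree
```

Comparing G at r = 5/m and r = 2/m against e^{-m(r2-r1)} confirms the exponential decay beyond the Compton wavelength.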
But rather than the coordinate-space expressions, we will actually more often use the expressions in momentum space. So often-- in the future, we'll often need the expressions in momentum space. So we can just consider the Fourier transform. So here is the convention for the Fourier transform. So d four x-- so any object or function of x-- again, this is a four-vector-- times e to the minus i k x. So I use the same notation, G, to denote its Fourier transform, and distinguish them only by k and x, because it's just annoying to always put some tilde above it. And the inverse transformation would be the G x, x prime. So you see here explicitly-- yeah, there, we already saw explicitly that it is only a function of their separation. And then if you use the symmetry here further, then you see it depends only on the distance. So we can also just write this in terms of the-- this is e to the i k dot x minus x prime, G k. So this is our convention for the Fourier transform. So from here, if you just look at this expression and compare with this expression, then immediately we conclude that for this Wightman function, G plus of k is just given by 2 pi theta k 0 delta function k squared plus m squared. So this is the expression for the G plus. Yes? STUDENT: So this localization is [INAUDIBLE], which is greater for higher momentum, so is this a manifestation of the uncertainty principle? PROFESSOR: Yeah, this is a-- so here, this is, in some sense, the uncertainty principle. This-- here, we just consider the vacuum state, so there's not-- the state itself does not have any energy. And so this is just the Compton wavelength of the particle itself. And it's just like a static Compton wavelength. Other questions? Yes? STUDENT: If this is the vacuum state, then what does it mean to be [INAUDIBLE]? PROFESSOR: Yeah, but still you have fluctuations. This is quantum mechanics. Yeah, for example, you can calculate an analogous object for the harmonic oscillator.
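The harmonic-oscillator analog mentioned here, the vacuum correlator of x(t) with x(0), can be computed directly in a truncated Fock space; it comes out (1/2w) e^{-iwt} — non-zero purely because of vacuum fluctuations, just like G plus. A sketch assuming numpy (units with hbar = mass = 1; the truncation size N is arbitrary, since only the |1> state contributes):

```python
import numpy as np

# Truncated Fock space for a single harmonic oscillator.
N = 12
w = 1.3  # oscillator frequency; plays the role of omega_k
n = np.arange(N)

a = np.diag(np.sqrt(n[1:]), k=1)      # annihilation operator
ad = a.conj().T                       # creation operator
x_op = (a + ad) / np.sqrt(2.0 * w)    # x = (a + a^dagger)/sqrt(2w)
E = w * (n + 0.5)                     # energy eigenvalues

def wightman(t):
    """<0| x(t) x(0) |0> with x(t) = e^{iHt} x e^{-iHt} (Heisenberg picture)."""
    U = np.diag(np.exp(-1j * E * t))  # e^{-iHt} is diagonal in the Fock basis
    x_t = U.conj().T @ x_op @ U
    vac = np.zeros(N); vac[0] = 1.0   # the vacuum state |0>
    return vac @ x_t @ x_op @ vac

# Only |<0|x|1>|^2 = 1/(2w) survives, so the exact answer is e^{-iwt}/(2w).
print(wightman(0.7), np.exp(-1j * w * 0.7) / (2 * w))
```

This is the quantum-mechanical version of G plus: nonzero at t = 0 (vacuum fluctuations) and oscillating, not decaying, because a single oscillator has no spatial separation to suppress it.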
So if you have a harmonic oscillator, the analogous object is this object, and you can also calculate this non-zero just because of the fluctuations. Other questions? OK, good. So this is the expression for the G plus. And for those object, we can get a momentum space expression by going through the same procedure. Just plug them in, labor through, and then write the final answer in this four momentum integral form, and then you just read the answer. For them, we can do the same thing. But it's actually much easier. Instead of going through that procedure to calculate those quantities, it's actually much easier to start from here, because they satisfy these equations. So let me call this star. So to find G F k the same with G R, A k, it's actually easier to use star. Unfortunately-- so now, if you look at this equation-- if we look at this equation, then let's use Fourier transform on both sides. We do a Fourier transform on both sides. So the left-hand side, the partial-- each derivative just gives you a k. So this is the standard rule for Fourier transform. And so here, you just get k squared plus m squared. And let me just call it G, so denote any of them, equal to minus i. Because the right-hand side, when you do a Fourier transform, it just becomes one. It's a delta function. So now, we can immediately write down the expression in momentum space. But now, you see we have a problem. So what problem do we have? Yes? STUDENT: When you have k squared equals negative m squared it's like a singularity. PROFESSOR: Good. Well, we have a lot of problems. So this is one of the problems. We have another problem. Yes? STUDENT: Is this on shell? PROFESSOR: Hmm? STUDENT: [INAUDIBLE] PROFESSOR: So k is not on shell, because when we do a Fourier transform, it's for the general k. Yeah, it's defined for the general k. Indeed, for the on shell value of k, then you will have a singular behavior. But for the-- but we still have another problem. Yes? 
STUDENT: They're not supposed to be complex? PROFESSOR: Yeah, it is complex. It's OK. Yeah, that aspect is OK. Yeah, so since-- [INAUDIBLE] So you see here, we have defined three different functions, but here, how many solutions do we have? We only have one. It seems like we have a unique solution. Then we have a problem. It seems like-- that seems to say all these three are the same. But if you calculate them going through these theta functions [INAUDIBLE], they are not the same. So it turns out these two problems are related. It turns out these two problems are related, because now, if you consider the coordinate-space expression, which is obtained by the Fourier transform of this guy-- again, by the Fourier transform of this guy-- so in the-- so if you write down explicitly 1 over k squared plus m squared-- so 1 over k squared plus m squared, if we write it explicitly-- so this is 1 over minus omega squared plus k squared plus m squared, written explicitly. And then this is equal to 1 over minus omega squared plus omega k squared. So this k squared plus m squared is just the omega k squared. So as you said, this actually has a singularity when omega equals plus or minus omega k-- when you satisfy this on-shell condition. But this singularity is actually along the integration contour of omega, because this four-dimensional integral is given by d omega d cubed k. So the omega-- so this goes from minus infinity to plus infinity. And these two singularities are at real values of omega. So they actually-- so if you look at the omega axis-- so at these two points, actually, they become singular. The integrand is singular. So the integral is actually not well-defined. So the integral is not well-defined. So in order to define the integral-- in order to define this omega integral, we have to do your standard trick in complex analysis. So what is that trick? Yes? STUDENT: [INAUDIBLE] PROFESSOR: That's right. You go around the singularity. So now, you go to the complex plane-- the complex omega plane. So now this is the real omega.
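In equations, the apparent puzzle just described:

```latex
(k^2 + m^2)\, G(k) = -\,i
\quad\Longrightarrow\quad
G(k) = \frac{-i}{k^2 + m^2} = \frac{-i}{-\,\omega^2 + \omega_k^2}\,,
\qquad \omega_k^2 = \vec k^{\,2} + m^2 ,
```

which seems to give a single answer for three different functions — but the poles at omega = plus or minus omega_k sit exactly on the real integration axis, so the omega integral is ambiguous until a contour prescription is chosen, and the different prescriptions are precisely what distinguish G_R, G_A, and G_F.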
This is the imaginary omega. You go to the complex omega plane. Now, the integration contour is along the real axis. And now, you have four different choices. You can either go up or go down around each one of them. And now you have four different choices. And these three Green's functions correspond to three of the choices. Then there's a fourth one, which is not frequently used, so we normally don't give it a name. And the fourth one, sometimes, we call G F tilde. It's just-- anyway, so there are four different choices of going around the singularities, and then that gives you four possible solutions. And then that gives you the-- yeah, so now let's talk about the G R-- let's do them one by one. So for G R-- so remember, G R is defined so that it should be proportional to theta of t minus t prime. So that means this should be 0 for t minus t prime smaller than 0. This should be 0 for t minus t prime smaller than 0. So how do we achieve that by going around the poles? So remember, in this integral, the omega-dependent piece is e to the minus i omega t minus t prime. So when t minus t prime is smaller than 0, if you want to do this integral using contour integration, then you can close-- so this is smaller than 0, and then this is positive. And then you can close the contour in the upper half plane. You can close the contour in the upper half plane. In order for this to be identically 0, you need your integration contour to not enclose any singularity. So that means for the retarded, you need to go around the singularities along which direction? We only have one minute left. [LAUGHTER] For it to be 0, when you integrate-- so you want to close in the upper half plane. You want to close it there because the t minus t prime is smaller than 0, and then the exponent is positive for omega in the upper half plane-- this is a decaying exponential. And so you want there to be no singularity inside the contour, and then you want to go around the singularities this way.
Going above the singularity. When you're going above the singularity, then this contour doesn't include-- within this contour, there's no singularity. Then, this is identically 0 from the Cauchy theorem. So it tells you that for the retarded Green's function, you need the contour like that. So similarly, for the advanced-- because for the advanced, we just change the direction. So this is for the retarded. For the advanced, it's proportional to theta of t prime minus t. Then you just go in the opposite direction. So this one is for the retarded, and this one for the advanced. And then, finally, if we want to do it for the Feynman-- then you choose one of them going up and one of them going down. I always forget which one goes up, which one goes down. Yeah, so actually, this one you go down, and this one you go up. So this is the Feynman one, for the G F. That's all for today.
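The retarded prescription described here — going above both singularities, which is equivalent to pushing both poles into the lower half omega-plane via omega -> omega + i*epsilon — can be verified numerically at fixed spatial momentum. A sketch assuming numpy (the grid parameters and function name are just for illustration; with the regulator kept finite, the exact answer for t > 0 picks up a factor e^{-eps t}):

```python
import numpy as np

w_k = 1.0           # omega_k = sqrt(k^2 + m^2) at some fixed spatial momentum
eps = 0.05          # i*epsilon regulator: poles sit at +/- w_k - i*eps
L, dw = 300.0, 0.005
omega = np.arange(-L, L, dw)

def G_R(t):
    """G_R(t) = Int dw/(2pi) e^{-i w t} (-i) / (w_k^2 - (w + i eps)^2)."""
    integrand = -1j * np.exp(-1j * omega * t) / (w_k**2 - (omega + 1j * eps)**2)
    return np.sum(integrand) * dw / (2 * np.pi)  # Riemann sum over the contour

# Closing the contour in the upper half plane for t < 0 encloses no poles;
# for t > 0 closing below picks up both, giving -i e^{-eps t} sin(w_k t)/w_k.
print(abs(G_R(-2.0)))   # ~ 0
print(G_R(2.0))         # ~ -i e^{-0.1} sin(2)
```

Flipping the sign of eps (poles in the upper half plane) would instead produce the advanced function, vanishing for t > 0.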
MIT 8.323 Relativistic Quantum Field Theory I, Spring 2023. Lecture 12: More on Perturbation Theory and Feynman Diagrams. [SQUEAKING] [RUSTLING] [CLICKING] HONG LIU: So again, let us first recall, say, if we want to calculate the n-point function in the interacting theory. And this is given by the following ratio in the free theory. You look at this T, then you look at this. Again, take this product to be X, take T of X times the exponential of i S_I in the free theory, and then divide it by the time-ordered exponential of i S_I in the free theory. So then what we do is we just expand these two exponentials in power series. And when you expand in power series, then, for example, the nth order term in the upstairs has the following structure. You have i to the power n over n factorial, then T of X times S_I to the power n. And you can also expand the downstairs. So in the downstairs you have an almost identical structure, except you don't have this X. So you have T, then you just have n S_I's. There are n of them. And last time we described how to compute, say, typical terms in here. Let me call this equation star and this equation star star. So last time we described how star and star star can be computed using Wick's theorem. But in practice what we do is that we first draw Feynman diagrams and then we convert the Feynman diagrams into analytic expressions. Normally we go through this procedure, rather than directly doing the Wick contraction. And the second line, of course, can be considered as a special case of the first line, with this X set to the identity. So any diagram contributing to star-- sorry, here I have n, let me call it m, so that they don't clash, because we already have n here. So mth order.
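The ratio and expansion just recalled can be written schematically as follows (a sketch in standard notation; the free-theory vacuum is written as |0> and the interacting vacuum as |Omega>):

```latex
% Interacting n-point function as a ratio of free-theory expectation values:
\langle \Omega|\,T\{\phi(x_1)\cdots\phi(x_n)\}\,|\Omega\rangle
  = \frac{\langle 0|\,T\{X\,e^{iS_I}\}\,|0\rangle}
         {\langle 0|\,T\,e^{iS_I}\,|0\rangle}\,,
\qquad X \equiv \phi(x_1)\cdots\phi(x_n).
% Expanding both exponentials, the m-th order term upstairs is
\frac{i^m}{m!}\,\langle 0|\,T\{X\,S_I^m\}\,|0\rangle\,,
% and downstairs the same with X \to 1 (the vacuum diagrams).
```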
Any diagram contributing to star has n external legs. That comes just from this X, from this phi x1 to phi xn. And then also with m vertices, because each vertex comes from a power of S_I. But for star star, diagrams contributing to star star have no external legs, because there's no X here, there's no phi here. There are no external legs. So that's why they're called vacuum diagrams. The downstairs can be understood as the case in which you essentially take X equal to 1. So essentially we are calculating some kind of unnormalized transition. This is the overlap of the vacuum with itself from minus infinity to plus infinity. In Pset 6, which we just posted last night, you can show that this quantity, which comes from summing over all possible vacuum diagrams-- the diagrams with no external legs; this "no" is very important-- can be used to calculate the vacuum energy of the interacting theory. It's a pretty simple calculation, but it's an instructive one. Of course, this quantity, you will see, is divergent, but just as with the vacuum energy we calculated before in the free theory, this is the formal way to calculate it. And how to remove the divergences, et cetera, is something we will not discuss in this class. It will be in QFT 2. Any questions on this? OK, good. Let me say a few more words. We said star and star star can be computed using Feynman diagrams, so let me be a little bit more explicit. The computation of star or star star essentially involves two steps-- just to elaborate on that remark in the bracket. The first step is that we draw all inequivalent Feynman diagrams.
In this case, at mth order you have n external points and m vertices. So for star, you have n external points, say x1, x2, in coordinate space, and then you have m vertices. At each vertex-- let's call them y1, y2, up to ym-- you have four legs coming out. At each external point you have one line coming out, and then you just need to find all possible ways to connect them. That is what we mean by drawing all inequivalent diagrams. You just draw them; it becomes mechanical. Yes? AUDIENCE: Why do the vertices have to all have four legs? HONG LIU: More specifically, here we're talking about S_I equal to minus lambda over 4 factorial times phi to the 4th. We are considering this particular theory. I'm going to make remarks on other more general theories, but here it's more specific. Other questions? So this is for star. For star star, you just don't have the x's, you just have y's. You have m y's and each y has four legs. You just connect all of them together. In star star, the vacuum diagrams will always be closed, because there's no external leg-- all the legs will be contracted, and so it will be a closed diagram. Sometimes it's also called a vacuum bubble. Sometimes we call them vacuum diagrams or vacuum bubbles. Just note they're always closed diagrams. And the second step is that once you've drawn all the inequivalent diagrams, you convert each diagram to an algebraic expression using the Feynman rules. Then you just sum all of them together. And we talked about the Feynman rules last time. There are coordinate space rules and momentum space rules. You can do it in coordinate space or in momentum space. I will not repeat them here to save time.
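The "find all possible ways to connect the legs" step is pure combinatorics: a Wick contraction is a complete pairing of the legs, and 2k legs admit (2k-1)!! complete pairings. The following is a brute-force illustration of just that counting (an editorial sketch, not part of the lecture's Feynman-rule machinery; it ignores which pairings yield equivalent diagrams):

```python
def pairings(items):
    """Yield every complete pairing of an even-length list of leg labels."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for i, partner in enumerate(rest):
        remaining = rest[:i] + rest[i + 1:]
        for tail in pairings(remaining):
            yield [(first, partner)] + tail

def double_factorial(n):
    """n!! = n * (n-2) * (n-4) * ...  (used here for (2k-1)!!)."""
    result = 1
    while n > 1:
        result *= n
        n -= 2
    return result

# 2k legs can be joined into k propagators in (2k-1)!! distinct ways.
for k in range(1, 5):
    count = sum(1 for _ in pairings(list(range(2 * k))))
    assert count == double_factorial(2 * k - 1)
```

Many of these pairings correspond to the same diagram; that multiplicity is what typically cancels the 1/4! and 1/m! prefactors, up to the residual symmetry factors.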
Once we finish the second step, we just have a bunch of analytic expressions, and then you just need to do the integrals. Any questions? Good. Some further remarks on the first step. Here I considered a very special case, corresponding to S_I consisting of a single term, phi to the 4th. In principle, of course, S_I can contain multiple terms. If it contains multiple terms, then you will have multiple types of vertices. But the rule is similar. Suppose S_I contains two terms; then when you expand, you get many terms, corresponding to different combinations of the different types of vertices. But the rule is exactly the same, just now your diagrams are more complicated. Otherwise it's essentially identical. Another thing to mention is that in principle we can also have multiple fields. Here we considered only a single scalar field, phi, but you can, in principle, consider more than one field. If there are multiple fields, the story is again very much parallel, and the only difference is that each field-- each propagator-- is represented by a different line. Other than that, these two steps again apply. Just now you have different types of lines. For example, if you consider a theory with two scalar fields, phi 1 and phi 2, with S_I proportional to phi 1 squared phi 2 squared, then in this case you will have a propagator for phi 1, and you also have a propagator for phi 2-- now I can use a dashed line. Then the interaction vertex will have the following structure: you have two solid lines and two dashed lines. That's your vertex. Again, you just contract all possible lines according to your Feynman rules. Good?
Also, one remark I want to emphasize-- just another point to stress-- is that the momentum space correlation function is obtained by Fourier transforming Gn of x1 to xn; this, times the delta function, gives you the Fourier transform of that. When you do the Fourier transform, p1 to pn just come from the Fourier transform, as the momenta corresponding to x1 to xn. So here p1 to pn are just arbitrary. They don't have to satisfy any on-shell condition; they're just arbitrary momenta, and the only constraint is that this is non-vanishing only when the sum of them is equal to 0. Otherwise they can be arbitrary. Good? So in practice, now we have a way to calculate the diagrams for both upstairs and downstairs, and now we just do the expansion. Now let me summarize the example we did last time using only diagrams. For example, let's consider the two point function. We have upstairs and downstairs. In the upstairs, at the lowest order you just have a single line-- x1 to x2. To save effort, we'll also not label x1, x2 from now on. The next order is to bring down one power of this S_I-- you bring down one power of this phi to the 4th vertex. Now there are two possibilities. One possibility is that you still have this line, but times this vertex contracted with itself. And then another possibility is that you have something like this, where you contract with the vertex. And then plus order lambda squared-- this is to order lambda. And in the downstairs you have 1, and then essentially a single S_I; the only contraction is just given by this. And then plus order lambda squared. So you should remember, each vertex carries a parameter lambda, a factor of lambda, which is considered to be small.
In the downstairs we have 1 plus this power series in lambda, and then we can do the Taylor expansion of 1 over the downstairs to bring it to the upstairs. Remember, we did this before. Now if you do that, then at leading order in lambda, we still just have x1, x2, because this divided by that is just x1, x2, and then you have this term. But now we can bring this term to the upstairs, and then its sign changes to a minus sign. This term can multiply that, and then you get a term like that, coming from bringing this term upstairs. And the rest are all order lambda squared. You can also have this, with the sign changed to minus, multiplying that, but that will be order lambda squared. And since here we already neglected lambda squared, we don't worry about it. And this multiplied by this also gives you lambda squared. So up to order lambda, that's what you have. And then, as we discussed before, these two cancel. So in the end you just have two diagrams-- this one and this one. Yes? AUDIENCE: If we have two fields with a vertex with, like, one solid line, one dashed line-- here, you can read it as the two different particles interacting with each other, or the same species interacting with each other if you read it like that, but you lose that if you-- HONG LIU: Here it doesn't matter. In this case, it doesn't matter how you draw them. You mean if I exchange them? AUDIENCE: Yeah. HONG LIU: Yeah, it doesn't matter. AUDIENCE: But in that picture you can't read-- HONG LIU: Yeah, up and down, left and right here do not mean anything. The order is not important. But actually, in QFT 2, you will encounter situations in which these are matrices, and then the order becomes important. But here they're just fields; they commute. No matter how you order them, it doesn't matter. They're just ordinary fields. So to this order, we just have these two diagrams.
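The bookkeeping in this cancellation can be checked with truncated power series. Below is a small sketch with invented coefficients: if the upstairs factorizes as Un = C times U0, with U0 the vacuum-bubble series and C the bubble-free part (the factorization itself is the claim to be proved), then dividing by the downstairs U0 removes the bubbles at every order, not just at order lambda:

```python
def mul(a, b, order):
    """Multiply two power series in lambda (coefficient lists), truncated at `order`."""
    out = [0.0] * (order + 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j <= order:
                out[i + j] += ai * bj
    return out

def div(a, b, order):
    """Divide series a by series b (with b[0] != 0), truncated at `order`."""
    out = [0.0] * (order + 1)
    for n in range(order + 1):
        s = a[n] if n < len(a) else 0.0
        for k in range(1, n + 1):
            if k < len(b):
                s -= b[k] * out[n - k]
        out[n] = s / b[0]
    return out

ORDER = 4
# Invented coefficients: C = bubble-free diagrams, U0 = 1 + vacuum bubbles.
C = [2.0, -0.5, 0.125, -0.03, 0.007]
U0 = [1.0, 3.0, -1.7, 0.9, 0.2]
Un = mul(C, U0, ORDER)   # upstairs factorizes: Un = C * U0
Gn = div(Un, U0, ORDER)  # the ratio Gn = Un / U0
assert all(abs(g - c) < 1e-12 for g, c in zip(Gn, C))  # bubbles cancel at all orders
```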
This is much simpler than the first time we did it, and the calculation is much easier. Let me just show you the cancellation in a slightly different way-- almost identical, but written slightly differently. Alternatively, I can also write G2 as this. We look at these two terms and take a common factor out. So in the upstairs, the first two terms I can write as this, times 1 plus this thing, and then plus this. And in the downstairs I just have 1 plus this same order-lambda piece. Now you can just look at these two. You can directly see that these two cancel. You don't have to worry about this one, because when this is multiplied by that, you get higher orders. So you directly get the same answer from here. The reason I'm doing this in these two different ways is that this cancellation is actually not an accident. Such cancellation is not an accident. It actually happens to all orders, and you can show this. If I call the upstairs Un and the downstairs U0-- Un, by definition, corresponds to the sum over all Feynman diagrams with n external legs, and U0, by definition, is the sum over all Feynman diagrams with zero external legs. Then you can show in general that Gn, which as we wrote there is just Un over U0, has the property that to all orders the downstairs always cancels. The statement is that this ratio is equal to the sum over all diagrams with n external legs, but without any vacuum bubbles. Now let me just explain what it means to be without any vacuum bubbles. So this diagram is a diagram which contains a straight propagator times a bubble. This diagram contains a vacuum bubble.
It contains a vacuum diagram, and this diagram does not-- this one is all connected. Similarly, when you go to the next order, as we discussed last time, this diagram is also a diagram with a vacuum bubble, because you have this part, which does not have any external legs. This will not arise: when you calculate the ratio, this kind of diagram is automatically not included. You just don't need to worry about it. When you do the ratio, you can forget about those diagrams and directly write down the diagrams without vacuum bubbles. That greatly simplifies your life. OK. One second. If you think about it a little bit, you will see that this cancellation is generic-- it actually works to all orders-- and the thinking process is quite instructive, so this is left as homework. You can read the part in Peskin which tries to explain this cancellation, but that discussion is actually not great. Still, you can read it and get the main idea. And so in the homework I ask you to show this yourself. It's not difficult once you get the idea; it's actually pretty simple to see, and it will be illuminating. Yes? AUDIENCE: I'm pretty confused about this cancellation. If you're pulling out an order lambda term, won't your order lambda squared terms now be order lambda? HONG LIU: No, you pull the whole thing out together. Those terms don't matter at this order. When you cancel them, you cancel the whole thing; this term doesn't matter. If you only keep terms up to order lambda, then you can just directly cancel these two. You can easily convince yourself-- you don't even have to draw a diagram. Just write something like 1 plus lambda plus some other lambda over 1 plus-- anyway, you can convince yourself that this always happens. Yes.
AUDIENCE: So this cancellation, does it just come out of the math, or is there a physical interpretation? HONG LIU: Yeah, there is a physical interpretation. Essentially, this kind of thing comes from the normalization of your state. This thing is just a pure vacuum process and comes from the normalization of the states. And when you normalize your states, they just should not contribute to your correlation functions. Roughly, it's like that. Other questions? OK, good. Now let's put this into practice, and let me write down all the diagrams for G2 to lambda squared order. We have already found the lambda order. At lambda squared order, we already drew a bunch of diagrams last time, so you only need to keep those which don't include the vacuum bubbles. Let me just write them down. Including this, including that, and also including the one I forgot last time-- this one. That's it. If I remember, last time we drew many more diagrams, but you don't need to worry about the others, such as diagrams like this. Good. Let's do another example, the 4 point function. Now it becomes much easier, because you have much fewer diagrams to consider. At the 4 point function level, you first have your free theory terms. At the 0th order, in the free theory, you just have that-- the external points contracted among themselves. Now we have four external points; this is the free theory contribution at the 0th order. And then at first order, at order lambda, you have this, where all four phi's from the vertex are contracted with each of the external phi's. This is the first order. And then you can also have a diagram like this. And you can also have-- I will not show all the diagrams. And you can draw more diagrams like this, but these are not vacuum bubbles.
And you can also have diagrams like this. The reason I chose them is that you are going to do some of them in your Pset. There's another diagram like that, et cetera. Let me just do two more. There are still some more diagrams you can draw at lambda squared order. But again, it's mechanical. With a little bit of patience, there's no mystery here. Now let me make some remarks. First, the diagrams, you can see, separate into two types. The full Gn separates into two types. One type is called connected diagrams. Connected diagrams, essentially by the name, correspond to diagrams in which all the external legs are connected within a single diagram. This is a connected diagram, this is a connected diagram, this one is, and this one is, but this one is not, and this one is not. And you can also have disconnected diagrams. Disconnected diagrams correspond to diagrams where the external legs separate into sub-diagrams; the diagram is a product of sub-diagrams. And later we will argue that these are actually not interesting-- only connected diagrams are interesting-- but that's in a few minutes. For the moment, let's just define them. And to emphasize again, the external pi are general. Each external line can be assigned a momentum pi from the Fourier transform. For example, here you have p1, p2, p3, p4 in momentum space. Each one you can assign a momentum, and the momenta are general-- in principle they can be anything. Good. Any questions on this? Yes. AUDIENCE: Is there a trick to know that you've got all of them, like, do you count them? HONG LIU: No, there's no trick to make sure you counted all of them. The only trick is patience. Just try to enumerate all possibilities. There's a finite number of them; you can always do it. Yes. AUDIENCE: Can you somehow count the number of diagrams? HONG LIU: Sorry?
AUDIENCE: Can you count the number of diagrams with a certain number of objects and external points? HONG LIU: No, I think there's no general formula to tell you at each order how many diagrams you should have. There's no such magic formula. There's only one trend which you can show-- when you go to mth order, roughly how many diagrams are there at mth order? Yes. AUDIENCE: m factorial. HONG LIU: That's right, m factorial, so quite a lot of them. Once you go to, say, fourth or fifth order, it becomes a lot. You can be sure that we will not ask you to enumerate to fourth order in your Pset. Actually, the growth of the number of Feynman diagrams implies something deep. It's a side remark. It implies that this perturbation theory in lambda is actually not convergent, because the number of diagrams grows too fast. If at mth order there are of order m factorial diagrams, then the mth order contribution also grows as m factorial, and that means this power series is actually not convergent. It's only an asymptotic series. But still, for many physical purposes, it's enough. Good. Any other questions? Because this concludes our discussion of calculating these Feynman correlation functions for the interacting theory. Yes. AUDIENCE: You just said that it's not convergent, but it is asymptotic. HONG LIU: Yeah, it's an asymptotic series. AUDIENCE: Sorry, what's the distinction between those? HONG LIU: An asymptotic series is the kind of series in which the higher order terms get smaller and smaller for a while and then grow again. So if you don't calculate to too high an order, it's actually quite reliable-- you can bound the errors. This is the heuristic way to say it. When you look at the first few terms, it's a reliable approximation to your true answer.
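The distinction just described can be made concrete with a classic toy example (the Euler series, not the phi-4 series itself -- purely an illustration): the divergent series sum over m of (-1)^m m! x^m is the asymptotic expansion of the convergent integral below. Its partial sums first approach the true value, are most accurate around order m of about 1/x, and then blow up:

```python
import math

def euler_exact(x, tmax=50.0, steps=200_000):
    """E(x) = integral_0^inf e^{-t} / (1 + x t) dt, via the trapezoid rule."""
    dt = tmax / steps
    total = 0.0
    for i in range(steps + 1):
        t = i * dt
        weight = 0.5 if i in (0, steps) else 1.0
        total += weight * math.exp(-t) / (1.0 + x * t)
    return total * dt

x = 0.1
exact = euler_exact(x)

# Partial sums of the divergent asymptotic series sum_m (-1)^m m! x^m.
partial, errors = 0.0, []
for m in range(25):
    partial += (-1) ** m * math.factorial(m) * x ** m
    errors.append(abs(partial - exact))

# The error shrinks up to m ~ 1/x = 10, then grows: asymptotic, not convergent.
assert errors[9] < errors[2] and errors[22] > errors[9]
```

Truncating near the smallest term gives an error roughly the size of that term; pushing to higher orders only makes things worse. This is the sense in which "the first few orders are reliable" even though the series diverges.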
AUDIENCE: I guess we started out by setting the interaction term in the action to be phi to the 4th. But in general, you're going to have some series of terms added to your action. Is that how you would do a general interaction? HONG LIU: It depends on your specific theory. Maybe that's just your theory. AUDIENCE: How do you apply this to a physical problem? How do you know what-- HONG LIU: Yeah, for a physical problem-- in one second we will tell you how you go from here to calculating the scattering amplitude. And once you have the scattering amplitude, you can measure it in experiment, and then you try to deduce the action. AUDIENCE: And in general, what changes is just how many legs your vertex has? Is that the phi to the n? HONG LIU: Yeah, that's right, that's right. AUDIENCE: What if it's, I don't know, 1 over something [INAUDIBLE], is that-- HONG LIU: It can happen. As a toy model, you can always write whatever theory you want, but in nature, somehow, the actions we have discovered so far are all polynomial. OK, good? Earlier we mentioned that there is this LSZ theorem, which tells you that the scattering amplitude can be obtained from these time-ordered correlation functions. Now let's go back to the LSZ story to tell you how to actually obtain the scattering amplitude from correlation functions. The basic idea is the following: you take your n-point function with n external momenta p1 to pn. As I emphasized here, each pi is general. But now suppose you take all momenta, all pi, to be on-shell-- you consider them to satisfy pi squared equal to minus m squared. In order to get the amplitude, you need to do this step: take all the external momenta on-shell. This is very reasonable to expect, because when we're scattering particles, remember, the initial particles can be considered as free particles, and the final particles can also be considered as free particles.
If they are free particles, then their momenta satisfy this on-shell condition. In order to relate this to the scattering amplitude, we need to take the external momenta to be on-shell. Now if you look at this, there are two possible choices, because this equation has two solutions. For each pi, pi0 can be either plus or minus omega pi. We are using the same notation we used before: this pi vector is the spatial part, and omega pi is just the corresponding on-shell energy. OK? So you have two possible solutions. Now the rule is the following-- take m of them, say p1 through pm, and take the negative root. Then for the rest, pm plus 1 through pn, you take the positive root. OK? So let's consider the following situation. Imagine we take such an n-point function in momentum space, and we take all the momenta on-shell, but for some of them we take the negative root and for some of them we take the positive root. Then there's a theorem. This is called the LSZ theorem, named after three people-- Lehmann, Symanzik, and Zimmermann. This theorem says the following, under this limit-- under this on-shell limit. You will see why we emphasize that we call it a limit. Imagine you don't actually set them to be exactly on-shell; you take the momenta to approach that. In this limit, you find your momentum space correlation function approaches the following quantity. First, we write down the expression, and then we will explain what it means. So the statement is the following, and here Z is just some constant. We don't need to worry about it. Z is just some constant.
This is square root Z divided by this factor for each external momentum, and this is the scattering amplitude for m initial particles. Oh, sorry, I forgot the minus sign. It should be minus p1, minus p2, to minus pm. This is the scattering amplitude for particles with initial momenta minus p1 to minus pm going into particles with final momenta pm plus 1 to pn. Essentially, whether you choose the negative root or the positive root decides whether you are in the initial state or in the final state of the scattering amplitude. Let me just write this here. Let me copy it again: this is the amplitude from the state with momenta minus p1 through minus pm at t equal to minus infinity to the state with momenta pm plus 1 through pn at t equal to plus infinity. This is the scattering amplitude. It is the amplitude for the particles with these momenta at t equal to minus infinity transitioning to the rest of the particles with those momenta at t equal to plus infinity. To save notation, remember that previously we defined this object. Because of spacetime translation symmetry, this object will conserve energy and momentum. So it is proportional to a delta function-- you can extract out the delta function-- times a quantity which we called the scattering amplitude, say for alpha to beta. Alpha denotes the initial state, and beta the final state. Now if you plug this expression in here, then the delta function just cancels on both sides. And then you find Gn of p1 to pn approaches this product of factors times M alpha to beta. This is the statement of the LSZ theorem. It essentially tells you that such a scattering amplitude can be obtained from this: you multiply those things, and it's equal to that.
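In symbols, the statement just made reads schematically as follows (a sketch; the precise factors of i, 2 pi, and the i-epsilon prescription follow the lecture's conventions, with a metric in which on-shell means p squared equals minus m squared):

```latex
% On-shell limit: p_i^0 \to -\omega_{\vec p_i} for i = 1,\dots,m (initial),
%                 p_i^0 \to +\omega_{\vec p_i} for i = m+1,\dots,n (final).
\tilde G_n(p_1,\dots,p_n)\ \longrightarrow\
  \Big[\prod_{i=1}^{n} \frac{\sqrt{Z}}{p_i^2 + m^2}\Big]\,
  (2\pi)^4\,\delta^{(4)}\!\Big(\sum_{i=1}^{n} p_i\Big)\;
  \mathcal{M}\big(-p_1,\dots,-p_m \to p_{m+1},\dots,p_n\big),
% so each external-leg propagator (and \sqrt{Z}) is stripped off
% to leave the amplitude \mathcal{M}.
```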
Now you observe that, up to this factor of Z, these are precisely just the propagators for the external legs. So you conclude, in other words, that to get M alpha beta you just take the G and get rid of all the external propagators, because these factors correspond to the propagator for each external leg. Just get rid of the external legs. Here I'm just telling you the theorem and we will use it; of course, the derivation is a little bit complicated. Let me write it in words. To obtain the scattering amplitude M alpha to beta, you first calculate Gn, where n is the total number of initial plus final particles. Then take the pi on-shell: for the initial momenta take the negative root, and for the final momenta take the positive root of that equation. And then, in the last step, you just truncate all the external propagators-- you truncate all the external legs. That just comes from the fact that to obtain M, you need to divide this side by this factor, up to this factor of Z. The factor Z will only give you a constant, which let's not worry about for now. Yes? AUDIENCE: [INAUDIBLE] it seems like you just put it in ad hoc. HONG LIU: Which part is ad hoc? AUDIENCE: Why are you just setting some of them to be negative? HONG LIU: Oh, yeah. That I will explain. First, let me just state the rule. First you understand the rule, then I'll try to make you more comfortable with the rule. Yes? OK. Here is just the rule. This is a statement you can prove mathematically. And following from this statement, here is the rule to obtain M alpha beta from G. If I want to write it in one sentence: the scattering amplitude equals the sum over all truncated diagrams with external momenta on-shell.
If I summarize it into one sentence, it's just this. I would say the only thing unintuitive here is the sign choice. Everything else, I think, is intuitive. You take this Green function, which contains these external legs, and you imagine the external legs correspond to the initial and final particles; then it's very reasonable that we make them on-shell, because they are physical particles. The only thing which may be unintuitive is why for the initial momenta we need to take the negative root, but for the final ones we need to take the positive root. And the final thing here: when you scatter particles, how they propagate to the scattering does not matter. The propagator essentially just tells you how the particle propagates to the scattering point, and this should not matter for the scattering process. We will explain this at the end, but before we do that, let's look at one example. Do you have any questions? Yes. AUDIENCE: Just to make sure I understand: when we say truncate, we mean we divide Gn by this product? HONG LIU: Yeah, you will see explicitly that we just don't include the external propagators. Let me give you an example; it will be clear. Let's consider four-particle scattering. Following the convention here, you have minus p1 and minus p2, and then you have p3 and p4. Suppose we want to calculate the scattering amplitude for two initial particles with momenta minus p1 and minus p2. When you take this to be the negative root, then minus p1 will actually have positive energy, so all of them have positive energy. If you want to calculate this, then you just take G4 of p1 to p4, and then you take p1 and p2 to be the negative root.
To save time, just take them to be the negative root, and then p3, p4 to be the positive root. And then you truncate your external propagators. To see what this means, let's look at the leading order contribution. In the free theory, those processes don't contribute to the scattering, because there's no interaction there-- the particle just propagates. So the lowest diagram is this one, and that's the leading order, given by that diagram. You have minus p1, minus p2, and then p3 and p4. Now, according to our Feynman rules, this vertex just gives you a factor of minus i lambda. If we wanted to calculate the Green function, we would need to include all four propagators. But here it tells us to truncate, and all the propagators here are external, so we don't include them. So the answer is simple-- just minus i lambda. The leading order contribution to this 4-particle scattering is just minus i lambda. And then you have all the lambda squared contributions. Yes? AUDIENCE: Why didn't we consider the other one, the one where one is just propagating and the other has a loop? HONG LIU: That's a good question. I'm going to mention that. Again, that one does not correspond to scattering. Essentially, it corresponds to one particle just propagating, because there's no interaction between the particles-- they factorize. This diagram involves the two particles; there's an interaction, and you go to two other particles. There, one particle goes straight to the other particle, so this particle by definition has to be the same as that one, so there's no scattering. I will mention this point separately a little bit later.
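The "connected" criterion being used here is just graph connectivity: treat external points and vertices as nodes and propagators as edges, and ask whether everything sits in one component. A small union-find sketch (the node labeling is made up for illustration):

```python
def is_connected(n_nodes, edges):
    """True if the diagram's nodes (externals + vertices) form one component."""
    parent = list(range(n_nodes))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for a, b in edges:
        parent[find(a)] = find(b)
    return len({find(i) for i in range(n_nodes)}) == 1

# Toy labeling: nodes 0-3 are external points, node 4 is a phi^4 vertex.
scattering = [(0, 4), (1, 4), (2, 4), (3, 4)]  # all four legs meet at the vertex
no_scatter = [(0, 2), (1, 3)]                  # two free propagators: disconnected
assert is_connected(5, scattering)
assert not is_connected(4, no_scatter)
```

In the disconnected case the process factorizes into the sub-diagrams' lower-order processes, which is why such diagrams are dropped when the scattering amplitude is defined.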
Another possibility is to consider the decay of a particle into three other particles. In this case we again take G4, now with minus p1, p2, p3, p4, where p1 takes the negative root, p1^0 = minus omega_p1, and p2, p3, p4 take the positive root. That corresponds to this process. And again, the leading nontrivial term is just this: here is minus p1, and here are p2, p3, p4. At leading order this is just minus i lambda, because you don't need to include the external legs. This corresponds to a single particle decaying into three other particles. AUDIENCE: In the phi 4 theory, there's no conservation law or anything preventing this, right? HONG LIU: No, no. So now let me make some remarks. Oh, I erased this, but that's fine. So, the remarks. As you already asked: when we calculated this at leading order, we only included the connected diagrams. The disconnected diagrams in general factorize. When you have a disconnected diagram, it means that your process factorizes into sub-scattering processes of lower orders. For what we were doing here, there's one other diagram, with minus p1, minus p2, p3, p4, where you can also draw a diagram like this. But in this case it just corresponds to p1 separately going to p3 and p2 separately going to p4, so nothing really happens. In general, for m-particle scattering, with some number of initial particles and some number of final particles, a disconnected diagram means you have a situation like this: one piece of the diagram takes some subset of the initial particles to some subset of the final particles, and the other pieces take the remaining subsets to remaining subsets.
This is a three-to-three scattering, and it essentially factorizes into a two-to-two scattering while one particle just goes through to another particle. This can already be understood in terms of lower-order processes, so we don't include them; essentially, they are already understood. When we define the scattering amplitude, we don't include them: we are only interested in processes involving all participating particles. We are actually running out of time. So this is the first remark. Second remark: since we truncate all the external propagators, a diagram like this, with a loop on one of the external propagators, doesn't contribute separately. Because after you have truncated the external legs, it's no different from this one, so you don't have to count it separately. Such a diagram just corresponds to something happening to a particular particle while it propagates to infinity; when you truncate the external legs, you don't include it. So with these two remarks, the scattering amplitude essentially forgets anything happening on the external legs. You can have complicated things happening on the external legs, but they don't matter. To summarize: when you compute the scattering amplitude, following this definition, you sum over all truncated diagrams, meaning you truncate all the external legs; connected, because we want all particles to participate; and with the external particles on-shell. This is a simplification compared to calculating general correlation functions. I have two other remarks to make, but we don't have time. One remark concerns the choice of this sign, why the initial momenta correspond to the negative root and the final ones to the positive root, and there are also remarks related to this factor of Z. We will talk about them at the beginning of next lecture.
And after that, we are done with the general formulation of interacting theory, and we will be equipped with the power to calculate essentially any amplitude in any interacting theory using perturbation theory. But in order to calculate anything useful, we still have to learn how to treat fermions and how to treat photons, and that's what we will do next. Starting next lecture, we will talk about how to introduce fermions, after that how to introduce photons, and then we can talk about QED, how photons interact with electrons, et cetera. OK, good.
MIT 8.323 Relativistic Quantum Field Theory I, Spring 2023. Lecture 23: Cross Section and Decay Rate. [SQUEAKING] [RUSTLING] [CLICKING] HONG LIU: OK, so let us start. Last time, at the end of the lecture, we discussed the relativistic generalization of the cross section. We want to derive a relativistic generalization of the cross section for going from some initial state, denoted by alpha, to some final state, denoted by beta. For most physical processes we will consider two-particle scattering, so our initial state always has just two particles, and they scatter. But the final state can have an arbitrary number of particles. We require d sigma_{alpha beta} to be Lorentz invariant, because the probability should not depend on, say, your frame. We also want it to be symmetric in 1 and 2. To make the connection with the nonrelativistic story, as a starting point we can consider the rest frame of particle 2; you can always go to the rest frame of particle 2. In that frame we define this quantity to be the probability per unit time from alpha to beta, dP_{alpha beta}/dt, divided by the incident flux of particle 1, dN1/dt dA. So dP_{alpha beta}/dt is the probability per unit time for the process from alpha to beta, and dN1/dt dA is the incident flux of particle 1. This expression is a natural generalization to the relativistic context, in the rest frame of particle 2. So our strategy is to first work out this quantity in the rest frame of 2, and then use the requirements of Lorentz invariance and symmetry in 1 and 2 to write it in a general frame.
So first, let's talk about this probability from alpha to beta. By definition, it is the following: we look at the transition amplitude from the state alpha at time minus infinity to the state beta at time plus infinity, we take its square, and then we divide by the normalizations of the beta and alpha states themselves. That is what really defines the probability. The upstairs we already defined before: the amplitude <beta, +infinity | alpha, -infinity> is, by our previous definition, given by (2 pi)^4 delta^4(p_alpha - p_beta) times M_{alpha beta}, where p_alpha and p_beta are the total momenta of the initial and final states, and M_{alpha beta} is defined to be the scattering amplitude from alpha to beta. So this was essentially our definition of the scattering amplitude. Yes? AUDIENCE: The plus infinity and minus infinity, is that in space-time or just time? HONG LIU: Time. These are Heisenberg-picture states: at t equal to minus infinity we have a free-particle state described by alpha, and at t equal to plus infinity we have a free-particle state given by beta. Other questions? So now, let's take the square. You may ask why we need the normalizations downstairs: aren't the states supposed to be normalized? But in practice, when we talk about free particles, we use the plane-wave basis, and plane waves, remember, are not normalizable; they are only delta-function normalizable. So the normalization is not exactly equal to 1, and we have to be careful.
So that's why we need to include them here. Good? Now let's look at this quantity squared. To save time, I just copy it: the square of the right-hand side is the delta function squared times |M_{alpha beta}|^2. Now look at the delta function squared. It has two pieces: we keep one copy as (2 pi)^4 delta^4(p_alpha - p_beta), and the other copy we can just evaluate at 0, because the first delta function has already set p_alpha equal to p_beta. So the second factor becomes (2 pi)^4 delta^4(0). We have seen this object before. What is it? Yes? AUDIENCE: Space-time volume. HONG LIU: Yeah, it's the total space-time volume. The reason is that (2 pi)^4 delta^4(0), the 0 here being in momentum space, is just integral d^4x e^{ikx} evaluated at k = 0, since that integral gives (2 pi)^4 delta^4(k). Evaluated at k = 0, this is just the space-time volume. I take the spatial volume to be V, and this T is the time period. Physically, you can interpret T as the duration of time we are actually doing the experiment, because only that part is relevant. So now we can just replace this factor by V times T. Now let's look at the downstairs. A typical alpha state, say with initial momenta P1 and P2, would be like this.
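The squaring step just described can be written compactly (a standard finite-volume regularization, in my notation rather than verbatim from the board):

```latex
\bigl[(2\pi)^4\delta^4(p_\alpha-p_\beta)\bigr]^2
  = (2\pi)^4\delta^4(p_\alpha-p_\beta)\,(2\pi)^4\delta^4(0),
\qquad
(2\pi)^4\delta^4(0) = \int d^4x\; e^{ik\cdot x}\Big|_{k=0} = VT .
```

The factor VT is what later cancels against the time T and the state normalizations.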
For a scalar particle, it is just specified by P1 and P2; if you have polarizations, there will be polarization labels too, but here I just give you this example. Now look at the normalization of this kind of state. Remember, a single-particle state of momentum p is normalized as <p|p'> = (2 pi)^3 2p^0 delta^3(p - p'), because the states are always on-shell, so p^0 is a function of p. For example, for a scalar particle, |p> = sqrt(2 omega_p) a_p^dagger |0>, and then the normalization comes out like this. So if you look at <p|p>, you set p equal to p', and you get essentially 2p^0 times V, the V being the spatial volume, because (2 pi)^3 delta^3(0), with the delta function in spatial momentum, gives you the spatial volume for the same reason as before. Now we apply this to alpha and beta. Take alpha to consist of two particles with momenta p1 and p2, and beta to consist of momenta k1 through kn; suppose there are n particles, where n can be an arbitrary integer. Then <alpha|alpha> is just 2E1 V times 2E2 V, where E1 and E2 are the energies of p1 and p2, E1 = p1^0 and E2 = p2^0. Similarly, <beta|beta> is the product over j from 1 to n of 2 kj^0 V, where for example kj^0 = sqrt(kj^2 + mj^2); here we allow the different particles to have different masses, and similarly for p1^0 and p2^0. So with those preparations, we can write down dP/dt: dP_{alpha to beta}/dt is essentially P_{alpha beta} divided by T, the total duration of the physical process. And here is the reason they are related like that.
You say, how can you do this? This is a differential; how can you just divide by T? Do you know the reason why we can just divide by T? Yes? AUDIENCE: Well, presumably the probability is small, so you would just expect it to go up linearly in time. HONG LIU: That's a good statement, but the statement I wanted is that it's because of time translation symmetry. You expect the probability per unit time to be independent of time, so the probability over a duration of time is just the rate multiplied by the total time. And indeed, in order for this equation to make sense, P has to be small, because otherwise when you multiply by T it would be greater than 1. But dividing by T comes from time translation symmetry. This is good, because from the delta function we have a factor of T. So if we combine everything together, the upstairs is given by (2 pi)^4 delta^4(p_alpha - p_beta) |M_{alpha beta}|^2 times VT, and then we divide by T, so this T goes away; I can just erase it. And downstairs, we just copy those two expressions: 4 E1 E2 V^2 times the product over j from 1 to n of 2 kj^0 V. Yes? AUDIENCE: So now, taking T to be the total time of your experiment, are you taking V to be the volume of your experiment? HONG LIU: Yeah. AUDIENCE: Of your detector? HONG LIU: No, it's not the detector. We are computing the S-matrix, and for the S-matrix you always assume you wait a long time, so that your initial state is free and your final state is free. So it's not just the range of your detector.
You have to put the detector very far away in order to measure the particles. You can imagine that V is essentially the volume in which the experiment is happening. Other questions? So, you see there are all these V's flying around, and they look very unpleasant. But don't worry: if you are doing the right thing, then all these unpleasant things will go away in the end. That's the rule of physics: in the intermediate steps you may see a lot of unpleasant things, but if you are doing the right thing, they all go away; or you can say, if they all go away, then I must be doing the right thing. Now, this probability is not quite the probability we actually measure, because here I assumed that the final states have precise momenta k1 through kn. In reality, of course, we are not able to make such precise measurements: in real experiments, detectors have finite resolution. So when we say a particle has momentum k1, we actually mean that particle 1 is within some dk1 around k1; there is always some finite neighborhood of k1 allowed by our detector resolution. Similarly, particle 2 is within dk2 around k2, et cetera, up to particle n within dkn around kn. So we need to integrate over all these resolutions. That means we should take dP/dt, which corresponds to sharp final momenta, and multiply by those uncertainties. So if I call this equation star, the probability actually measured in a real experiment corresponds to star times the product over j from 1 to n of d^3 kj / (2 pi)^3.
That is, we need to multiply by the number of states within each momentum-space volume d^3 k, and the number of states is given by this times the volume. Do you remember where this volume comes from? Yes? AUDIENCE: Density of states. HONG LIU: Exactly. This gives you the density of states. Just imagine you put your system in a box; then the momenta are quantized in units of 2 pi over the box size, so the number of states per momentum-space volume, the density of states, is V divided by (2 pi)^3, and you multiply by d^3 k. So this is actually the quantity we are interested in. Any questions on this? So now, let's plug star in here. The nice thing is that now we can count the number of V's; if we are doing the right thing, all the V's should cancel, well, not quite all. Upstairs we have one V, and here we have V squared, so let's first cancel this V so we don't have to worry about it. Now downstairs we have a V for each j, and upstairs, when we multiply this by this, we also get one V for each j, so all those V's cancel. We are left with only this single V. So now we can write it as follows: I write this as |M_{alpha beta}|^2, I just copy this, divided by 4 E1 E2 times V, and then I group everything else into what I call d mu. And d mu is everything else: d mu is defined to be this delta function times the product over j from 1 to n of d^3 kj / [(2 pi)^3 2 kj^0]. I just combine these two products. The nice thing, the reason I group all this together, is that d mu is Lorentz invariant; this is a Lorentz invariant measure. Because, you remember, this is Lorentz invariant.
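As a quick numeric sanity check of this density of states (my own illustration, not from the lecture): put momenta on a lattice k = 2 pi n / L, as for a particle in a box, and count the modes inside a ball of radius k_max. The count should approach V times the k-space ball volume divided by (2 pi)^3.

```python
import math

def states_in_box(L, k_max):
    """Count discrete momentum modes k = 2*pi*n/L (n an integer vector)
    with |k| <= k_max, i.e. lattice points inside a ball of radius R."""
    R = k_max * L / (2 * math.pi)
    n_max = int(R)
    count = 0
    for nx in range(-n_max, n_max + 1):
        for ny in range(-n_max, n_max + 1):
            for nz in range(-n_max, n_max + 1):
                if nx * nx + ny * ny + nz * nz <= R * R:
                    count += 1
    return count

L = 1.0
k_max = 2 * math.pi * 20            # radius of 20 lattice units
exact = states_in_box(L, k_max)
# density-of-states estimate: V * (k-space volume) / (2 pi)^3
estimate = L**3 * (4 / 3) * math.pi * k_max**3 / (2 * math.pi)**3
print(exact, estimate)              # agree to about a percent
```

The agreement improves as the ball grows, which is exactly the statement that V d^3k / (2 pi)^3 counts states.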
And remember, this combination is also Lorentz invariant, from essentially your first day in QFT, your first pset. So d mu is Lorentz invariant, and we have this nice expression: this, by definition, is Lorentz invariant, and then we have this 4 E1 E2 times V. Now we want to write the whole thing in a Lorentz invariant way, but we haven't done it yet; this is only the upstairs, and this by itself is not Lorentz invariant. We still have to divide by the flux. So now, in the rest frame of particle 2, let's calculate the flux of particle 1; that's the thing we need to divide by. The flux is the number of particles per unit time per unit area, which is the same as the number density of particle 1 times the velocity of particle 1. Remember, the flux is essentially density times velocity: the density gives you the number of particles per volume, the velocity gives you the distance traveled per unit time, and together they give you the flux. This is all in the rest frame of particle 2. Now, in this experiment we have two-particle scattering. So what do you think the density n1 will be? Yes? AUDIENCE: 1. HONG LIU: Exactly. How many particles do we have? We only have one particle, so n1 is just 1/V. So now we can find d sigma: d sigma_{alpha to beta} is defined to be dP/dt divided by the flux of particle 1. And now the volumes cancel, because n1 is 1 over the volume and there's a volume here, so finally this volume cancels.
So, dividing by the flux, the volumes cancel, and we get |M_{alpha beta}|^2 d mu divided by E1 E2 v1, where v1 is the velocity of particle 1; v1 can of course also be interpreted as the relative velocity between 1 and 2. So this is the expression we find in the rest frame of particle 2. By definition, we want the cross section to be Lorentz invariant and symmetric in 1 and 2. Yes? AUDIENCE: Regarding the cancellation of the 1/V factor, is another way to interpret it that instead of a plane wave, you think of a narrow wave packet centered at k1? Then you would get a V from the wave packet normalization, and that would cancel. Is that valid? HONG LIU: Yeah, I think it's a similar idea. When you consider the wave packet, then you get rid of that V, and you also get rid of this V, one upstairs and one downstairs. That's right. Good. So now, we want to write this in a Lorentz invariant form; again, this is in the rest frame of 2. So I want to look for an object, which I will call Sigma, such that Sigma is Lorentz invariant and symmetric in 1 and 2, and in the rest frame of 2, Sigma reduces to this downstairs, E1 E2 v1. This E1 E2 v1 is not a manifestly Lorentz invariant object, and it's also not symmetric in 1 and 2, but we should be able to find an object Sigma that is Lorentz invariant, symmetric in 1 and 2, and reduces to it in the rest frame of 2. If I can find this object, then I'm done in finding the cross section. Yes? AUDIENCE: Should there be a factor of 4 in the denominator? HONG LIU: There should be a factor of 4, yeah, thank you. Some other questions? Good.
So I want to look for this object Sigma. You can do a little bit of trial and error to find it; I will just write down the answer for you. To save you the trouble, I could have put it in your pset, but I decided not to. It turns out Sigma is very simple, you can almost guess it: Sigma = sqrt[(P1 . P2)^2 - m1^2 m2^2], where m1 and m2 are the masses of particles 1 and 2. This object satisfies all three conditions; I will leave it as an exercise for you to check. It's very easy: you just go to the rest frame of 2 and check that it reduces to E1 E2 v1. So now we almost have our final answer for the cross section for 2-to-n scattering. Collecting our results, we have d sigma_{alpha beta} = |M_{alpha beta}|^2 d mu / (4 Sigma), with Sigma given by that expression. This is manifestly Lorentz invariant and manifestly symmetric in 1 and 2, because Sigma obviously is. It's a very beautiful formula: even though we went through a lot of trouble, a lot of V's and T's, in the end we get a very beautiful answer. So now, let's consider some kinematics of this formula. With two initial particles, it's convenient to introduce the center-of-mass energy for the full system. We define something called s, small s; this is not big S, which is reserved for the action. With our signature it is s = -(P1 + P2)^2, where P1 + P2 is the total momentum of the initial state, and of course also of the final state, by momentum conservation.
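Here is a small numeric check of the exercise just assigned (my own sketch, using the mostly-plus metric, so that p^2 = -m^2 as in the lecture): in the rest frame of particle 2, sqrt[(P1 . P2)^2 - m1^2 m2^2] indeed reduces to E1 E2 v1.

```python
import math

def minkowski_dot(p, q):
    """Mostly-plus metric, matching p^2 = -m^2 as used in the lecture."""
    return -p[0] * q[0] + sum(a * b for a, b in zip(p[1:], q[1:]))

m1, m2 = 0.3, 1.7
p = 2.5                          # spatial momentum of particle 1, along x
E1 = math.hypot(p, m1)           # sqrt(p^2 + m1^2)
p1 = (E1, p, 0.0, 0.0)
p2 = (m2, 0.0, 0.0, 0.0)         # particle 2 at rest

Sigma = math.sqrt(minkowski_dot(p1, p2)**2 - (m1 * m2)**2)
v1 = p / E1                      # velocity of particle 1
E2 = m2                          # energy of particle 2 in its rest frame
print(Sigma, E1 * E2 * v1)       # the two agree
```

Algebraically this is immediate: P1 . P2 = -E1 m2 here, so the square root collapses to m2 sqrt(E1^2 - m1^2) = m2 |p| = E1 E2 v1.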
And if I look at P1 plus P2 squared, this is essentially the invariant mass of your full system: the square root of s is the invariant mass, the effective mass of the whole system, and by the whole system I mean particles 1 and 2, and also the final state. We can also write Sigma in terms of s, because P1 . P2 can be written as (1/2)[(P1 + P2)^2 - P1^2 - P2^2]. With P1^2 = -m1^2, P2^2 = -m2^2, and (P1 + P2)^2 = -s, this gives P1 . P2 = -(1/2)(s - m1^2 - m2^2). Then Sigma can be expressed in terms of s. Let me just write down the final expression: Sigma = (1/2) sqrt[s^2 - 2s(m1^2 + m2^2) + (m1^2 - m2^2)^2]. So this whole kinematic factor Sigma can be expressed just in terms of s. Any questions on this? Now, another thing: even though this formula can be used in any frame, sometimes, depending on your question, the expression is simpler in some frames than others. One very frequently used frame is the so-called center-of-mass frame. In the center-of-mass frame, the center of mass of the system does not move: the total spatial momentum, p1 plus p2, is taken to be 0. The full system is not moving. That means P1 = -P2; let's call it Pcm, the center-of-mass momentum.
Now, you can find Pcm by solving an equation, expressing Pcm in terms of the masses and s, because E1 + E2 = sqrt(s): in the center-of-mass frame the total spatial momentum is 0, so there is no spatial contribution, and you just have (E1 + E2)^2 = s, hence E1 + E2 = sqrt(s). Then you can solve for p from sqrt(p^2 + m1^2) + sqrt(p^2 + m2^2) = sqrt(s). So you can solve for the center-of-mass momentum in terms of s. This is a simple equation, a middle-school equation, but the result turns out to be very simple: the magnitude Pcm, when you solve this equation, is precisely Sigma divided by sqrt(s). A very beautiful, simple formula. In other words, Sigma can be written as the magnitude of the center-of-mass momentum times sqrt(s). So now we can simplify the expression in the center-of-mass frame: d sigma_{alpha to beta} = |M_{alpha beta}|^2 d mu / (4 |Pcm| sqrt(s)). A very simple formula. Now, most of the time, most questions we are interested in, as you will see next lecture, actually correspond to 2-to-2 scattering. So let's specialize to 2-to-2 scattering, where the final state also contains only two particles. In this case we can further simplify d mu. In 2-to-2 scattering you essentially have particles coming in with momenta p1, p2, and two final particles with k1, k2.
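The "middle-school equation" can be checked numerically (my own sketch, not from the lecture): pick a center-of-mass momentum p, build sqrt(s) = E1 + E2 from it, and verify that Sigma(s)/sqrt(s) returns the same p.

```python
import math

m1, m2, p = 0.5, 1.2, 3.1           # masses and CM-frame momentum magnitude
E1 = math.hypot(p, m1)              # sqrt(p^2 + m1^2)
E2 = math.hypot(p, m2)
sqrt_s = E1 + E2                    # total CM energy
s = sqrt_s**2

# Sigma expressed purely in terms of s and the masses
Sigma = 0.5 * math.sqrt(s**2 - 2 * s * (m1**2 + m2**2) + (m1**2 - m2**2)**2)

p_cm = Sigma / sqrt_s               # should recover the input momentum
print(p_cm, p)
```

This works for any masses and any p, which is the content of the claim |Pcm| = Sigma / sqrt(s).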
And then they interact. Now let's write down d mu explicitly for this case: d mu = (2 pi)^4 delta^4(p1 + p2 - k1 - k2) times d^3 k1 / [(2 pi)^3 2 E1'] times d^3 k2 / [(2 pi)^3 2 E2'], where E1' and E2' are the energies of k1 and k2: E1' = k1^0 and E2' = k2^0. I will denote the masses of the two final particles by m1' and m2', so for example E2' = sqrt(k2^2 + m2'^2). Before simplifying this further, let us first introduce some notation; many of you may have seen this before. It is often convenient to introduce the following quantities, which essentially characterize the Lorentz invariants we can build out of p1, p2, k1, k2. We define t = -(p1 - k1)^2, which by momentum conservation is the same as -(p2 - k2)^2; u = -(p1 - k2)^2 = -(p2 - k1)^2; and let me just copy s = -(p1 + p2)^2. These are obviously Lorentz invariant, and any Lorentz invariant quantity you can build from those four momenta can be expressed in terms of some combination of s, t, u. In fact, s, t, u themselves are not independent of each other: with momentum conservation, there are only two independent Lorentz invariants you can introduce, and s + t + u = m1^2 + m2^2 + m1'^2 + m2'^2. This again I leave as an exercise for you to show. So if you know any two of them, then you know the other one.
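The exercise s + t + u = m1^2 + m2^2 + m1'^2 + m2'^2 can be verified numerically (my own sketch; conventions as above, mostly-plus metric so that p^2 = -m^2):

```python
import math

def dot(p, q):
    """Mostly-plus Minkowski product: p.p = -m^2 for an on-shell momentum."""
    return -p[0] * q[0] + sum(a * b for a, b in zip(p[1:], q[1:]))

def cm_momentum(sqrt_s, ma, mb):
    """|p| in the CM frame for a pair of masses, i.e. Sigma/sqrt(s)."""
    s = sqrt_s**2
    return 0.5 * math.sqrt(s**2 - 2 * s * (ma**2 + mb**2) + (ma**2 - mb**2)**2) / sqrt_s

m1, m2, m1p, m2p = 0.3, 0.8, 0.5, 1.1    # initial and final masses
p = 4.0                                   # initial CM momentum
E1, E2 = math.hypot(p, m1), math.hypot(p, m2)
sqrt_s = E1 + E2
k = cm_momentum(sqrt_s, m1p, m2p)         # final CM momentum
th = 0.7                                  # arbitrary scattering angle

p1 = (E1,  p, 0.0, 0.0)
p2 = (E2, -p, 0.0, 0.0)
k1 = (math.hypot(k, m1p),  k * math.cos(th),  k * math.sin(th), 0.0)
k2 = (math.hypot(k, m2p), -k * math.cos(th), -k * math.sin(th), 0.0)

tot = tuple(a + b for a, b in zip(p1, p2))
d1 = tuple(a - b for a, b in zip(p1, k1))
d2 = tuple(a - b for a, b in zip(p1, k2))
s, t, u = -dot(tot, tot), -dot(d1, d1), -dot(d2, d2)

print(s + t + u, m1**2 + m2**2 + m1p**2 + m2p**2)   # equal
```

The identity only uses momentum conservation and the on-shell conditions, so it holds for any masses and any scattering angle.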
And anything can be expressed in terms of these variables. Any questions? Yes? AUDIENCE: You've written these definitions down as useful; how do they correspond to different kinematics? HONG LIU: What do you mean? AUDIENCE: In Feynman diagrams. HONG LIU: Oh yeah, Feynman diagrams can be conveniently expressed in terms of them, as we will see next lecture and in the homework. The amplitudes are Lorentz invariant, so they can be conveniently expressed in terms of these quantities. So now, let's further simplify d mu in the center-of-mass frame. For 2-to-2 scattering the center-of-mass frame is particularly simple, because the incoming particles have opposite spatial momenta: one is Pcm and the other is minus Pcm. And the final particles must also have opposite spatial momenta, because the total spatial momentum in the center-of-mass frame is 0. So call them kcm and minus kcm; for simplicity, let's just call kcm simply k. Now, with this in mind, let's look at d mu. There are a few steps; maybe we don't want to go through all the details, so let me just outline some of them here. In the center-of-mass frame, d mu can be written as follows. From the (2 pi)^4 and the two factors of (2 pi)^3, you're left with 1/(2 pi)^2; then downstairs you have 4 E1' E2'; then you have the delta functions, delta(E1' + E2' - sqrt(s)) and delta^3(k1 + k2); and then d^3 k1 and d^3 k2.
So as we said, k1 and k2 have to be equal and opposite to each other. And then you can just evaluate one delta function, and the other dk will remain. So essentially, we can just forget about this delta 3, and then you just have d3 k. And this d3 k, you can write as k squared dk -- with k the magnitude -- times d omega, the solid angle in the center of mass frame. So this k vector can be decomposed into the angular directions and the magnitude. And now, both E1 prime and E2 prime are expressed in terms of k: E1 prime is square root of k squared plus m1 prime squared, and E2 prime is square root of k squared plus m2 prime squared. So now you can further evaluate this delta function, because E1 prime and E2 prime are just functions of the magnitude of k. So you can evaluate this delta function against the dk here. So you can do that; I will not go into detail. And when you solve that delta function, then you find that the final answer is very simple. You find it is given by k cm divided by 16 pi squared square root s, times d omega center of mass. And this k cm is defined to be the solution of this equation: k cm is the solution to E1 prime plus E2 prime equal to square root s. So again, this is expressed in terms of the s. So technically, you just evaluate this delta function. Yes? AUDIENCE: So when we calculate this, are we integrating over all the cases? Is that why we can evaluate all these types of delta functions? HONG LIU: So you only evaluate the k around the momentum shell. So you can evaluate the delta function, because we always have a finite range of k to integrate over. So it doesn't matter how wide the range of this is.
Because the non-zero value of d mu is always around the momenta that satisfy momentum conservation. Does this answer your question? AUDIENCE: Thanks. HONG LIU: Yeah, good. Other questions? Here, I didn't write the integral sign -- you may wonder how I can just evaluate the delta function. The reason you can evaluate the delta function is that no matter what range of dk you integrate over, the support of those delta functions always lies inside it, and so you can always take care of them. So now, if you combine this result and this result, you can write a simple expression for d sigma in the center of mass frame. So if you combine them together, then we find d sigma over d omega in the center of mass frame is equal to M alpha beta squared divided by 64 pi squared s, times k cm divided by p cm. And the k cm is the solution to this equation, and the p cm is given by the same equation, just with m1 prime and m2 prime replaced by m1 and m2. So this is the final answer for the differential cross section for 2 to 2 scattering in the center of mass frame. And this is the expression which we will use later. Do you have any questions? OK, good. So this concludes the discussion of the cross section. So it's finally over. But we had to go through this, because this is the kind of thing which we compare with experiments. And in particular, if you want the total cross section, you can just integrate this over all the solid angle. So now, before talking about physical processes, there's one more thing we need to consider. So here, we considered the initial state to be two particles, because we don't normally do scattering with more than two particles. But there's another situation which still often happens. So this is the situation in which the initial state has only one particle.
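Before moving to decays, here is a small numerical sketch of this final formula (my own toy numbers and helper names; natural units; a constant, already spin-summed |M|^2). Both k_cm and p_cm solve sqrt(ma^2 + k^2) + sqrt(mb^2 + k^2) = sqrt(s), which has a closed form; the sketch also checks it against a bisection solve of the same equation.

```python
import math

# d sigma / d Omega |_cm = |M|^2 / (64 pi^2 s) * k_cm / p_cm
# with p_cm from the initial masses and k_cm from the final masses.

def cm_momentum(sqrt_s, ma, mb):
    """Closed-form solution of sqrt(ma^2+k^2) + sqrt(mb^2+k^2) = sqrt(s)."""
    s = sqrt_s**2
    return math.sqrt((s - (ma + mb)**2) * (s - (ma - mb)**2)) / (2 * sqrt_s)

def cm_momentum_bisect(sqrt_s, ma, mb, tol=1e-12):
    """Solve the same equation by bisection; the energy sum is increasing in k."""
    lo, hi = 0.0, sqrt_s
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if math.hypot(ma, mid) + math.hypot(mb, mid) < sqrt_s:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

sqrt_s = 10.0
m1 = m2 = 0.5                 # initial masses (arbitrary)
m1p, m2p = 1.0, 1.2           # final masses (arbitrary)
M2 = 0.3                      # constant spin-summed |M|^2 (placeholder)

p_cm = cm_momentum(sqrt_s, m1, m2)
k_cm = cm_momentum(sqrt_s, m1p, m2p)
assert abs(k_cm - cm_momentum_bisect(sqrt_s, m1p, m2p)) < 1e-9

dsigma = M2 / (64 * math.pi**2 * sqrt_s**2) * k_cm / p_cm
sigma_total = 4 * math.pi * dsigma   # trivial angular integral for constant |M|^2
```

For an angle-dependent amplitude, the last step would instead integrate |M(theta, phi)|^2 over the solid angle.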
So when your initial state has only one particle, what do you have? AUDIENCE: Decay. HONG LIU: Yeah, that corresponds to decay. So we can still have this situation: if you have an unstable particle, then it can decay. And it's very important, for many, many physical situations, to calculate the decay rate. So now we have p1 -- suppose this is the initial momentum -- and it decays into k1 up to kn, say n final particles. So now the initial state has only one particle, and the final state is k1 up to kn. So beta remains the same, but alpha has only one particle. And the decay rate is much simpler to define: it's just dP alpha beta, dt. We don't need to divide by the flux and all those things. So now let me explain a little bit. Again, by this dP alpha beta, we always mean the probability of p1 decaying into n final particles -- with, again, final particle 1 in a range around k1, particle 2 around k2, et cetera, and particle n around kn. So when we write dP alpha beta, you should imagine this; we already include that. So now, we can just repeat what we did before. So this dP alpha beta is just given by the overlap squared -- beta, plus infinity with alpha, minus infinity, squared -- divided by the norms of beta and of alpha, times the product over j from 1 to n of d3 kj over (2 pi) cubed times V. So now, it's the same thing as before. The only thing different is that alpha now has one particle; everything else is the same. So you just repeat our previous analysis, which I will not repeat. The only thing different is that now the normalization of the initial state just involves a single energy. So you just repeat the whole thing, and then you find d Gamma alpha beta equal to dP alpha beta divided by dt. And you find that this is just given by 1 over 2 E1, times M alpha beta squared, times d mu. D mu is defined in the same way. So this is the final answer for the decay case.
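A quick numerical sketch of this final formula (my own numbers; natural units). For a 1 -> 2 decay into distinct particles with constant spin-summed |M|^2, integrating d Gamma = |M|^2 / (2 E1) d mu in the rest frame (E1 = m) gives the standard result Gamma = |M|^2 k / (8 pi m^2), with k the momentum of the decay products. Since the rate transforms as m / E (time dilation, a standard kinematic fact), the rate in any moving frame is smaller.

```python
import math

# 1 -> 2 rest-frame decay rate for constant |M|^2, plus its frame dependence.

def cm_momentum(m, m1p, m2p):
    """Momentum of the two decay products in the parent rest frame."""
    s = m * m
    return math.sqrt((s - (m1p + m2p)**2) * (s - (m1p - m2p)**2)) / (2 * m)

M2 = 1.0e-3                    # constant spin-summed |M|^2 (arbitrary)
m, m1p, m2p = 5.0, 1.0, 1.5    # parent and daughter masses (arbitrary, distinct)

k = cm_momentum(m, m1p, m2p)
gamma_rest = M2 * k / (8 * math.pi * m**2)   # total rest-frame decay rate
tau_rest = 1.0 / gamma_rest                  # rest-frame lifetime

E = 50.0                        # lab-frame energy (arbitrary)
gamma_lab = (m / E) * gamma_rest             # rate is smaller in a moving frame
tau_lab = 1.0 / gamma_lab                    # so the lifetime is longer

assert gamma_lab < gamma_rest
assert tau_lab > tau_rest
```

For identical decay products an extra symmetry factor of 1/2 would appear in the phase-space integral; distinct masses are chosen above to sidestep that.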
And for the total decay rate, you just integrate over all momenta. So the total decay rate gamma is obtained by summing over all possible choices of beta and then integrating over all momenta. And the lifetime of the particle, tau, is just equal to 1 over gamma. So one difference with the cross section case: the cross section, we mentioned before, is Lorentz invariant, but the decay rate is not. The decay rate does depend on the frame of the particle. So this lifetime does depend on the frame of the particle; the rest frame of the particle corresponds to just E1 equal to m1, the mass of the particle. And the rest frame decay rate is the largest among all possible decay rates. In all other moving frames, because of the time dilation, the decay becomes slower. So that's why, when particles are moving, they have a longer lifetime. They have a longer lifetime. So this makes perfect sense. So any questions on this? Yes? AUDIENCE: Do you need a finite set? HONG LIU: Yeah, in general, for the real thing, you never know -- beta is what we observe. We observe what the final decay products are. But you can also predict, from your theory, what the possible decays are. But in a real experiment, there may always be some particles we don't know; there may be some hidden interactions. Other questions? OK, good. So that finally concludes this discussion of the cross section and the decay rate. And now, we can study some processes. We only have 10 minutes, so we will not really be able to do it -- maybe just to start it. So in general, we will consider 2 to 2 scatterings. And in that case, we have this very nice formula. So one remark to make: for particles with spin, say spin 1/2 or spin 1.
Spin 1/2 would be electrons, protons, et cetera. And spin 1 would be photons. And the scattering amplitude then will depend on the polarizations of the initial and final particles. So in most experiments, the initial beams are unpolarized -- it's not easy, sometimes, to make a polarized beam. Unpolarized means they just correspond to a superposition of all possible polarizations, an incoherent superposition. And the spin polarization of the final state normally cannot be detected -- it's difficult to detect the polarization of a particle. Actually, if you observe a new particle, it's not even easy to tell whether this particle is a boson or a fermion, so to measure a specific polarization is even harder. Anyway, so the polarizations of the final state are normally not detected. So in this case, when we calculate this kind of cross section to compare with experiment, we should sum over the polarizations of the final state and average over the polarizations of the initial state. So for the final state, we need to sum over them, because we need to sum over all possibilities. But for the initial state, all different polarizations contribute, so we need to average over them. So for example, consider such a process -- a very important process, say in QED, is the annihilation of particles. So if you have a particle and an antiparticle, then they can annihilate. When they annihilate, they annihilate into a photon, and then the photon can split into some other particle and antiparticle. So this is the process: particle a comes in. So this is a, a-bar; this is b and b-bar. So this is the pair creation of b and b-bar from colliding a and its antiparticle.
So in real life, by colliding, for example, an electron and a positron, you can create many particles. So this is one of the most important ways to discover new particles; many new particles were discovered this way. You just collide the electron and positron, and then you will be able to create new particles. For example, you can create a muon and an anti-muon, and you can create a quark and an antiquark, et cetera. So in all these cases, both the initial and final particles are fermions, so you need to specify their polarizations. So this one -- suppose we have p1, r1. So here we have p2; since this is the antiparticle, let's call its spin label r2-bar. And for the b, say this is k1, s1, and this is k2, s2-bar. And then, for the unpolarized process, let me just write down one last formula. So suppose the scattering amplitude is M. Then for the unpolarized process, we need to average over the initial spins r1 and r2, and sum over s1 and sum over s2, of M squared. So essentially, this becomes 1 over 4 times the sum over all possible spin polarizations of M squared. So this is the one we can then compare with experiments. So next time, we will write this down explicitly for this kind of process. Yeah, so let's stop here.
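The unpolarized combination at the end can be sketched with a toy example (placeholder random numbers, not a real QED amplitude): average over the 2 x 2 initial spin labels and sum over the 2 x 2 final ones, i.e. (1/4) times the sum of |M|^2 over all four labels.

```python
import numpy as np

# M[r1, r2, s1, s2]: a made-up complex amplitude for each polarization choice.
rng = np.random.default_rng(0)
M = rng.normal(size=(2, 2, 2, 2)) + 1j * rng.normal(size=(2, 2, 2, 2))

# (1/4) * sum over all four spin labels of |M|^2
unpolarized = (np.abs(M)**2).sum() / 4.0

# equivalently: sum over the final labels, then average over the initial ones
check = np.mean(np.sum(np.abs(M)**2, axis=(2, 3)))
assert np.isclose(unpolarized, check)
```

In a real calculation the sums over spin labels are done with completeness relations for the spinors rather than by brute force, but the bookkeeping is exactly this.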
MIT 8.323 Relativistic Quantum Field Theory I, Spring 2023. Lecture 20: Maxwell Theory and its Canonical Quantization. [SQUEAKING] [RUSTLING] [CLICKING] HONG LIU: So in the last lecture, we concluded the discussion of fermions. And now, we go to the last missing piece before we can talk about QED: how to quantize the Maxwell field, OK? How to get the photon, OK? So today, we start. OK? So this is a short chapter. I think we should be able to finish it this week, in two lectures. So first, let me just remind you of some aspects of the classical Maxwell theory, and then we will talk about its quantization, OK? So the Lagrangian density for the classical Maxwell theory can be written in terms of F mu nu. So F mu nu is just the standard partial mu A nu minus partial nu A mu. And the J mu is just the electromagnetic current, OK? And then from this Lagrangian density, you can derive the equation of motion, which is just the Maxwell equation, OK? So the Maxwell equation is given by partial mu F mu nu equal to minus J nu, OK? And so this is the familiar equation which we see from classical electrodynamics, OK? So now, let's act partial nu on both sides. Since F mu nu is, by definition, antisymmetric, the left-hand side is automatically 0. And so this tells you that, by consistency, the current has to be conserved, OK? The current which appears on the right-hand side of the Maxwell equation has to be conserved; otherwise, you don't have a consistent equation, OK? And another important feature of the Maxwell equation is the so-called gauge symmetry. So both the Maxwell action -- the L integrated over the full spacetime -- and the equation of motion are invariant under the following transformation: A mu goes to A mu prime equal to A mu plus partial mu lambda of x. OK?
And here the lambda x is an arbitrary function of the spacetime coordinates, OK? It's an arbitrary function. So the Maxwell theory -- the action -- is invariant under this transformation, and you can easily check it yourself. Under this transformation, F mu nu does not change, OK? So the contributions from this term and this term just cancel each other, and so F mu nu does not change. And then you see the equation of motion, of course, is invariant under this transformation, because F mu nu does not change and J mu does not transform. And also, the action is invariant, because F mu nu does not change. And naively, this term does change under the transformation, because you have this partial mu lambda; but the additional term, corresponding to partial mu lambda times J mu, vanishes after integration by parts under the integration, because J mu is conserved. And so the action is invariant, OK? So this symmetry is a little bit different from the symmetries we have discussed before, because the transformation depends on an arbitrary function of the spacetime coordinates, OK? For the previous symmetries we have talked about, the transformation parameters are independent of the spacetime coordinates, OK? They're constants. And so that's why this is called a local symmetry. Sometimes, for historical reasons, it's also called gauge symmetry. OK. It's called local symmetry or gauge symmetry -- in contrast to the global symmetries, which we encountered before, whose transformation parameters are spacetime-independent. So the gauge symmetry and the global symmetries -- even though they are all called symmetries, they have very, very different physical interpretations, OK? So a global symmetry is a genuine symmetry. Say the theory is translation invariant: that means that what's happening here in Boston is the same as what's happening in, say, Washington, DC, OK? Generally, things are invariant under that.
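The invariance of F mu nu under this transformation can be spot-checked symbolically (my own example fields, not the lecture's): it follows entirely from the fact that mixed partial derivatives of lambda commute.

```python
import sympy as sp

# Check that F_mu_nu = partial_mu A_nu - partial_nu A_mu is unchanged
# under A_mu -> A_mu + partial_mu lambda, for a sample function lambda(x).

t, x, y, z = sp.symbols('t x y z')
co = (t, x, y, z)

A = [sp.sin(x) * t, x * y * z, sp.exp(t) * y, z**2 + t * x]   # arbitrary A_mu
lam = t**2 * sp.cos(x) + y * z                                 # gauge parameter

A_prime = [A[m] + sp.diff(lam, co[m]) for m in range(4)]

def F(Af, m, n):
    return sp.diff(Af[n], co[m]) - sp.diff(Af[m], co[n])

# mixed partials of lambda commute, so every component of F is unchanged
for m in range(4):
    for n in range(4):
        assert sp.simplify(F(A, m, n) - F(A_prime, m, n)) == 0
```
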
But the local symmetry is very different, OK? So the local symmetry means-- so the presence of gauge symmetry implies that the system has redundant degrees of freedom. OK? So let me just give you a cartoon to illustrate this point. So the space of A mu, of course, is an infinite dimensional space. But let me just-- imagine this blackboard is the space of A mu, OK? This plane is the space of A mu. Under this gauge transformation, it can be viewed as the following. Say for each A mu, imagine you make an arbitrary transformation-- this lambda x. And so this line-- so let's denote the space of lambda x. OK? So at each point A mu, you can make a transformation for different lambda corresponding to different points, OK? And if you start with a different A mu, then you have a different-- yeah. So each of them is called-- so each of them is called the orbit of gauge transformations. OK? And they are parameterized by lambda, OK? And they're parameterized by lambda. And the fact that the physics is invariant under this transformation means that the physics at this point for this choice of A is the same as for this choice of A, OK? And similarly, the physics on this joint-- just physics on the same orbit is the same, OK? They describe the same physics. So in other words, only a cross-section-- so you can have many orbits, OK? So in each orbit, we can just choose a representative point. And then the physics only depends on a cross-section of all these different orbits, OK? So physics only depends on a cross-section, OK? So a cross-section of this orbit, OK? Yes? AUDIENCE: What's lambda? HONG LIU: Yeah, yeah. Lambda. So different points corresponding to different lambda. So you view this, really, as an infinite dimensional space. And so each point corresponding to a different choice of lambda because lambda can be arbitrary function, OK? And yeah. So yeah. Let me just parameterize by lambda x. OK. 
So the fact that we can choose this lambda arbitrarily means that, in this case, there's one scalar degree of freedom -- lambda is just a scalar field; it can be viewed as a scalar field of the spacetime coordinates. So this one scalar degree of freedom, as parameterized by lambda, is redundant. OK? Because any points on this orbit carry exactly the same physics. And so any questions on this? OK. So if you remember, in classical E&M, when you solve the Maxwell equation, the physical quantities are all independent of the choice. So each choice of such a section is called a gauge, OK? The choice of a section is called the choice of a gauge. So when you solve the Maxwell equation, typically, you choose such a section -- you choose a gauge -- and then you can just solve the Maxwell equation within that gauge, so that you no longer have redundant degrees of freedom, OK? So often, this gauge is chosen to simplify the task of solving the Maxwell equation, OK? So we often solve the Maxwell equation by fixing a gauge. OK. So there are two common examples which we use to solve the Maxwell equations. So one is called the Coulomb gauge. OK? So in this gauge, you require the spatial components of A mu to satisfy this equation: the divergence of A equal to 0, OK? So I will use the notation that for A mu, the zeroth component I call phi, and the spatial components I just call the vector A. And also, I often write J mu as rho -- which is the charge density -- and then the electric current j, OK? So in this Coulomb gauge, the Maxwell equation is simplified. OK? So you look at this equation, and you look at different components, OK? And we're not going into detail; let me just write down the answer. When you impose this Coulomb gauge, you find the Maxwell equation becomes the following.
So let me call this equation 1 -- call this equation star. So in the Coulomb gauge, you find the equation star becomes the following: the time component becomes the so-called Poisson equation, nabla squared phi equal to minus rho, and then the spatial component becomes like this. OK? So you get these two equations. So let me just remind you a little bit of the physics encoded in these two equations, OK? So this is the equation in which we can use rho, the charge density distribution, to solve for the electric potential, for the scalar potential. But notice that this equation does not involve time derivatives. OK? So this is not dynamical. So phi is not dynamical; it's determined by rho instantly. OK? So this means that the configuration of phi at a given time is just determined by the configuration of rho exactly at that given time, OK? And there's no dynamics in the phi itself, OK? There's no time derivative of phi; there's no dynamics in the phi itself. So this is first. The second thing is: let's see what this Coulomb gauge condition means. So if you look at this condition, when you go to momentum space -- so you Fourier transform to momentum space -- then it just becomes k dot A equal to 0, OK? So that just means that -- yes? AUDIENCE: This is a technicality, but in the expression for A, shouldn't it be minus phi because it's A sub mu, right? HONG LIU: Which should be minus phi? AUDIENCE: A sub mu equals minus phi comma A. Because in the-- HONG LIU: Sorry. Which equation do you say is-- AUDIENCE: Oh, right under Coulomb gauge. In i. No, you're there. HONG LIU: You mean here? AUDIENCE: Yeah, above that. Yeah. HONG LIU: No, this is my convention, right? AUDIENCE: Oh. HONG LIU: Yeah. This is my convention. AUDIENCE: OK. HONG LIU: Yeah. You can call it minus phi. You can call it positive phi. Yeah. [CHUCKLING] So in momentum space, you can just write it as this. So this just means that the component of A parallel to the momentum is 0, OK?
A only has components which are perpendicular to the momentum, to the spatial momentum. So we can separate, write A as A longitudinal plus A transverse, OK? The longitudinal part is defined to be the component of A which is proportional to k: A i L, when you go to Fourier space, is proportional to k i, OK? And then the A i T is transverse to k, meaning that A i T dot k i is equal to 0. Then the Coulomb gauge condition means that A is equal to A transverse, OK? Just A transverse -- the longitudinal part is 0, OK? The part proportional to the momentum is 0. So now, we can already see -- just combining these two features -- how many dynamical degrees of freedom does the Maxwell theory have? Yes? AUDIENCE: Two. HONG LIU: Why is it two? AUDIENCE: Because phi is not dynamical, so you don't care about that. HONG LIU: Yeah. AUDIENCE: You have two transverse. HONG LIU: That's right. The phi is not dynamical; we don't care. And then this A i L is 0, and so you only have the transverse components. The transverse components are perpendicular to the momentum direction, and you only have two independent components, OK? So we find there are only two transverse dynamical degrees of freedom. And indeed, classically, these correspond to the two polarizations of the electromagnetic wave, OK? So if you look at the EM wave, you only have two independent polarizations. So this is the story for the Coulomb gauge. So the Coulomb gauge is very convenient and has been widely used -- in particular, in nonrelativistic situations, when you don't involve very fast velocities. But the Coulomb gauge, of course, is not perfect; there are some drawbacks. So first, there is no manifest Lorentz symmetry. The Maxwell equation is manifestly Lorentz covariant, but the Coulomb gauge is not, because this condition certainly is not covariant, OK? It is not covariant.
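Before turning to the drawbacks, the two Coulomb-gauge features just combined can be sketched numerically (my own toy fields on a periodic box, standard spectral methods): (i) phi is fixed instantaneously by rho through the Poisson equation, with no time derivative involved, and (ii) A is purely transverse, k dot A = 0 mode by mode.

```python
import numpy as np

n, L = 16, 2 * np.pi
k1d = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
KX, KY, KZ = np.meshgrid(k1d, k1d, k1d, indexing="ij")
K2 = KX**2 + KY**2 + KZ**2
K2safe = np.where(K2 == 0, 1.0, K2)      # the k = 0 mode needs no division

# (i) spectral Poisson solve: phi_k = rho_k / k^2 (this rho has zero mean)
xs = np.arange(n) * (L / n)
X, Y, Z = np.meshgrid(xs, xs, xs, indexing="ij")
rho = np.sin(X) * np.sin(2 * Y) * np.cos(Z)
phi = np.real(np.fft.ifftn(np.fft.fftn(rho) / K2safe))
lap_phi = np.real(np.fft.ifftn(-K2 * np.fft.fftn(phi)))
assert np.max(np.abs(lap_phi + rho)) < 1e-10   # nabla^2 phi = -rho

# (ii) transverse projection, mode by mode: A_T = A - k (k . A) / k^2
rng = np.random.default_rng(0)
Ak = np.fft.fftn(rng.normal(size=(3, n, n, n)), axes=(1, 2, 3))
Kvec = np.array([KX, KY, KZ])
AkT = Ak - Kvec * (np.sum(Kvec * Ak, axis=0) / K2safe)
div_AT = np.sum(Kvec * AkT, axis=0)             # proportional to k . A_T per mode
assert np.max(np.abs(div_AT)) < 1e-6 * np.max(np.abs(Ak))
```

Per mode, the projector delta_ij - k_i k_j / k^2 kills the one longitudinal component and keeps the two transverse ones, which is exactly the degree-of-freedom count above.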
The gauge condition is not the same in different frames, OK? And one consequence of this loss of manifest Lorentz covariance is that this phi is determined instantaneously. So we know that in a relativistic theory, everything should propagate no faster than the speed of light, OK? You cannot have such instant action at a distance. So this equation seems to imply instant action at a distance, OK? But this is an artifact of the Coulomb gauge; the underlying physics is certainly not acausal, OK? So one consequence of this is that the causality is not manifest, OK? Because you have instant action at a distance. But remember that phi itself is not physically observable -- we cannot observe phi physically. So it's OK for phi to have an instant action. But of course, the electric field and the magnetic field are causal. Yes? AUDIENCE: So if we impose a Lorentz invariant condition on the gauge, do we not get any [INAUDIBLE]?? HONG LIU: Yeah, then it will be manifest. And this gauge also does not break causality -- the causality is just not manifest, because this phi is not physical, not directly observable. Yeah. OK. So that motivates -- that's why the Coulomb gauge is mostly suitable in cases involving low velocities, in the nonrelativistic situation. But in a more relativistic situation, people often consider the so-called Lorentz gauge. So the Lorentz gauge is defined to be partial mu A mu equal to 0, OK? Now, this equation is covariant because the indices are contracted. So this is a covariant equation, OK? And so the Lorentz covariance is manifest. Then under this gauge, the equation star becomes very simple, OK? It just becomes a wave equation, partial squared acting on A mu, with J mu as a source. OK? So you see, this equation is indeed covariant, OK? The derivatives are contracted, and then you have J mu, OK?
So now, this is something we are familiar with -- it is like you have four independent massless scalar fields, but each of them has a source, OK? So that's the situation which you actually saw in your pset, OK? In the pset, you looked at a scalar field with a source. So this is like four independent scalar fields, each sourced by J mu. But of course, they are not independent, because we have this gauge condition, OK? We have this gauge condition. And this gauge condition is compatible with this equation: if you act partial mu on both sides, then because of the gauge condition, the left-hand side is 0, and the right-hand side is also 0, from current conservation. So now, let me also make some remarks on the Lorentz gauge. The Lorentz gauge is very convenient, because it gives you seemingly decoupled equations between the different A mu, OK? But the Lorentz gauge also suffers a little bit of an inconvenience, because the Lorentz gauge does not fix the gauge freedom completely, OK? There is still some residual gauge symmetry left, OK? So consider A mu goes to A mu plus partial mu phi, OK? If phi satisfies partial mu partial mu phi equal to 0, then this preserves the gauge condition, OK? This preserves the condition if phi satisfies this equation. So this tells you that the gauge freedom is not completely fixed, OK? So now, if you look at this freedom: before, you could shift by lambda, an arbitrary scalar field -- an arbitrary function. You can view it as an arbitrary scalar field, OK? With no constraints -- we can say it's an arbitrary off-shell scalar field, OK? But now, after you fix the Lorentz gauge, you find that there is still some remaining gauge freedom, and the remaining gauge freedom corresponds to an on-shell massless scalar field, OK?
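This residual-freedom statement can be spot-checked symbolically (my own example, not the lecture's): take a plane wave phi with null momentum, verify it satisfies the massless wave equation, and verify that shifting A by its gradient leaves partial mu A^mu unchanged.

```python
import sympy as sp

# Metric signature (+, -, -, -), diagonal entries listed explicitly.
t, x, y, z = sp.symbols('t x y z')
co = (t, x, y, z)
eta = [1, -1, -1, -1]

phi = sp.cos(t - x)                      # plane wave with null momentum: on shell

box_phi = sum(eta[m] * sp.diff(phi, co[m], 2) for m in range(4))
assert sp.simplify(box_phi) == 0         # satisfies the massless wave equation

A = [t*y, sp.sin(z), x*z, 0]             # arbitrary gauge field

def div(Af):
    """partial_mu A^mu with the index raised by the diagonal metric."""
    return sum(eta[m] * sp.diff(Af[m], co[m]) for m in range(4))

A_prime = [A[m] + sp.diff(phi, co[m]) for m in range(4)]
assert sp.simplify(div(A_prime) - div(A)) == 0   # gauge condition preserved
```

The check works for any on-shell phi, since div(A') - div(A) is exactly the wave operator acting on phi.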
So if phi satisfies this equation -- this equation is like the equation of motion for a massless scalar field -- then, after you fix the Lorentz gauge, there is still the freedom of an on-shell massless scalar field left, OK? So this is the first remark. So the second remark is related to how we treat this Lorentz gauge when we quantize, OK? So there is an alternative way. You can just fix the Lorentz gauge in your equation of motion: anywhere in your equation of motion you see this thing, you set it to be 0, OK? But we can also impose the Lorentz gauge at the action level, OK? So we can fix the Lorentz gauge at the action level as follows. We consider a new action -- say, a new Lagrangian -- related to the previous one by a shift like this, OK? And the coefficient here -- let me not use lambda; let me just call it xi, OK? So xi here is an arbitrary constant. OK? So now, let's look at this new theory we just obtained from the original theory by adding this term, OK? So now, let's look at its equation of motion. If you look at this equation of motion, you find it's given by the following form: partial square A mu -- partial square just means partial nu partial nu, OK? -- plus xi minus 1 times partial mu of partial nu A nu, equal to J mu, OK? You find an equation like this. So you get this new term because of the new term in the Lagrangian, OK? So now, let's act partial mu on both sides. Since the current is conserved, the right-hand side is 0; and when you act partial mu on the left-hand side, the terms combine, and you find that you get partial square of partial mu A mu equal to 0. So now, we can impose partial mu A mu equal to 0, OK?
We can enforce with star star by imposing boundary conditions so that this equation only has trivial solutions, which means that this partial mu A mu is equal to 0, OK? Sorry. I'm running out of space here, OK? So you have this equation. And now, you can enforce the Lorentz gauge by putting the boundary conditions so that this equation only has 0 solution, OK? And then equivalently, then you have imposed the Lorentz gauge. But the nice thing of this approach is that now, you can treat it as action-level. And the advantage, we will see later, OK? Because in quantum theory, the action is very important-- in particular, when you do the path integral. In particular, when you do the path integral. Any questions on this? Yes? AUDIENCE: Could you go into a little bit more detail about what the specific [INAUDIBLE]?? HONG LIU: Yeah. We will do it later in your Pset. [LAUGHTER] Right. Yeah. So we will touch on this question. We will touch on this question when we quantize it. This feature will become very important. And then you will have an opportunity to work out in detail in your Pset. Other questions? Yes? AUDIENCE: Is there a subtlety as to why you-- does it matter when you fix your gauge at the action level or later on with your equation of motion? HONG LIU: Yeah. It's just a little bit easier. Yeah. Later, when we do the path integral, you will see it's easier. Yeah. Classically, there is no difference. But quantum mechanically, we always want to do is at the action level. Yeah. AUDIENCE: [INAUDIBLE] HONG LIU: Sorry? AUDIENCE: [INAUDIBLE] HONG LIU: Yeah. Yeah, we have an extra constant, but the physics should not depend on this constant. Other questions? OK, good. So let me just point out-- when xi is equal to 1, the story is particularly simple. So for general xi, you get this more complicated equation, but for xi equal to 1, you notice that here this term is just 0. 
And then, even with this new Lagrangian, you get exactly the same equation of motion as if you had imposed the Lorentz gauge in the equation of motion, OK? So xi equal to 1 is particularly simple, and for a reason. If you put xi equal to 1 here, then you will find that this term actually cancels some of the terms in this L exactly. So when xi is equal to 1, you find that the Lagrangian becomes the following: it becomes 1/2 partial mu A nu partial mu A nu minus J mu A mu, OK? It just becomes very simple. Again, the action now becomes like you have just a bunch of free massless scalar fields, OK? Good. Any questions? Yes? AUDIENCE: For the [INAUDIBLE]? HONG LIU: Oh, sorry. Yeah. Yes, so this mu should be downstairs. OK. So even at the action level, it looks like we have four decoupled massless scalar fields. OK, good. So now, we are ready. So this was a quick review of the classical story. So now, let's discuss how to quantize it. OK. And again, we will first do the standard canonical quantization, OK? Write down the most general solutions to the operator equations. And then we will discuss using the path integral, OK? The path integral is convenient for treating interactions, and the canonical approach is convenient for understanding what the physical degrees of freedom are -- for understanding what's going on physically, OK? So for simplicity, let's just forget about J mu, OK? Let's just look at the Maxwell theory itself. J mu does not do anything in terms of the quantization of the Maxwell theory, OK? So we don't need to worry about it when we quantize this theory. So this theory is quadratic in A mu. So this is what we call a free theory, OK? Because it's quadratic in A mu, there's no interaction, OK? So in the absence of a source -- in the absence of the charge density or current density -- the Maxwell theory has no self-interaction, OK? It has no self-interaction.
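Returning to the xi = 1 simplification: the claim that the gauge-fixed Lagrangian reduces to the "four scalar fields" form up to a total derivative can be spot-checked symbolically (my own example field; mostly-minus signature and overall sign conventions are my assumptions, chosen so the identity is clean).

```python
import sympy as sp

# Check: -1/4 F_mn F^mn - 1/2 (d.A)^2  =  -1/2 d_m A_n d^m A^n  + total derivative,
# where the boundary term is 1/2 d_m ( A_n d^n A^m - A^m d.A ).

t, x, y, z = sp.symbols('t x y z')
co = (t, x, y, z)
eta = [1, -1, -1, -1]                   # diagonal metric entries, (+,-,-,-)

A = [t*x, y**2, z*t, x*y*z]             # arbitrary polynomial gauge field

def d(f, m):
    return sp.diff(f, co[m])

F2 = sum(eta[m]*eta[n]*(d(A[n], m) - d(A[m], n))**2
         for m in range(4) for n in range(4))
divA = sum(eta[m]*d(A[m], m) for m in range(4))

lhs = -sp.Rational(1, 4)*F2 - sp.Rational(1, 2)*divA**2
rhs = -sp.Rational(1, 2)*sum(eta[m]*eta[n]*d(A[n], m)**2
                             for m in range(4) for n in range(4))

# boundary term X^m = eta^mm ( A_n d^n A_m - A_m d.A ), summed with d_m
Xup = [eta[m]*(sum(eta[n]*A[n]*d(A[m], n) for n in range(4)) - A[m]*divA)
       for m in range(4)]
total_derivative = sp.Rational(1, 2)*sum(d(Xup[m], m) for m in range(4))

assert sp.simplify(lhs - rhs - total_derivative) == 0
```

Since total derivatives do not affect the equations of motion, this is why the xi = 1 theory looks like four decoupled massless scalars at the action level.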
So the photon is free in the absence of sources. So this is a free theory. But the quantization of this theory is actually very subtle due to the following reasons, OK? So, subtleties in quantization. First is that because of the gauge symmetry, there are redundant degrees of freedom, OK? There are redundant degrees of freedom, which are unphysical, OK? So we want to quantize-- should quantize-- only physical degrees of freedom, OK? So this is much more subtle at the quantum level than at the classical level. At the classical level, we just fix the gauge, OK? Then that's it. But at the quantum level, things fluctuate, OK? So you have to make sure the fluctuations you look at correspond to physical fluctuations, not to fluctuations of those unphysical degrees of freedom, OK? And so that makes the quantum theory more subtle compared with the classical one, OK? And the second subtlety is that F mu nu is antisymmetric. So the second one is more technical. So this one is very conceptual. It's antisymmetric. So that means there's no partial 0 A0 term, OK? Because this is symmetric between the two indices. And these are antisymmetric between the two indices. There's no A0 partial 0 A0. Then that means there's no time derivative term for the A0 component in your action, OK? So that means L does not contain time derivatives of A0. So this means that if we look at the momentum conjugate to A0-- the derivative of the Lagrangian density with respect to the time derivative of A0-- this is just identically 0. So again, this corresponds to a constraint equation. This does not express a canonical momentum in terms of A0 dot, OK? Or the time derivative of anything. So this is a constraint equation. And again, if you want to treat it properly, we have to use constrained quantization, OK? We have to use constrained quantization. And so this leads to subtleties, OK? Remember-- we already discussed such a constraint for the Dirac equation.
And there we avoided going into such constrained quantization by using a trick. And here we will do similarly, OK? We will not directly deal with constrained quantization, but we will use some trick to go around this issue. OK? Good. And so let me just write down, in passing, the canonical momentum conjugate to Ai: this is Ai dot, OK? Dot is the time derivative. And this corresponds to minus F 0i. And this gives you just Ei, OK? So the canonical momentum corresponding to Ai is just the electric field. OK. So it's just the electric field, OK? So with this warning, let's proceed to quantize the theory, OK? So as we said, we want to quantize only the physical degrees of freedom, OK? We don't want to quantize those unphysical degrees of freedom. And so we have to fix the gauge, OK? So we have to fix the gauge, OK? So first, let's see how to do it in the Coulomb gauge. The Coulomb gauge is conceptually simple, OK? So in this case, in the absence of a source, let's look at the two equations. And then again, we follow the same strategy as before. We follow the same strategy as before. We first find the most general classical solution. And then we turn that into the quantum operator solution, OK? The same strategy. So in the Coulomb gauge, we have two equations: del squared phi equal to 0, and partial mu partial mu Ai equal to partial i partial 0 phi, OK? We have these two. So these are the two equations with J equal to 0, OK? And also, we have the gauge condition, which means that Ai must be transverse, OK? So here are the equations we have to satisfy. OK. Here are the equations we have to satisfy. So in this gauge, we don't have this problem, the constrained quantization, because phi is not a dynamical variable, OK? So we don't have to worry about quantizing it, OK? And in fact, if you look at this equation-- this is a so-called elliptic equation.
And then you can impose the boundary condition. So if you require phi to go to 0 at infinity-- spatial infinity-- then this equation only has the identically zero solution. So the phi-- you can just set it to be 0. So phi is just 0, OK? And so now, since phi is just 0, we don't have to worry about this canonical-- yeah. We just don't have to quantize phi. And now, the equation for Ai becomes very simple. You just have partial mu partial mu Ai equal to 0. And now, this just becomes the massless scalar field equation, OK? This is just like you have a massless scalar field equation, OK? So now, we can just proceed with the quantization, OK? But remember-- Ai is only transverse, OK? Ai has to be transverse. OK. So now, we impose this as an operator equation. So all of this should be interpreted as operator equations when we quantize. And so that means that Ai as an operator can only have a transverse component. So now, we can just quantize the theory, OK? So the solution to this, of course, we already know, OK? So we can just now proceed with the quantization. So the first step is that we have to write down the canonical commutation relation. So this means that the commutator of Ai with pi j should be i delta ij, OK? We have to impose this, OK? So this equation-- in the current context, if I write it carefully, it's Ai of t, x, and then Ej of t, x prime. So this should be equal to i delta ij delta of x minus x prime. OK? Yes? AUDIENCE: To pick our gauge to start quantizing the theory, is there a reason why we are using the Coulomb gauge here? We discussed that it violates causality-- HONG LIU: No, no, no. It does not violate causality, right? It's just that causality is not manifest. It does not violate causality. AUDIENCE: What does that mean, not manifest? HONG LIU: It means-- so manifest means at every step, you see that causality is preserved. And not manifest means that you have to check it to make sure causality is preserved. Yeah. So I say this equation is covariant.
I don't have to check it. I know this is covariant. I look at the equation. It's covariant, OK? But then if I look at this theory, I don't know that theory is covariant. But this theory is actually covariant. So the way to check whether it's covariant is you check its observable quantities. Yeah. But phi is not an observable quantity. So that's why we say it's not manifestly causal. Yeah. So the Coulomb gauge is a convenient gauge to use, even quantum mechanically. Yes? AUDIENCE: So the canonical momentum here is not related to the field momentum E cross B, not related to-- we have the momentum carried by the field E cross B? HONG LIU: No, no, no, no. No, it has nothing to do with that. This is the canonical momentum conjugate to the field. And the momentum you're talking about is the momentum carried by the electromagnetic field. It's the spacetime momentum carried by the electromagnetic field. So that's the analog of the Noether charge. That momentum is the analog of the Noether charge. Yeah. For the scalar case, yeah. Remember-- in the scalar case, there are also two momenta. Yeah. Other questions? OK, good. So this is wrong, OK? If we just naively write down this canonical commutation relation, it is wrong, because this equation is incompatible with the Coulomb gauge condition, OK? So imagine you just act with the derivative on this-- the derivative with respect to x. And since this is an operator equation-- yeah. So let me call this equation 1. So if we act with partial i on 1, then on the left-hand side, I have the commutator of partial i Ai of t, x with Ej of t, x prime. On the right-hand side, I have the derivative of the delta function-- partial i delta, OK? And the right-hand side is nonzero. It's just a derivative of the delta function. But the left-hand side, according to this, should be 0, OK? So we have a contradiction, OK? So this equation cannot be right. And the reason is simple. It's because this equation also includes the longitudinal degrees of freedom.
But we said-- Ai can only be transverse. But this equation actually has three components. It also includes the longitudinal part. So we should not include that, OK? So what we should do-- we should only look at the Ai T, OK? So in fact, we should look at Ai T of t, x and its conjugate momentum, which is called pi j T of t, x prime. So this should be i delta ij delta 3 of x minus x prime, OK? So we should look at only the transverse component, OK? And the pi i T-- the conjugate momentum for the transverse Ai-- if you start from here, you can convince yourself, OK? This is just equal to partial 0 Ai T, OK? Just partial 0 Ai T, just the time derivative. So remember-- here Ei is equal to partial 0 Ai minus partial i A0. And this part is 0 because phi is equal to 0. And then here, if you take Ai to be transverse, then pi is transverse too, OK? But this equation is still not correct, because the left-hand side is transverse, but the right-hand side is not transverse, OK? So here we have to impose a transverse projector. So rather than writing it as delta ij, what I should do is write it as a transverse projector, OK? So Pij is such that when you act it on Aj, it projects onto Ai T, OK? So this is the transverse projector, OK? And now, this is a consistent equation, OK? Now, this is a consistent equation. OK? So this transverse projector, you can easily write down in momentum space. But in coordinate space, it's a little bit awkward to write down. So formally, in coordinate space, this Pij T can be written as follows-- as delta ij minus partial i partial j divided by del squared, OK? And you can easily understand the meaning of this equation because when you act on the-- yeah. Anyway, you can check formally yourself that this works, OK? It gives you the transverse part.
You can understand this equation by going to momentum space. Going to momentum space, this just becomes ki, and this kj. This just becomes k squared, OK? So this is just ki times kj divided by k squared, OK? And so the coordinate space definition of this just corresponds to the Fourier transform of the momentum space expression, OK? Just formally write it like this, OK? Are there any questions on this? Yes? AUDIENCE: So we don't really understand the operation of 1 over del squared. We just say this is the coordinate space representation of the momentum space? HONG LIU: Yeah. Yeah, yeah. We just use this notation to denote what we mean by the Fourier transform of the momentum space expression. Yeah. Yes? AUDIENCE: Do you mean the inverse of the Laplacian when you write 1 over grad squared? HONG LIU: Huh? AUDIENCE: Do you mean the inverse of the Laplacian when you write something something something over grad squared? HONG LIU: Yeah. Yeah, this is just formal notation. Once you can check-- say, if you have something like partial i of some function-- that is a longitudinal part of A. So one thing you can check yourself is, if you act this on such a longitudinal piece, the partial i partial j term cancels the delta ij term, and you get 0. So acting on a general Ai, it removes the longitudinal part and gives you the transverse part. Yeah. Anyway, just try to treat this as notation, OK? In momentum space it is very easy to understand. OK. So this is now our canonical commutation relation. And now, we only quantize the physical degrees of freedom, OK? We only quantize the transverse degrees of freedom. And everything else-- we don't worry about it, OK? Everything else-- we don't worry about it. And the transverse part just satisfies the standard massless equation of motion, OK? So remember-- the transverse part just satisfies the standard equation of motion. So now, we can just immediately write down the most general expansion of Ai T.
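The defining properties of the momentum-space transverse projector just discussed can be checked numerically. This is a minimal sketch; the vector k below is an arbitrary example, not one from the lecture.

```python
import numpy as np

def transverse_projector(k):
    """Momentum-space transverse projector P_ij(k) = delta_ij - k_i k_j / k^2."""
    k = np.asarray(k, dtype=float)
    return np.eye(3) - np.outer(k, k) / np.dot(k, k)

k = np.array([1.0, 2.0, 2.0])   # arbitrary example momentum
P = transverse_projector(k)

# P kills the longitudinal direction: P_ij k_j = 0
assert np.allclose(P @ k, 0.0)
# P is a projector: P^2 = P
assert np.allclose(P @ P, P)
# It projects onto a 2-dimensional (transverse) subspace: tr P = 2
assert np.isclose(np.trace(P), 2.0)
```

The trace being 2 is the statement that only two of the three components of Ai are physical.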
We just write down the most general solution that satisfies this-- yeah. This is just like a massless equation. So we can just do what we did previously, OK? So previously, we just wrote ak e to the i k x plus ak dagger e to the minus i k x. And the omega k in this case would be just equal to k, OK? As appropriate for a massless field. But now, here we have two independent components, OK? So we have two independent components. So essentially, we have two independent solutions, OK? So now, let me parameterize the two independent solutions by polarization vectors-- write it more generally with polarization vectors like this: a sum over r equal to 1 to 2 of epsilon i r, OK? And then this is r. This is r. OK? So epsilon r, for r equal to 1 to 2, are a basis of transverse vectors, OK? Because Ai has to be transverse, OK? So essentially, you can treat it as the polarization associated with this ak, OK? So by definition, this should satisfy epsilon i r ki equal to 0. So this should be transverse, OK? And we also define them to be orthogonal to each other. So r and s here are just 1, 2, OK? So they are orthonormal to each other, OK? And they also satisfy another condition. If you have two independent transverse vectors, then when you sum them together, you should get the projection operator, OK? So you should get Pij T of k, OK? So the epsilon i are k dependent. They depend on the spatial momentum because they have to be orthogonal to the momentum, OK? And this is just the momentum space version of this. This is delta ij minus ki kj over k squared, OK? So this is the orthonormality condition, and this is the completeness condition, OK? It's a completeness condition. Any questions on this? So now, this is the expansion for Ai, OK? So the only difference from the scalar case is that now, here you have two independent vectors. And now, we have allowed the general polarization associated with the-- yeah. We have parameterized the general solution using two independent polarization vectors, OK?
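The orthonormality and completeness conditions just stated can also be verified explicitly. A sketch: the construction below (Gram-Schmidt plus a cross product) is one possible way to build the two transverse polarization vectors, and the k is again an arbitrary example.

```python
import numpy as np

def polarization_basis(k):
    """Two orthonormal vectors epsilon^1, epsilon^2 perpendicular to k."""
    k = np.asarray(k, dtype=float)
    khat = k / np.linalg.norm(k)
    # Start from any vector not (nearly) parallel to k.
    trial = np.array([1.0, 0.0, 0.0])
    if abs(np.dot(trial, khat)) > 0.9:
        trial = np.array([0.0, 1.0, 0.0])
    e1 = trial - np.dot(trial, khat) * khat   # project out the k direction
    e1 /= np.linalg.norm(e1)
    e2 = np.cross(khat, e1)                   # completes the orthonormal triad
    return e1, e2

k = np.array([1.0, 2.0, 2.0])
e1, e2 = polarization_basis(k)

# Transversality: epsilon^r . k = 0
assert np.isclose(np.dot(e1, k), 0.0) and np.isclose(np.dot(e2, k), 0.0)
# Orthonormality: epsilon^r . epsilon^s = delta^{rs}
assert np.isclose(np.dot(e1, e1), 1.0) and np.isclose(np.dot(e1, e2), 0.0)
# Completeness: sum_r epsilon^r_i epsilon^r_j = delta_ij - k_i k_j / k^2
P = np.eye(3) - np.outer(k, k) / np.dot(k, k)
assert np.allclose(np.outer(e1, e1) + np.outer(e2, e2), P)
```

The last assertion is exactly the completeness condition on the board: summing the two polarizations reproduces the transverse projector.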
So now, you can plug this back in here. Now, you can find the commutation relation between a and a dagger. Then you find, as you would expect, that they satisfy the standard algebra. So from the commutation relation, we find that the commutator of ak r with ak prime s dagger is equal to delta rs times the delta function in the momenta, OK? With the rest 0, OK? So r and s equal to 1, 2. And the others are 0. OK. So now, we can write down the Hilbert space. We obtain the Hilbert space again just as we did before. So we specify a vacuum by requiring that ar k annihilates the vacuum for any k and r equal to 1 and 2. And then the single particle state is given by just ak r dagger acting on 0. So we denote this as k and epsilon r, OK? So this describes a single photon state, with polarization vector given by epsilon r, OK? So remember-- epsilon r is a transverse vector, OK? And you have two independent ones. Yes? AUDIENCE: Should there be some normalization, like square root 2 [INAUDIBLE]? HONG LIU: Yeah. I believe so. Yeah, yeah. Let me just put the proportionality. Right, yeah. Thank you. OK. So yeah. So that concludes the story for quantization in the Coulomb gauge. So just to summarize-- the Coulomb gauge is conceptually pretty simple, OK? The advantage of the Coulomb gauge, we said, is we can just directly quantize the physical degrees of freedom, OK? Because the unphysical degree of freedom, phi, is just automatically equal to 0. And the longitudinal part we can just throw away by hand, by promoting the Coulomb gauge condition to an operator equation. And then we just have the transverse part. And once you have the transverse part, you can just quantize it as a free massless field, OK? But the drawback, of course, is that it is not manifestly Lorentz invariant, OK? Not manifestly Lorentz covariant. OK? Any questions on this? Good. So now, let's look at the Lorentz gauge.
So these two are the most commonly used gauges. And also, these are two representatives, OK? They're two representative gauges, because quantizing them involves completely different procedures. And contrasting what's happening in the Coulomb gauge with what's happening in the Lorentz gauge will actually be very instructive, OK? So we will just consider these two cases. So now, let's move on to the Lorentz gauge. So in the Lorentz gauge, we will use a somewhat different strategy, because in the Lorentz gauge-- first is that solving the gauge condition becomes more difficult, OK? And also, in the Lorentz gauge, there is a residual gauge freedom, OK? So there are two things. So in the Lorentz gauge, instead, we will start with this action. So we will start with this action, which we said classically can be used to fix the Lorentz gauge, OK? And when xi is equal to 1, this just becomes partial mu A nu and partial mu A nu, OK? So this is for xi equal to 1. So you say, this theory-- we know how to quantize. These are just four decoupled massless scalar fields, OK? For xi equal to 1, it's particularly simple. We just have four-- yeah. Oh, by the way, I should mention-- once you have that form, and once you have this commutation relation, and once you have this relation, you can find any correlation functions of A, OK? Two point functions-- propagator, Feynman propagator, retarded propagator. You can find all of them explicitly. So now, this theory seems to be very simple. This is just four decoupled massless scalar fields, OK? So we can just straightforwardly write down the canonical momentum. So pi nu-- this is just equal to A nu dot, OK? And the Hamiltonian density is just given by 1/2 A nu dot A nu dot plus-- just like you have four independent massless scalar fields, with the equation of motion given by partial squared A nu equal to 0, OK? So now, you can just completely take over, OK?
We can just completely take over what we did for the massless scalar, OK? To write down the answer for the massless scalar-- but we're running out of time today. So next time, we will talk about-- so now, you can just treat it as four massless scalars. But then we have a problem, OK? Because, as in the Coulomb gauge, we only have two physical degrees of freedom. But here we have four, OK? So somehow, we have to get rid of two, OK? And then we will find the way to get rid of the two, OK? OK, yeah. So yeah, let's stop here for today.
MIT_8323_Relativistic_Quantum_Field_Theory_I_Spring_2023 | Lecture_18_Discrete_Symmetries.txt | [SQUEAKING] [RUSTLING] [CLICKING] HONG LIU: OK. Let us start. So last time, we started talking about discrete symmetries. So if you have a complex scalar theory-- so if you have a complex scalar theory, then in addition to the U1 symmetry, classically there are also discrete symmetries. The first is parity. So we normally denote it with the symbol P, and it acts on x mu by taking t, x to t, minus x, OK? So you just flip all the spatial directions. And you also have time reversal, OK? Time reversal. And that I normally write as a script T on x mu, taking t, x to minus t, x, OK? It does not do anything with x. And then we also have a charge conjugation, which takes phi to phi star, OK? And so of course, our goal is to understand these symmetries at the quantum level, OK? So at the quantum level-- in quantum mechanics, symmetry transformations correspond to unitary transformations, OK? They correspond to unitary transformations. We have already done that for continuous symmetries. And the same thing is true for the discrete symmetries. In quantum mechanics, the implementation of the discrete symmetries is also through some unitary operator. But there's a slight subtlety with time reversal, because all the other symmetry transformations correspond to unitary transformations, except for time reversal. That one actually corresponds to an anti-unitary transformation. So have you seen that in quantum mechanics? OK. OK. Let me just review it here. So yeah. Previously, I wanted to say "recall." But we don't have to recall it. So let's denote by UT the unitary operator for time reversal-- so this is the operator for time reversal, OK? And the claim is that this has to be anti-linear. It's not actually ordinary unitary. It's called anti-unitary, OK?
And so now let's try to explain where those things come from. So suppose that you have a system which we say is time reversal invariant. So by definition, the symmetry means that the corresponding transformation should commute with your Hamiltonian, OK? So this is what we mean by the system being time reversal invariant, OK? So here, I'm just talking about quantum mechanics, OK? Let's not even worry about field theory. So now-- so suppose we have a time reversal invariant system, in which the time reversal operator commutes with your Hamiltonian. And then let's act with this UT on the wave function, OK? Then you get some psi prime. So by time reversal, we expect physically, OK, that if psi satisfies the standard Schrodinger equation, then psi prime should satisfy the Schrodinger equation with t goes to minus t, OK? It's the same Schrodinger equation, but with t goes to minus t, OK? So that's what we mean by time reversal, OK? So now let's try to do that. So let's start with this equation. And now let's act with UT from the left on this equation, OK? Yeah. Before that, let me do one more step-- this means that psi prime should satisfy the following equation, OK? So you just take t to minus t. And then you get the additional minus sign, OK? So psi prime should satisfy this equation. So now let's see how you get this equation from here by using this relation, OK, by using this relation. So this is simple. Let's just act with UT on the left of this equation. So let me call this equation star and this one star-star. So if I act with UT on star, then on the right-hand side, UT commutes with H, OK? So on the right-hand side, we just directly get UT H psi. And we can just commute them. So this is the same as H UT psi, and UT psi is just psi prime. So the right-hand side just gives H psi prime, OK? And then from the left-hand side, you get UT i partial t psi, OK?
So in the ordinary situation, we would have trouble. Because this is an operator, i is a c-number, and partial t is just a derivative. So you could just directly pass UT through this i and the partial t. But then when UT acts on psi, you get the same equation rather than the equation with a minus sign, OK? Then you have trouble. And the only way to get that equation is to require-- we require-- when UT passes through i, it takes it to minus i, OK? We impose this condition. When UT passes through i, you get a minus i, OK? So this means we can define an anti-linear operator. An anti-linear operator is defined as follows. It means that if you have some operator A acting on some number c, when you pass A through that number c, you get c star A, OK? So an anti-linear operator A is defined to have this property. And c is just an ordinary c-number, OK? So you see here: in order for UT to be a time reversal operator, UT has to be anti-linear, OK, so that when you pass it through i, it takes i to minus i. Because minus i is the complex conjugate of i, OK? Good. Any questions on this? Good? So yeah. So UT should be anti-linear. OK. So also, for an anti-linear operator, the adjoint should be defined differently. Yes? AUDIENCE: [INAUDIBLE] you started off with trying to show star-star using this-- how UT acts on star. But then you kind of inserted this requirement ad hoc to satisfy star-star. HONG LIU: That's right. Yeah, that's right. AUDIENCE: So I'm a bit confused. Then how do you come up-- why the star-star [INAUDIBLE]?? HONG LIU: No, the star-star is the definition of time reversal. Right? Star-star is what physically we mean by time reversal. If you want something to implement the time reversal, it has to achieve star-star. Yeah. Other questions? OK. Good. And then we derived that, in order to achieve the time reversal, UT has to be anti-linear.
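The chain of steps just described can be summarized compactly (with hbar set to 1, as throughout):

```latex
% Time-reversal invariance: [U_T, H] = 0.
% Schrodinger equation (star):
i\,\partial_t \psi = H\psi
% Desired equation for \psi' = U_T\psi (star-star):
-\,i\,\partial_t \psi' = H\psi'
% Acting with U_T on (star) and using [U_T, H] = 0:
U_T\,(i\,\partial_t\psi) \;=\; U_T H\,\psi \;=\; H\,U_T\psi \;=\; H\psi' .
% The left-hand side matches (star-star) only if
U_T\, i = -\,i\, U_T ,
% i.e. U_T is anti-linear:
U_T\, c = c^{*}\, U_T \quad \text{for any c-number } c .
```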
And then, because an anti-linear operator has this weird property, the adjoint of an anti-linear operator-- the definition should also be changed, OK? So the adjoint of an anti-linear operator A is defined through, say, the overlap of psi with A dagger chi, OK? If A is an ordinary operator, then the definition of the adjoint is that this is equal to the overlap of A psi with chi, OK? So this is the standard definition. Yes? AUDIENCE: Yeah. So I was just confused about how you deal with making scalar multiplication noncommutative, and I was wondering-- wouldn't it make sense to just say that UT and H anticommute? HONG LIU: UT and H anticommute? So by taking the-- yeah, you can also do that. It just does not work as well. Yeah. You use whatever works. Yeah, yeah. It's much easier just to make UT anti-linear. Yeah. Yeah. Yeah, yeah. For this particular equation, it can work. But then you have to go against the principle that the symmetry commutes with H. Yeah, yeah. That is considered to be a more sacred principle. Yeah. Other questions? And indeed-- yeah, yeah. Right. OK. Good. So the way we define the adjoint should also be different. So this is the standard definition. And for anti-linear, you have this definition, where you put an additional star, OK? You put an additional star. And the reason you put the additional star is precisely due to this property, OK? Because remember, if you put a c-number here, you should be able to take the c-number outside this kind of overlap. But in order to be compatible with the property of the anti-linear operator, you have to define the adjoint this way, so that you have a consistent story, OK? I will leave it to you as an exercise to show this is actually the right definition, OK? And then we can define anti-unitary: an operator is anti-unitary if you look at the overlap of U psi with U chi.
So the standard unitary condition is just that this is equal to the overlap of psi with chi, OK? And now for anti-unitary, again, you just put the star, OK? You just put the star, OK? So the claim is that UT, the time reversal operator, should be an anti-unitary operator, in order to be a symmetry, OK? So this is just a brief review of time reversal in quantum mechanics. So now let's look at the quantum version of those symmetries in quantum field theory, OK? So first, let's do a warm-up. Before we do it for the Dirac theory, let's warm up with complex scalar fields, which are much simpler, OK? So in this theory, the action is invariant, we said, under those symmetries, OK? If we just flip the spatial directions, clearly the Lagrangian is invariant. If we reverse the time direction, because the time derivative appears quadratically, it's also invariant. And again, if you change phi to phi star, the Lagrangian is invariant. OK. So quantum mechanically, those symmetries should be implemented by some unitary operators. OK. So now the parity transformation on the field corresponds to: you start with phi x, and then you go to phi prime x. So phi prime x should be equal to phi Px, OK? And then-- yeah, you can also put a phase here, OK? So you can show that this transformation certainly is a symmetry of this Lagrangian, OK? Yeah, because as I said, if you just flip it, it does not change anything. But you can also put an arbitrary phase here in principle, OK? You can in principle put an arbitrary phase here. Because these terms only depend on phi together with phi dagger, OK? So this is P. And then for T, phi x goes to some phi prime x equal to, again, some phase eta T times phi Tx, OK? And then the charge conjugation is normally called C. So parity is normally called P, OK? And here, charge conjugation corresponds to phi x goes to phi prime x equal to eta C phi star x, OK?
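The anti-unitarity condition reviewed above can be illustrated with a toy numerical example. A sketch: any anti-unitary operator can be realized as T = U K, an ordinary unitary U composed with complex conjugation K; the U below is a random example unitary, not a physical time-reversal operator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a random 4x4 unitary from the QR decomposition of a complex matrix.
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
U, _ = np.linalg.qr(A)

def T(psi):
    """Anti-unitary T = U K: complex-conjugate first, then rotate."""
    return U @ np.conj(psi)

psi = rng.normal(size=4) + 1j * rng.normal(size=4)
chi = rng.normal(size=4) + 1j * rng.normal(size=4)

# Anti-linearity: T(c psi) = c* T(psi) for a c-number c
c = 2.0 + 3.0j
assert np.allclose(T(c * psi), np.conj(c) * T(psi))

# Anti-unitarity: <T psi, T chi> = <psi, chi>*
lhs = np.vdot(T(psi), T(chi))          # vdot conjugates its first argument
rhs = np.conj(np.vdot(psi, chi))
assert np.allclose(lhs, rhs)
```

The second assertion is exactly the "put the star" version of the unitarity condition: overlaps are preserved only up to complex conjugation.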
So in each case, such a transformation should be generated by some unitary operator. So for example, this should be generated by UP phi x UP dagger, OK? And then similarly for this one, we call it UT-- UT dagger. And here, UC and UC dagger similarly, OK? So at the quantum level, this should be generated by the action of some unitary operator. Yes? AUDIENCE: Do you always have this U1 symmetry, or do you have-- is there an interacting theory where you have one power of phi. But we don't have-- HONG LIU: Sorry? AUDIENCE: So you can add an arbitrary phase? HONG LIU: No, no. I'm going to talk about the phase, OK? I'm going to talk about the phase. So if you just look at that Lagrangian, you see that when you put the phase, it is invariant. OK. But now the question is, can those phases be arbitrary? OK? Can those phases be arbitrary? So in the case of this complex scalar, actually they should not be. So let's use the example of parity. So we know that when you do parity twice, OK-- if you do parity twice, you should go back to your original. Yeah, your system does not change, OK? If you just do the reflection twice, your system does not change. So that means that, when you do this twice, you haven't changed anything at all, OK? So when we do it twice, phi should come back to itself. And indeed, P squared-- if you do it twice, it comes back to itself. But then this phase should also come back to itself, OK? So that means that eta P squared should be equal to 1, OK? So if you want the symmetry to come back to itself, then eta P can only be plus or minus 1. And it actually can be minus 1, OK? It does not have to be plus 1. So when eta P is equal to plus 1, we call it a scalar. When eta P is minus 1, we call it a pseudoscalar, OK? And a similar thing can be said about eta T and eta C, OK? So by definition, this unitary operator acts on phi.
It should act as this. So now we can-- and now remember, phi is expressed in terms of the creation and annihilation operators, OK? So by requiring-- yes? AUDIENCE: [INAUDIBLE] scalar or pseudoscalar, is it true that if eta P is equal to 1, then eta T and eta C are also 1? HONG LIU: No, no. They don't have to be. They can be-- say this is equal to 1, and eta T can be minus 1. Yeah, yeah. They don't have to be. Yeah. AUDIENCE: [INAUDIBLE] HONG LIU: Yes? AUDIENCE: If eta is 1, it's a scalar in what sense, I guess? HONG LIU: Huh? AUDIENCE: It's a scalar in what sense? Especially since the other etas don't have a [INAUDIBLE]. Like, I don't understand what a scalar is when it is equal to 1. HONG LIU: You just say it's a convention. It's a name. Yeah, we just want to distinguish these two cases. So when we say it's a scalar, we should say this is the ordinary scalar, and this is called a pseudoscalar. Yeah. We just want to distinguish these two cases. Yeah. Other questions? Yes? AUDIENCE: So if [INAUDIBLE] HONG LIU: Yeah. AUDIENCE: [INAUDIBLE] HONG LIU: Yeah. So it's getting complicated. You can try to invent eight names for these-- there are eight possibilities. And people normally don't bother to invent eight names for it. And since parity is the one often used, people just talk about parity, OK? They just define it this way. We normally don't give a name, let's say, in this case. Yeah. You just specify whether it's 1 or minus 1. And the reason we give a name for this is because of the pion. Pions are pseudoscalars, OK? They actually transform under parity with a minus sign. The Higgs would be an ordinary scalar that transforms as 1. Good? So if I have this expression-- so by requiring UP phi x UP dagger to equal that thing-- eta P phi Px-- we can work out how UP acts on a and b, OK? We can work out how UP acts on a and b.
So you find that the parity acting on ak UP dagger equals the same eta P factor. And then it takes it to a minus k. OK? So this makes complete sense because we interpret the k as momentum, right? So under reflection of a spatial direction your momentum also changes direction. And so it takes ak to a minus k. And then, similarly, you can work out the UT acting on ak UT dagger. It gives you eta T, also a minus k. OK? And similarly, with the bk. OK? So this also makes sense. When you reverse your time direction, your momentum also changes sign. OK? Your momentum also changes sign. So now the charge conjugation is a little bit more interesting. OK? So the UC for the charge conjugation is a little bit more interesting. So the charge conjugation takes UC ak UC dagger equal to eta C times bk. OK? And similarly, it takes the bk to the ak. So you see that what charge conjugation does is to exchange particle and antiparticle, OK, exchange particle and antiparticle. Good. Any questions on this scalar field? Good. So now let's look at the-- with this scalar field story, now we can look at a Dirac field, which is more intricate, OK, which is more intricate. So let's write down the-- so let's forget about the previous thing. Let's write down the action for the Dirac theory. OK? And then let's also write down its equation of motion. OK. Now let me call this equation star and erase the earlier star. So now let's first understand how the parity should act on the Dirac field. OK? So now, actually, everything with the Dirac field story becomes tricky. OK? It becomes tricky. So naively, if you wanted to just try to generalize the scalar story, we say, let's imagine the Dirac field should transform. OK? I'm just-- to say for the parity, for example, psi Px, OK, with some phase, OK, maybe with some phase. So we want to naively generalize the scalar story, that would be what you do. OK? That should be how the parity transforms. But now remember psi is a four-component object.
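In symbols, the mode transformations just stated are (with the same phases eta as above):

```latex
U_P\,a_{\mathbf k}\,U_P^\dagger = \eta_P\,a_{-\mathbf k},
\qquad
U_T\,a_{\mathbf k}\,U_T^\dagger = \eta_T\,a_{-\mathbf k},
\qquad
U_C\,a_{\mathbf k}\,U_C^\dagger = \eta_C\,b_{\mathbf k},
```

with the same pattern for the b modes: parity and time reversal flip the momentum, while charge conjugation swaps particle and antiparticle operators.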
Nobody tells you that, under parity, the different components of this four-component object cannot get reshuffled. OK? Remember, under Lorentz transformations, they do get reshuffled. OK? In Lorentz transformations, remember that for a spinor the Lorentz transformation not only acts on x but also on the internal space of the spinors. So you would expect the most general parity transformation should also act on the internal space. OK? So we would expect the phase is not enough. So we actually have to put a matrix here, and that's called D. And whatever phase will be absorbed in this D. OK? Of course, it will also include the phase. And yeah. So for notational convenience, I will also write it as psi prime x prime equal to D psi x. OK? So the x prime equal to Px. OK? So these two are the same equation. OK? So you just write, D is a matrix acting on spinor space. OK? So in what sense do we say the Dirac theory is parity invariant? OK? Let me just formulate it in terms of the equation of motion. You can also equivalently reformulate it in terms of the action. OK? Yes? AUDIENCE: [INAUDIBLE] HONG LIU: Oh, we will talk about the behavior. This is a good question. It does not have to. And we will talk about the property of T. Yeah. Do you have other questions? Good. So what do we mean by the theory is parity invariant? OK? By the theory-- we say the Dirac theory is parity invariant-- we say the Dirac theory has a parity symmetry-- if you can always define a parity operation. So when we say the Dirac equation has a parity symmetry, it means that if we start with a psi which satisfies the Dirac equation, then there should exist such a D, such a matrix, such a transformation, so that this transformed psi still satisfies the Dirac equation but in the reflected space. OK? So you want to reflect. So x prime goes right into the reflected space. And when we say the theory has a parity symmetry-- it means in the reflected space you should have the same equation. OK?
You should have the same form of the equation. So that means that this psi prime, when you write it in the reflected space, should have the same form of equation as the original Dirac equation. So this is the same concept as the Lorentz covariance we described earlier. OK? Yes? AUDIENCE: Why doesn't the parity transform the gamma matrices? Because we changed d mu but not the gamma? HONG LIU: Yeah, yeah. The gamma matrices are just some numbers. Right? Because the gamma matrices are always just some numbers. They're not spacetime variables. And so they don't change under the parity. Other questions? Yes? AUDIENCE: Is there any reason why D has to be a matrix? Why does it have to be linear? HONG LIU: No, no. D does not have to be a matrix. It's just that D can be a matrix. But we want to find the D so that we have this. But D can be the identity. AUDIENCE: Could it be some complicated function, like non-linear? Because we already allowed for non-linear, like, time reversal operators [INAUDIBLE] HONG LIU: No, no. The time reversal is linear, right? Time reversal is linear. AUDIENCE: Antilinear. HONG LIU: Yeah. The action of a-- normally, the action of a-- yeah, this kind of-- just goes to psi itself. But you can-- yeah. You're asking why we don't use a psi squared term? Or you are asking whether the D can depend on spacetime? Which one are you asking? AUDIENCE: Yeah. I mean, psi-- like, some arbitrary function of psi. HONG LIU: Yeah. Yeah, but this is the simplest one. Yeah. Yeah, if we can make this work, then you don't have to do that. You do the simplest one first. Also, we do believe that this kind of parity operation should be a linear operation. Yeah. Yeah, if you add two operators together and you do the parity, they should still be summed together after you do the parity. If it's a non-linear function, then you won't satisfy that property. Yeah.
Yeah, just from our experience, the-- yeah, just physically, we expect the parity to be a linear operation. Good? OK. So we want to find the D so that this is satisfied. OK? So that means that the-- so this is easy. So now let's just look at this equation a bit. OK? So let's write it more explicitly. So partial mu prime-- yeah. So we write this explicitly. So partial 0 prime is the same as the original partial 0 because you don't flip the time direction. OK? And then you flip the spatial direction. So that means this becomes minus gamma i partial i. And so I remove the prime, but I have a minus sign here, OK, because you've changed signs. And the minus m-- and this one from our definition should be equal to this. So this is just equal to D psi x equal to 0. OK? So now if you compare with this equation-- so now if you compare with this equation, now let's act with D on this equation. So that corresponds to D times gamma 0 partial 0 plus gamma i partial i minus m, psi x equal to 0. OK? So this is the original equation. And now we want these two equations to be equivalent. OK? Because given this equation, we should have that equation. OK? And so we want these two equations to be equivalent. So in order for them to be equivalent, you see the last term is the same, because the m is just a number that commutes with D. And then in order for these two equations to be the same, it means that D should commute with gamma 0 but anti-commute with gamma i. OK? Because when you pass it through here to bring it to that side of gamma i, you should change the sign. OK? So now you can immediately write down such a matrix. D has to commute with gamma 0 and anti-commute with gamma i. So it has to be proportional to gamma 0. OK? And then you can put the phase here. And so this is how it transforms. So now you can further constrain eta P.
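As a quick sanity check of this conclusion, here is a small pure-Python snippet (our own illustration, not part of the lecture) verifying in the Dirac representation of the gamma matrices that D = gamma^0 indeed commutes with gamma^0 and anti-commutes with each gamma^i. The representation is an assumed convention on our part; the argument in the lecture is representation-independent.

```python
# Dirac representation (an assumed convention): gamma^0 = diag(1,1,-1,-1),
# gamma^i built from the Pauli matrices as [[0, sigma_i], [-sigma_i, 0]].

def mul(A, B):
    """4x4 complex matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def commutator(A, B):
    AB, BA = mul(A, B), mul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(4)] for i in range(4)]

def anticommutator(A, B):
    AB, BA = mul(A, B), mul(B, A)
    return [[AB[i][j] + BA[i][j] for j in range(4)] for i in range(4)]

def is_zero(A):
    return all(abs(A[i][j]) < 1e-12 for i in range(4) for j in range(4))

g0 = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, -1, 0], [0, 0, 0, -1]]
g1 = [[0, 0, 0, 1], [0, 0, 1, 0], [0, -1, 0, 0], [-1, 0, 0, 0]]
g2 = [[0, 0, 0, -1j], [0, 0, 1j, 0], [0, 1j, 0, 0], [-1j, 0, 0, 0]]
g3 = [[0, 0, 1, 0], [0, 0, 0, -1], [-1, 0, 0, 0], [0, 1, 0, 0]]

D = g0  # the parity matrix, up to the phase eta_P
assert is_zero(commutator(D, g0))          # [D, gamma^0] = 0
for gi in (g1, g2, g3):
    assert is_zero(anticommutator(D, gi))  # {D, gamma^i} = 0
```

Since D must commute with gamma^0 and anti-commute with all three gamma^i, proportionality to gamma^0 is the only option, exactly as stated in the lecture.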
But here, the story is a little bit subtle. We are not going through there. So you said the fermion-- remember, as we already discussed in your quantum mechanics, when you have a spin-1/2 particle, when you rotate 360 degrees, it actually doesn't have to go back to itself. It can go back with a minus sign. OK? And so here-- and so yeah. And so here, there's an intricate story you can discuss about what are the allowed eta P, etc. And so we will not go into that. But I think you know the basic idea. OK? Good. Any questions? Yes? AUDIENCE: So couldn't this, if you want these equations to be equivalent, be satisfied with gamma 0 real and gamma i pure imaginary, because of this antilinearity? HONG LIU: Wait. What do you mean by-- no, no, no. These equations should apply for any gamma 0, for any gamma i. AUDIENCE: Right. [INAUDIBLE] HONG LIU: No, you cannot. Yeah, yeah. Because for parity, it should work in any representation. AUDIENCE: OK. HONG LIU: OK? So this is the story for the parity. So it's a completely similar idea for charge conjugation and time reversal for fermions. And so I will do it a little bit faster. OK? So I will do it a little bit faster-- mostly, again, just outlining the same idea, but then just writing down the results. So for the charge conjugation-- now let's do the charge conjugation. Did I say-- yeah, sorry. Here, I should put a label here with parity, because we are doing parity here. So now let's do the charge conjugation. So for the scalar case, for the charge conjugation, we just take the phi star. Again, so for the Dirac field, for psi prime, again, this should be proportional to psi star, OK, to psi star. But a simple way to construct psi star is you take the psi-bar. OK? And then you take a transpose. OK? And so this is some linear combination of the psi star. And also, from what we discussed before, you should allow a matrix here. OK? You should allow a matrix here. And the C, again, is a matrix in the spinor space. OK?
And now, again, we need-- so the statement of the charge conjugation is the statement that there exists a C such that, given equation star, psi prime will satisfy the same equation. OK? psi prime satisfies the same equation. OK? So yeah. So we just do it. OK? So we just start from here. So since psi prime is related by taking the bar and the transpose-- so let's then first do the bar for this equation and then do the transpose. OK? So when you do the bar operation on that equation, the star, then the bar equation gives [INAUDIBLE] psi-bar-- yeah, you can easily check yourself-- equal to 0. OK? Now you do a transpose. And then you get gamma mu t partial mu plus m, psi-bar t. OK. So the psi-bar t is just equal to C minus 1 psi prime x, equal to 0. OK-- because from here, by definition. OK? So now you want this equation to be equivalent to that equation. OK? So as we did before-- a similar idea to that. OK? You just act with C minus 1 here. OK? So in the end-- so from here, from the equivalence of these two equations, you conclude that gamma mu t should be related to minus gamma mu by this C matrix. So if you can find a C satisfying this equation, then they are equivalent. OK? So does such a matrix C exist? So we can say such a matrix C actually always exists, for the following reason. Because you can check, given that gamma mu satisfies this Clifford algebra for gamma matrices, that minus gamma mu t also satisfies the same algebra. OK? You can easily check that. So that means the minus gamma mu t, OK, is also another allowed set of gamma matrices. OK? They satisfy the same algebra. And as we discussed before, mathematically, we can prove all representations of the gamma matrices are equivalent. And that means there must exist some matrix C which relates these two. OK? C must exist. OK? So this tells you C must exist.
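To make the existence concrete: in the Dirac representation, one explicit choice is C = i gamma^2 gamma^0. This particular form is representation-dependent and is our own illustration; the lecture only uses the fact that some C exists. The snippet below checks numerically that C^{-1} gamma^mu C = -(gamma^mu)^T for all four gamma matrices.

```python
# Pure-Python 4x4 matrix helpers, Dirac representation (an assumed convention).
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transpose(A):
    return [[A[j][i] for j in range(4)] for i in range(4)]

def scale(c, A):
    return [[c * A[i][j] for j in range(4)] for i in range(4)]

def close(A, B):
    return all(abs(A[i][j] - B[i][j]) < 1e-12 for i in range(4) for j in range(4))

g0 = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, -1, 0], [0, 0, 0, -1]]
g1 = [[0, 0, 0, 1], [0, 0, 1, 0], [0, -1, 0, 0], [-1, 0, 0, 0]]
g2 = [[0, 0, 0, -1j], [0, 0, 1j, 0], [0, 1j, 0, 0], [-1j, 0, 0, 0]]
g3 = [[0, 0, 1, 0], [0, 0, 0, -1], [-1, 0, 0, 0], [0, 1, 0, 0]]

C = scale(1j, mul(g2, g0))
Cinv = scale(-1, C)  # in this representation C*C = -identity, so C^{-1} = -C

for g in (g0, g1, g2, g3):
    # C^{-1} gamma^mu C should equal -(gamma^mu)^T
    assert close(mul(Cinv, mul(g, C)), scale(-1, transpose(g)))
```

This is exactly the statement that minus gamma mu transpose is another representation of the Clifford algebra, related to the original one by the similarity transformation C.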
So now it's not so easy to write down the C explicitly in this case. OK? So here, we could write the expression for D regardless of the choice of gamma matrices. And here, it's actually not so easy to write the explicit form of C. OK? So of course, for a specific choice of representation of the gammas, you can write down C. OK? But it's enough for us to know that C exists. OK. And then this is the transformation for the charge conjugation. OK? It transforms this way, with C satisfying this condition. OK? C satisfying this condition. Are there any questions on this? OK. Good. And then we'll talk about time reversal. The idea is, again, similar. So now we use UT. OK? So now suppose the psi prime is equal to, again, some matrix times psi Tx. OK? This is the direct analog of this transformation. OK? And it's the same as psi prime x prime equal to DT psi x. And now x prime is equal to Tx. OK? You just flip the time direction. So now, again, the statement of the time reversal is that, given star, you again have gamma mu partial mu prime minus m acting on psi prime x prime equal to 0. OK? Again, in this new frame, equal to 0. OK. And then the story will be similar. OK? And then, yeah, you just again try to match, starting from that equation. And then you try to match those equations. OK? So as I said, we'll not go into detail. And in the end, you find that DT should satisfy this equation. OK? So you find DT should satisfy this equation. Gamma mu t-- again, gamma mu transpose is equal to DT transpose gamma mu DT minus 1 transpose. OK? So gamma mu t has to satisfy this equation. OK? So now if you compare these two equations-- that and this equation-- they just differ by a minus sign. OK? They just differ by a minus sign.
So we conclude that we can do this by taking DT transpose equal to C minus 1 gamma 5. So the gamma 5 will generate this minus sign. When you commute it with gamma mu, that will account for the extra minus sign. OK? That will account for the extra minus. So if you know C, then you also know this time reversal matrix. OK? Yes? AUDIENCE: [INAUDIBLE] HONG LIU: Yeah, yeah. Yeah, because only the operator itself, only U, is anti-linear. Right? DT is just a specific matrix acting on the spinor space. And so that's just an ordinary matrix. Yeah. Yeah, DT is just ordinary numbers. It's made up from the gamma matrices. Yeah. Other questions? Yes? AUDIENCE: So does parity correspond to reflections in space? HONG LIU: Yeah. AUDIENCE: And time reversal-- is that a reflection in time? HONG LIU: Yeah. AUDIENCE: So can you boost in spacetime? If you make a Lorentz boost, could space reflections then-- HONG LIU: No. No, they cannot relate these two. They are independent, discrete transformations. They're not related to a boost. Yeah. They are not related by a boost. Yeah, if they were related by a boost, then we would only need to worry about one of them. Because the boost we already understood. Yeah. Yeah. So they are independent discrete transformations. They are not related by any other continuous transformation. Other questions? Yes? AUDIENCE: There is only one parity transformation [INAUDIBLE]. HONG LIU: Yeah, that's right. Yeah, yeah, yeah, yeah. Just in any dimension, you have just one. Yeah. Good. Good. OK. Yes? AUDIENCE: Yeah, I don't know if this makes sense. But is it possible to have a situation where in one frame a quantity is conserved through parity or time reversal transform, but then in a Lorentz boosted frame it's different than what [INAUDIBLE] or if it's conserved in one, it's conserved in all? HONG LIU: Yeah. Yeah. If it's conserved in one, it's conserved in all of them.
Yeah, it's because the conservation equation is a covariant equation. Yeah, because the conservation equation is partial mu J mu equal to 0. So this equation has the same form in any frame. Yeah. Yeah, but the charge is different in different frames. Yeah, they're related by the transformation. Just in any frame, you can define a conserved charge. Yeah. Good? OK. So now let's move on to the next topic. So now, finally, we'll talk about path integrals for fermions. And with path integrals, then we can do interactions. OK? Remember, when we know how to do path integrals for the scalar, then we can easily do interactions. And now let's do the path integral for the fermion. And then that will give us a very simple way to treat interacting theories. OK? So let's just recall the path integral we considered before-- so if you have ordinary quantum mechanics-- so you have the x and p commutator equal to i. And then in the path integral, you just integrate over all configurations of xt. OK? So that's what you do in quantum mechanics. So now xt is just an ordinary function. OK? You just integrate over all possible trajectories. And xt is just an ordinary function. So now when we go to quantum field theory, then we have phi-- yeah, yeah, yeah. So let me just put a hat just to emphasize that this is an operator here. And the phi-- we have tx. And then we have pi phi, the conjugate momentum. And again, the commutator is equal to i times the delta function. OK? And then when you do the path integral, you just integrate over all configurations of phi xt. And now this is just a classical field. Now, essentially, you view this just as an ordinary function. OK? You integrate over just all possible values of it, OK, all possible functions of it, OK, all possible functions of phi. So now we have to do fermions. We have to do psi. So psi is a little bit weird.
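Schematically, the setup being recalled is:

```latex
[\hat x, \hat p] = i,
\qquad
Z = \int \mathcal{D}x(t)\, e^{iS[x]};
\qquad
[\hat\phi(t,\mathbf x), \hat\pi_\phi(t,\mathbf y)] = i\,\delta^3(\mathbf x - \mathbf y),
\qquad
Z = \int \mathcal{D}\phi\, e^{iS[\phi]},
```

where the integration variables x(t) and phi(t, x) are ordinary commuting functions, in contrast to the fermionic case about to be discussed.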
Because as we discussed before, for the psi we no longer have a commutator. You actually have an anti-commutator. OK? So what do you do? OK? Do you still just follow the same rule, or do you actually have to change the rule? OK? So that actually was a question which-- so Feynman actually-- he came up with this idea of the path integral in 1948. And then, of course, the generalization to the scalar was immediate. OK? But then he wanted to generalize to fermions. But then for a long time, he couldn't do it. And if you just use this method for fermions-- it just didn't work. OK? We will not go through that exercise. If you do this for fermions, it just does not work. So for many years, he couldn't find the answer. But then around 1962-- I think it's 1962 or 1965. Anyway, just around the early 1960s, a Soviet physicist actually came up with a brilliant idea to solve this problem, OK-- like, 15 years after the discovery of the path integral. And so the basic idea is that when you have a Dirac field-- so when you have a Dirac field, then you have an anti-commutation relation. psi and pi psi satisfy an anti-commutation relation. OK? It's an anti-commutator. And then this guy called Berezin-- Berezin then postulated that what appears in the path integral should be an analog of the classical version of this field, OK, the classical version of this operator. So then what would be the classical version of this anti-commutator? And he just said they should be some quantities which anti-commute. OK? So when we say classical, we put it in quotes because we only do this in the path integral. And then, of course, it corresponds to anti-commuting fields, objects. OK? And then when you do the path integral, you just integrate over psi. But psi is an anti-commuting object. OK? It turns out that this just brilliantly solves the problem. OK? It just immediately works. It's very simple. So in hindsight, it's extremely simple.
But actually, Feynman couldn't figure it out himself. OK. [LIGHT CHUCKLE] Right. Anyway, so now let's talk a little bit about these anti-commuting objects. OK? So these are called Grassmann variables or numbers. So these Grassmann objects just anti-commute. OK? They even anti-commute with themselves. So if theta anti-commutes with itself, then the only thing you can have is that theta-squared must be 0. Because theta-squared should be equal to minus theta-squared. And then, of course, it can only be 0. And if you have two such objects, theta and eta-- then theta eta is equal to minus eta theta. And of course, eta-squared should also be 0. OK? So this kind of property makes these kinds of objects very simple. So now we can also talk about functions of such objects. OK? We can also talk about functions of such objects. Say, if we have a function f of theta-- OK? So we define the function in terms of the power series. So suppose that we have an x-- say, if you write down a Gaussian function or whatever function, we define this function by the power series. So the power series means that at zeroth order, we have f0, corresponding to theta equal to 0, and then you have f1 theta. OK? But now if you look at f2 theta-squared-- that would be 0. So all higher-order terms would be 0. So you only have two terms. OK? So the only function of this variable will have two terms, the constant term and the term proportional to theta. And then the functions of such variables are very simple. OK. And if you have two variables, theta 1 and theta 2, then you just expand until you encounter-- you have f0 plus f1 theta 1 plus f2 theta 2, then plus f12 theta 1 theta 2. That's it. OK? That's it. Because any other term would involve either theta 1-squared or theta 2-squared, and that would be 0. So the differentiation will be extremely simple.
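A minimal sketch of this algebra in code (our own illustration; the class name and representation are ours, not from the lecture): an element is stored as a map from a sorted tuple of generator indices to a coefficient, each swap needed to sort a product contributes a factor of minus one, and any repeated generator kills the term, which is exactly theta-squared equal to zero.

```python
def _sort_sign(seq):
    """Sort generator indices, tracking the sign from adjacent swaps."""
    seq, sign = list(seq), 1
    for i in range(len(seq)):
        for j in range(len(seq) - 1 - i):
            if seq[j] > seq[j + 1]:
                seq[j], seq[j + 1] = seq[j + 1], seq[j]
                sign = -sign
    if len(set(seq)) != len(seq):  # repeated generator: theta^2 = 0
        return None, 0
    return tuple(seq), sign

class Grassmann:
    """Element of a Grassmann algebra: {sorted generator tuple: coefficient}."""
    def __init__(self, terms=None):
        self.terms = {k: v for k, v in (terms or {}).items() if v != 0}

    @staticmethod
    def gen(i):
        return Grassmann({(i,): 1})

    def __add__(self, other):
        out = dict(self.terms)
        for k, v in other.terms.items():
            out[k] = out.get(k, 0) + v
        return Grassmann(out)

    def __mul__(self, other):
        out = {}
        for t1, c1 in self.terms.items():
            for t2, c2 in other.terms.items():
                key, sign = _sort_sign(t1 + t2)
                if key is not None:
                    out[key] = out.get(key, 0) + sign * c1 * c2
        return Grassmann(out)

    def __neg__(self):
        return Grassmann({k: -v for k, v in self.terms.items()})

    def __eq__(self, other):
        return self.terms == other.terms

theta, eta = Grassmann.gen(0), Grassmann.gen(1)
assert (theta * theta).terms == {}    # theta^2 = 0
assert theta * eta == -(eta * theta)  # theta and eta anti-commute
```

With two generators, a general element has exactly the four terms f0 + f1 theta1 + f2 theta2 + f12 theta1 theta2, matching the expansion in the lecture.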
So if you're taking d f theta, d theta-- so you just-- because this is the constant. And so this is just equal to f1. OK. But the one thing-- you do have to be careful when you have this multiple-variable function. And then it actually matters whether you take the derivative from the left or take the derivative from the right. OK? The direction you take the derivative becomes important. OK? So for example, if we take the derivative-- yeah. Let me do it here. So if you, from the left, take the derivative d d theta 1 on this f theta 1, theta 2, then this term is 0. This term gives us f1. This term does not depend on theta 1, and so this term is also 0 when we take the derivative. And here, we have theta 1. So we just get f12 theta 2. OK? So this is the derivative from the left. But now if I take the derivative from the right, theta 1, OK-- so this arrow means you take the derivative from the right. And again, the same thing for this one. It doesn't matter. There's only one theta 1. So you just get f1. Again, these two will be 0. But now if we take the derivative from the right, then you have to move theta 1 to the right of theta 2 first, because there's a theta 2 here. And then that gives you a minus sign. So it will give you minus f12 theta 2. So you differ by a minus sign. OK. So from now on, without specifically mentioning it, we always take the derivative from the left. OK? So the convention is we always take the derivative from the left. Yes? AUDIENCE: [INAUDIBLE] HONG LIU: Sorry? AUDIENCE: [INAUDIBLE] coefficient [INAUDIBLE] f0 [INAUDIBLE] HONG LIU: What do you mean? Like, what type of object? AUDIENCE: Like, [INAUDIBLE] HONG LIU: Oh, yeah, yeah. They can be complex numbers. They can be real numbers. f0, f1 are just some numbers. Yeah. For example-- yeah. So suppose f is a Gaussian function. OK? And then you just write-- yeah, yeah. Say exponential theta. It's 1 plus theta. OK. The exponential is just-- just do the Taylor expansion in theta.
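The left-versus-right bookkeeping above can be summarized as (notation as in the lecture):

```latex
f(\theta_1,\theta_2) = f_0 + f_1\theta_1 + f_2\theta_2 + f_{12}\,\theta_1\theta_2,
\qquad
\frac{\overrightarrow{\partial}}{\partial\theta_1} f = f_1 + f_{12}\,\theta_2,
\qquad
f\,\frac{\overleftarrow{\partial}}{\partial\theta_1} = f_1 - f_{12}\,\theta_2,
```

the relative minus sign coming from moving theta 1 past theta 2 before differentiating from the right.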
And the f0 and f1 just come from the Taylor expansion of whatever function you're looking at here. OK. Yeah, similarly-- the same thing. Exponential of minus theta-- you have a function like that-- is just 1 minus theta. Yeah, it's very simple. OK? So the functions of these variables are very simple. OK. Good. So we talked about the functions. We talked about derivatives. And then we need to talk about the integrals of them, OK-- so how to define integration. So now let's talk about how to do integration. So how do you do integrals? Because to be able to do path integrals, we need to integrate over such objects. OK? So to do integration, then we have to specify the rules for integration, because it's no longer intuitive like a derivative. You can't just take the derivative. OK? So for integration-- we have to specify the rules. And then the idea is you take the standard rules for integration and then translate-- apply them to this case. OK? So the rules we are going to require for the integration are the following. So the first rule is that the integration should be linear. OK? So if I have, say, an integration of-- I have two functions, f theta times a plus g theta times b. And a and b are some numbers. They can either be ordinary C-numbers, or they can also be Grassmann numbers, which anti-commute. OK? But the integration operation is linear. This means that this is the same as the integral d theta f theta times a-- where a is just a constant, so you can just take it outside of the integral-- plus the integral d theta g theta times b. OK? So the operation should be linear. And the second rule which we require for such integration is that the integral of a total derivative should be 0. OK? The integral of a total derivative should be equal to 0. OK? And it turns out that these two conditions are enough to actually fix the integral uniquely. OK?
Yeah, it just can be used to fix the integral. Yes? AUDIENCE: [INAUDIBLE] in not all derivatives [INAUDIBLE] when the boundary [INAUDIBLE]? HONG LIU: Yeah. Here, we don't have a boundary. It's not required to be 0. Yeah, so theta-- yeah, we don't know how to specify the boundaries for theta. Yes? AUDIENCE: How does [INAUDIBLE] multiply by it [INAUDIBLE] HONG LIU: Yeah. You say this one? AUDIENCE: No, [INAUDIBLE] symbol. HONG LIU: Yeah. Sorry. Say it again? AUDIENCE: Like, suppose [INAUDIBLE] HONG LIU: Yeah, yeah, yeah, yeah, yeah. Indeed, it does not determine-- yeah, it only determines up to a constant. Yeah. Yeah, but then you can fix that constant. Yeah, I'm going to talk about it immediately. Yeah. Good? So we have these two conditions. So let's see what these two conditions tell us. OK? So from condition two, we conclude that the integral d theta, d f d theta is equal to 0-- and the derivative of f theta is just f1. OK? We write-- yeah, just f1. And since f1 is a C-number, we can take this f1 outside of the integral. So that means this is equal to 0. So that means that the integral d theta 1 is equal to 0. OK? So just d theta of any constant-- it should be 0. OK. Just any constant, it should be 0. OK? So this is one condition-- one thing we deduce. OK? So now let's call this property three-- this one follows from one and two. So now remember the property f theta equal to f0 plus f1 theta-- so that means the integral d theta f theta is equal to the integral d theta, f0 plus f1 theta. OK? So that means-- so if we use property one, you can take these constants outside. OK? And the first term is 0. So you just get the second term. You just get f1 times the integral d theta theta. So now if we just fix the value of this object, then we fix the full integral. OK? And so you just fix this object to be 1. So we just fix-- define it to be 1. OK? Defined up to a constant, we just define it to be 1. OK. So that's it. So the rule is now fully specified.
And then we conclude that the integral d theta f theta is equal to f1. So now you notice-- yeah, so this is the rule you should remember. So now you notice that this is the same as taking the derivative with respect to theta. OK? So this story is a little bit funny. OK? So the integration of a function is the same as the derivative of the function. OK. OK. That's it. So if the calculus for ordinary variables were this simple-- [CHUCKLES]-- when we learned calculus, it would have been much, much easier. But yeah. Yeah. So these Grassmann variables are very simple. OK. Any questions on this? Yes? AUDIENCE: So what's the point in defining integration in this way if it ends up just being the same as the derivative? Is it useful to have both integration and derivatives if they do the same thing? HONG LIU: Yeah. So this is for a single variable, right? When you go to multiple variables, then things are a little bit more intricate. Yeah. Yeah, but this rule will apply. This integration rule is general. Yes? AUDIENCE: Why don't you define the integral as the opposite of the-- like, the reverse of the derivative? Because now, for example, integrating and taking the derivative [INAUDIBLE] HONG LIU: That's right. Yeah, you just don't know how to do that inverse operation anymore. AUDIENCE: OK. HONG LIU: Yeah. Just, those kinds of objects are sufficiently weird. Yeah, it's not easy. Yeah. So the basic idea, I think-- so you say, why didn't Feynman come up with this? Because if you just say, oh, I have to do anti-commuting, that's not enough. You have to invent the whole calculus for this kind of anti-commuting object. And now the question is, how do you invent such a calculus so that it's as natural as possible compared to the standard one? Yeah, yeah. And so this is a set of rules which seems to work very well. OK. So yeah. Yeah. So when we say that Feynman missed it, it's not that he missed something that's just a trivial idea. He actually missed the whole calculus. OK.
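A tiny sketch of these rules in code (the representation and names here are ours, not from the lecture): a function of one Grassmann variable is fully described by the pair (f0, f1), and the Berezin integral picks out f1, exactly like the derivative.

```python
# f(theta) = f0 + f1*theta, stored as the pair (f0, f1); theta^2 = 0
# truncates every power series after the linear term.

def d_dtheta(f):
    """(Left) derivative: d/dtheta (f0 + f1*theta) = f1."""
    f0, f1 = f
    return f1

def berezin(f):
    """Berezin integral, fixed by: int dtheta 1 = 0, int dtheta theta = 1."""
    f0, f1 = f
    return f1

# Example from the lecture: e^{a*theta} = 1 + a*theta, since the Taylor
# series truncates after the linear term.
a = 3.0
exp_a_theta = (1.0, a)

assert berezin(exp_a_theta) == a
assert berezin(exp_a_theta) == d_dtheta(exp_a_theta)  # integration = differentiation
```

The two rules of linearity and vanishing total derivatives leave only one number undetermined, the value of int dtheta theta, which is fixed to 1 by convention; everything else follows.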
So now let's just make a couple of remarks. So suppose eta is another Grassmann variable. OK? It means that it also anti-commutes. So now if we put eta before f theta under the integral-- OK? Now, because eta has nothing to do with theta, you should be able to take it outside of the integral. But different variables anti-commute. So when you take eta outside, the rule is that, as it commutes through this d theta, you get a minus sign. So you just get minus eta times the integral d theta f theta. OK? So this becomes just minus eta f1. OK. So you can show that this rule, which I'm saying here, is also compatible with this rule. OK? It's also compatible with this rule, because you can also commute the eta with f inside of the integral, and then you can use this rule. And you can show these two rules are compatible. OK? So this is the first remark. The second remark is about how to do a change of variables. So now let's consider an integral like this-- d theta, f of a theta. a is just some number. OK? So you can just expand this trivially. So this just gives-- you just find f1 times a. OK? Just f1 times a. OK. And of course, f1 times a is also the same as a times the integral d theta prime f theta prime, if theta prime is a theta-- because it's a dummy variable. It doesn't matter. OK? So this tells you-- so then we compare these two. OK. So this is like a change of variable. OK? Yes? AUDIENCE: So if eta has to obey anti-commutation with d theta, why doesn't theta have to obey anti-commutation with [INAUDIBLE]? HONG LIU: Yeah. Theta does too. Yeah. AUDIENCE: But then why [INAUDIBLE]?? HONG LIU: OK. But d theta and theta are two different objects, right? They anti-commute against each other. But this d theta means the infinitesimal variation of theta, even though we cannot really quantify what this infinitesimal variation means. But these two are different objects. Yeah. So their product is not-- yeah. OK. Good.
So this tells you-- so normally when we do a change of variable-- so this means that when you do the change of variable theta prime equal to a theta, then d theta prime is actually equal to 1 over a, d theta, OK, if you compare these two. So now, again, this is opposite to the standard story. OK? So d theta prime is actually equal to 1 over a, d theta. OK. OK. So let's conclude here. Yeah. So our next lecture is now next Wednesday because Monday is a holiday. Yeah, it's going to be-- unfortunately, I hope you still remember this when we get to next Wednesday.
MIT_8323_Relativistic_Quantum_Field_Theory_I_Spring_2023 | Lecture_13_Introducing_the_Dirac_Equation.txt | [SQUEAKING] [RUSTLING] [CLICKING] HONG LIU: So at the end of last lecture, so we discussed this LSZ theorem, which tells you how to obtain scattering amplitude from correlation functions, from time-ordered correlation functions. OK, so if you want to compute, say, some scattering amplitude from alpha to beta-- so alpha's some initial state and beta's some final state. Say alpha consists of momentum p1 and PN-- or pm, and beta, say momentum p m plus 1 and pn. And then you can get this scattering amplitude just by taking your momentum-space correlation function, OK, for the n points. So all together, you have an n points for the external momentum, OK, and then you take the on-shell limit. You take the on-shell limit, and then that gives you the product of the external propagators, then times the scattering amplitude. OK. So this is the relation, OK, so here I have stripped out the momentum conservation on both sides. Of course, momentum has to be conserved. Inside, the total momentum has to be zero. And so this limit is the on-shell limit, and in the on-shell limit, the initial momentum-- OK, actually, I should call it minus p. Sorry, it's minus pm. So the initial momentum, for those with the initial momentum, you take p1 to be-- p1 0 to have minus omega p1, OK? For the final momentum-- so p alpha. For the final momentum, you take the p beta 0 goes to omega p beta, the final, OK, with the plus sign. OK, so that's how you distinguish the initial state from the final state, because when you obtain a correlation function, you don't distinguish what is the initial and the final state. You just have some momentum. This is a function of some arbitrary momentum. 
But the scattering amplitude, of course, those momentum are on shell, and so the way you distinguish the initial momentum and the final momentum is by taking the initial momentum, say to go to the negative root, and the final momentum to take the positive root. But since the alpha, it consists of minus p1, and then the initial state, you have positive energy, OK, you have positive energy. So this tells you that when we compute the scattering amplitude-- when we compute the scattering amplitude, we should take the-- we just take all the Feynman diagrams which you used to calculate these correlation functions, OK, and then you sum over-- you sum over, say, the truncated-- so you see the relation between these correlation functions and the scattering amplitudes. So they differ by this product. They differ by this product of the external propagators, OK, so for each external momentum-- so here there's a propagator, and just as if that-- when you get-- so this scattering amplitude was corresponding to this one with all external propagators stripped. OK, so that's why you consider the truncated diagram not including the external propagators. And you take it on shell. OK, and also, since we are interested only in the process-- which, all particles participate in the scattering process-- so we also only consider the connected diagrams. OK, consider the connected diagrams. OK, so this provides a simplification, so you get fewer diagrams and simpler expression than you would have got from calculating these correlation functions, OK, correlation functions. So any questions on this? Yes? AUDIENCE: So can you explain again why this diagram, like, you have one branch and then there's a loop? HONG LIU: Oh, yeah, yeah. Yeah, I will explain that, but before that do you have other questions? OK. So now I will explain a few things. OK, the first thing is this sign convention, OK, this sign convention. 
OK, so remember this Gn, so let's go back to the definition of this Gn, these momentum-space correlation functions. So this is obtained by doing a Fourier transform. Say-- I think it would be minus sign. OK, by doing the Fourier transform-- yeah, by doing a Fourier transform of your coordinate-space correlation function, the coordinate-space correlation function can be written as the following, OK, so you have phi x1 and phi xn. OK. So now, for those to go to the initial state, OK, for those to go to initial state, we start from alpha to beta, so then you want those phi corresponding to the initial state to act on the right, OK, to act on the right. And then you do the Fourier transform, OK, and then you do the Fourier transform. So now let's just consider one of them. Let's just say consider you have phi x for the initial state acting on the right, and then you do a Fourier transform. OK. OK, you do a Fourier transform. And so this, if you just record the mode expansion for phi and the phi contains a and a dagger-- and the a pieces acting on the zero will just give you zero, so only a dagger piece will survive. OK, and the a dagger piece is multiplied by-- say you will have, say, some k, and then have i omega k t plus minus. Yeah, essentially-- yeah, let me just write it in a simple way. You just have exponential i k x. OK, I think it's the exponential minus i k x, so you just get the exponential minus i k x. OK, with k it's the on-shell momentum, so k is given by omega k and k. OK. So when phi x acting on the zero, you keep the part which is corresponding to the a dagger, and then you get a piece proportional like this. OK, and now when you do the Fourier transform and then you find just p, then your p just equals to minus k. OK, so that's related to the minus sign there, OK, related to the minus sign there, and also, when your p equal to minus k, then that means p0 is equal to minus k. OK, so that's where that sign in the initial state come from. 
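The argument just made can be summarized in one line (standard free-field mode expansion; the normalization is schematic, and the Fourier sign follows the "minus sign" convention mentioned above):

```latex
\phi(x)\,|0\rangle \;\sim\; \int\! d^3k\,\frac{1}{\sqrt{2\omega_{\mathbf k}}}\,
e^{-ik\cdot x}\,a^{\dagger}_{\mathbf k}\,|0\rangle ,
\qquad k^0=\omega_{\mathbf k},
\qquad\Longrightarrow\qquad
\int\! d^4x\; e^{-ip\cdot x}\,e^{-ik\cdot x}\;\propto\;\delta^{(4)}(p+k),
```

so \(p=-k\), i.e. \(p^0=-\omega_{\mathbf k}\) for an initial-state leg, while the conjugate piece acting on the left gives \(p^0=+\omega_{\mathbf k}\) for a final-state leg.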
And the same thing with the final state, so for the final state, then you need to look at the phi x acting on the left to this, and then you do the Fourier transform. OK. You do the Fourier transform, and then in this case, this becomes-- on the left, it's the A part acting on the left, so this gives you k exponential i k x, OK, is the a acting on the left. And then when you do the Fourier transform, then here it gives p equal to k. OK, so that's why in the final state, you just have p equal to k, and then p0 is equal to just omega k. OK, so this explains the sign. So this explains the sign. It's just from whether you act on the initial state or act on the final state. OK. Good? So-- yes? AUDIENCE: Right, so time-ordering of x1 to xn right here? HONG LIU: Yeah. It-- of course, when you derive that, it matters, but here, for this argument, it doesn't matter. It just has to act to the right. Yeah, for the initial state, you have to act to the right. Yeah, of course, to derive that theorem, the time-order matters. OK. Yes? AUDIENCE: So this was just for free scalar field theory, that you showed that this is true? HONG LIU: No, this is not for the free scalar field theory. Yeah, it's the-- here I just detail you the sign convention. OK, you can do this for the full interacting theory. Yeah, just use the free scalar as the-- yeah, because when you go to plus or minus infinity, you can just reduce to the free particles. OK. Other questions? OK, good. So this is the first comment. The second comment, each side, we need to here-- from here, we need to truncate the external propagator. So we mentioned that if you have a diagram like this, OK, if you have a diagram like this, say minus P1, minus P2, to P3, P4, for a diagram like this, OK, and then the square root and amplitude are just given by minus i lambda, because you need to throw away all the external propagators. You don't have to worry about the external propagators. 
OK, you just have to truncate all the external propagators, and that means you also throw away diagrams like this. You can consider an arbitrarily complicated diagram, OK, as far as-- yeah, I don't draw very well, so this just touches at one point, OK? So all these diagrams, you can ignore them, because they just correspond to corrections to the external propagator-- they only touch the external leg, and you can do this on any of the legs, OK? So such diagrams only change the external propagator, but since we truncate the external propagator, they don't matter at all, and they're all included in this diagram. OK. So the reason is the following. All these diagrams do is modify the properties of the external propagator, and the only net effect all these diagrams can have on the external propagator is to give you an overall constant, OK, so that's what this Z corresponds to. So this Z essentially just captures all these corrections, OK? And since you have truncated them, you don't need to worry about them. OK. So Z also has an expansion, 1 plus order lambda, et cetera, so at leading order Z does not contribute-- you can just set it to 1-- and when you go to higher order, then Z can make a contribution. OK, so you just take the Z into account separately, and there's no need to calculate those diagrams separately. Yes? AUDIENCE: We also truncate diagrams with loops in two different legs? HONG LIU: Yeah, yeah, any of them. You can do any number of-- as far as they only concern external legs, it's fine, yeah, because any one of those corrections only concerns one leg. Yes?
AUDIENCE: But when calculating the actual, let's say, n-point function, we have-- we're supposed to include all of this? HONG LIU: That's right, yeah. When you calculate the n-point function, you have to include all this, but when you calculate scattering amplitude, you don't need to. AUDIENCE: Don't need to, all right. HONG LIU: Because in the scattering amplitude, you literally divide by the external propagator, and all of these things, they-- yeah, all these things, essentially, they just modify the-- give you the correction to the external propagator and include it in that constant Z, yeah. AUDIENCE: So is Z the same or different for different processes? HONG LIU: Yeah, Z is the same. No, Z is the same for different processes, but it's different for different particles. So here we only have one type of particles, so we only have one Z. But if you have two kinds of particles, then z is different for different particles. AUDIENCE: Oh, OK. So for example, for, let's say, a fixed process, let's say, like, 3 going to 3, Z would be constant for all choices of momentum, right? Am I correct to say that? HONG LIU: Yeah, yeah, yeah, it's all-- yeah, a constant for all of them, yeah. AUDIENCE: OK, yeah, because if it depended on momentum, then it would be kind of useless, right? HONG LIU: No, no, no, it does not depend on momentum. Yeah, this just-- correct. You see, all such things don't change the momentum. OK, the momentum don't change. Yes. Yeah. So we will not go into details of the Z. And that is in the QFT2, and so we will discuss how to calculate this z in QFT2. But the leading order, they don't matter, and so we will start with 1. So for our purpose, actually, it's not important. Yes? AUDIENCE: Is there a physical interpretation for what interactions are included in Z? HONG LIU: Yeah. 
Yeah, this is self-interaction. When you have an interacting theory, the particle can interact with itself: when the particle propagates, it can interact with virtual particles. It all comes from this kind of diagram. You can have a single particle, and you can have a diagram like this, and all these diagrams. Correspondingly, you have a particle propagating, but that particle can interact with its own virtual particles. OK, so this is the real external particle, but anything going around the loop you can imagine as a virtual particle which comes out from the vacuum, and then this can be interpreted as the particle interacting with its own virtual particle-- a virtual particle coming out from the vacuum. Yeah. And this kind of interaction will affect the properties of the propagation. It can change the overall prefactor, and it can actually correct the mass, too-- that's a subject of QFT2-- but as far as this constant goes, at most it changes the overall factor by Z. Other questions? Good. OK. Good. If you don't have other questions, let's conclude our discussion of chapter 3 on interacting theories. So as I mentioned before, you are now really equipped with the techniques-- in principle, you can now treat any interacting theory. Even though we just used the scalar theory, the technique is the same, OK? For a different theory you have different details, but you now have the foundation, the basic tools, for dealing with any interacting theory. And for the goal of this course, we want in the end to be able to calculate, say, interactions in quantum electrodynamics, and for that purpose we still need some other preparations. So now let's discuss how to describe fermions.
OK, so now we've described scalars and how scalars interact, and now we'll talk about fermions. OK. So let me say a little bit of history. Soon after quantum mechanics was proposed by Schrodinger and Heisenberg, et cetera, people tried to generalize it to the relativistic situation. So that's where this Klein-Gordon equation, which we discussed before, came from-- this was the first attempt to write down a wave equation for relativistic particles. OK, and we discussed before that this does not really make sense as relativistic quantum mechanics. But at the time, the Klein-Gordon equation, if you interpret it as a wave equation, suffered from some difficulties. One is that you cannot define a positive-definite probability. And the second difficulty is that it has negative-energy states. OK. So, as I mentioned before, there is a more fundamental reason that relativistic quantum mechanics does not make sense, but at the time, in the late 1920s, people didn't realize that. People just looked at those difficulties and thought they were technical difficulties. So Dirac proceeded to try to overcome those difficulties. OK, so I think the Klein-Gordon equation is from around 1926, and then in 1928 Dirac came up with this Dirac theory, this Dirac equation. OK, so the Dirac equation was aimed to cure those problems. Dirac concluded that the reason the Klein-Gordon equation had those problems was that it is second order in time derivatives. And he speculated that if we had an equation first order in the time derivative, just like the Schrodinger equation, then maybe both problems could be solved. And then Dirac came up with the Dirac equation.
So it turns out that the Dirac equation solved the first problem, OK, but didn't really solve the second problem. And again, due to more fundamental reasons, you cannot really interpret the Dirac equation as a wave equation for relativistic quantum mechanics. Actually, the Dirac equation should be interpreted as a field theory equation-- nowadays, that's how we interpret it. OK, so of course Dirac didn't know this, so essentially he discovered this beautiful theory for the wrong motivation. And yeah, this happens over and over again in physics: people made great discoveries often for the wrong motivations. But the key is that if you are good enough, you will find something new, and that something new will be useful. [LAUGHS] OK, and this Dirac theory is a prime example-- we will see, this is one of the most beautiful equations in mathematical physics. But also, it actually describes electrons, so it's not only beautiful, it's actually useful. OK, so first I'll introduce the Dirac equation and its covariance. So the best way to introduce the Dirac equation is still his original motivation: we want to find a first-order equation which is Lorentz covariant. OK, so the goal is to write down an equation, like the Schrodinger equation, which is first order in the time derivative, but which is Lorentz covariant. OK. So Lorentz covariant means that this equation has the same form when you go to a different Lorentz frame-- the form of the equation looks the same. Just different observers in different laboratories see the same equation, OK? So that's what we mean by Lorentz covariant.
OK, but for this to be Lorentz covariant-- remember, Lorentz transformations mix t and x-- you immediately conclude that H must be first order in spatial derivatives. OK. So then let's try to write the most general thing you can write. So H has to have the following form: minus i alpha dotted into the gradient-- so this is the gradient operator, the spatial derivatives, and the i is just for convenience. The gradient is a vector with three components, so it has to contract with something, and that's why we include the alpha. And then you can at most add a constant, OK, so for historical reasons let me write this constant as m times beta. OK. So if you look at this equation's form, you say this doesn't make any sense, OK? Alpha and beta would have to be some kind of constants, but if alpha and beta are constants, then this is not even rotationally invariant, not to mention Lorentz covariant, OK, because this derivative is not contracted with anything else carrying a Lorentz index. So if alpha is just some set of constants, this cannot be Lorentz covariant-- it cannot even be rotationally invariant-- if alpha and beta are constants and psi is an ordinary function. OK. So yeah, you have alpha x partial x, alpha y partial y, alpha z partial z, and you can easily convince yourself that when you rotate x, y, z, this is not symmetric, because alpha is some set of constants. And I'm sure this idea came to many people trying to look for some first-order equation which is Lorentz covariant, and then after five minutes you realize this is not possible. OK, it's just so simple-- it's just not possible. OK. But then Dirac made it work. OK, it's really a stroke of genius. It's really a stroke of genius because there was nothing like this before.
Just even from a mathematical point of view, it's purely imaginative, OK-- just nothing like this before. When Einstein wrote down his theory, et cetera, you can still trace some clues, OK, but this one from Dirac just really-- [LAUGHTER] --like music, just came out from his mind, OK? And then he reasoned: OK, if constants don't work, then let's make alpha and beta matrices. So let's say they are n-by-n matrices. Then, in order for that equation to make sense, psi has to be an n-component vector. OK. So even if you came up with this idea, you might not imagine this would work. OK, you would just say, oh, this will be a mess, but somehow he made it work. OK, so we will see how to make this work. So now, if you want H to be Hermitian, you can immediately conclude-- so that's why I put the minus i here-- that alpha and beta must be Hermitian matrices. So m is just some constant-- you can always take a constant out, OK-- so alpha and beta are just some constant Hermitian matrices. And then he reasoned that if we want this equation to be Lorentz covariant, then at least it should have relativistic plane waves as solutions. OK? If it does not even have that type of solution, then of course it cannot be covariant. OK, so before we really try to see how to make this into a covariant equation, let's consider the minimal requirement-- here, let me call this equation star. Star should have plane-wave solutions with the standard relativistic dispersion relation, OK? Namely, you should have p squared equal to minus m squared. OK, you have a plane wave.
The plane wave will be labeled by p, and then you should have p squared equal to minus m squared; then this m will be its mass, OK? And the simplest way to do this-- we know that the Klein-Gordon equation has such solutions. And since this is a first-order equation, imagine we square this equation: if the square reduces to the Klein-Gordon equation, then this property will be satisfied, OK? So this will be satisfied if the square of star reduces to the Klein-Gordon equation. OK, so now let's try to do this. So when we square star, we just act twice, so essentially you get partial squared over partial t squared of psi equal to H squared psi. OK, and then we try to make this of the Klein-Gordon form. So the right-hand side has the form minus i alpha dotted into the gradient, plus beta m, squared, acting on psi. OK, and then you can just expand this explicitly. So on the right-hand side-- let's square this first-- you have minus alpha i alpha j, partial over partial xi, partial over partial xj, psi. And then let's square this: you have beta squared m squared psi. And then you have the cross term. So the cross term has the form beta alpha i plus alpha i beta. But now remember, beta and alpha are not constants. They are matrices, OK, so they don't necessarily commute, so you have to be careful about the orders. OK, and this multiplies minus i m partial psi over partial xi. OK, so the right-hand side is just like this when you square it. And now we want to compare to the Klein-Gordon equation. The Klein-Gordon equation has the following form: so we want this, hopefully, to be given by minus partial xi squared psi plus m squared psi.
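Collecting the terms just listed (my own compact summary of the board algebra, in the lecture's mostly-plus conventions):

```latex
-\,\partial_t^2\psi \;=\; H^2\psi
\;=\;\Big[-\tfrac{1}{2}\{\alpha^i,\alpha^j\}\,\partial_i\partial_j
\;+\;\beta^2 m^2\;-\;i\,m\,\{\alpha^i,\beta\}\,\partial_i\Big]\psi ,
```

and demanding that this equal the Klein-Gordon form \(-\partial_t^2\psi = \left(-\partial_i\partial_i + m^2\right)\psi\) term by term forces \(\{\alpha^i,\alpha^j\}=2\delta^{ij}\), \(\beta^2=1\), and \(\{\alpha^i,\beta\}=0\).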
OK, so this one would be the Klein-Gordon equation, OK, which we wrote before. Yeah, here, sorry, I should say xi squared, and i should be summed, OK? So now we want the right-hand side of our squared equation to be equal to this. If that works, then we're guaranteed to have plane-wave solutions with such a dispersion relation, because this equation has plane-wave solutions with that kind of dispersion relation. So now we just compare both sides. So let's compare the second-order derivative terms. First, when i is not equal to j, the off-diagonal terms should all vanish, OK, because here there are only diagonal terms. So that means alpha i alpha j plus alpha j alpha i should equal 0 for i not equal to j. OK, remember, the matrices don't commute, so we should be careful about the ordering. And when i equals j, then it should just reduce to this term here, and that means alpha i squared should equal beta squared equal 1. OK, so here there's no summation over i, OK-- each alpha i squared has to be 1. And now we want this linear cross term to be 0, so you want beta alpha i plus alpha i beta to equal 0, OK? And also, let me put it together: we said alpha and beta must be Hermitian, so alpha i dagger equals alpha i and beta dagger equals beta. So if we satisfy all these four conditions, then we guarantee that equation star has plane-wave solutions with the right dispersion relation. OK. And so now you just try to find matrices satisfying those conditions. OK.
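These four conditions can be checked mechanically. Here is my own numerical verification (not done in lecture), using numpy, for the two standard 4-by-4 choices of beta and alpha that appear next in the lecture:

```python
import numpy as np

# Verify (my own check, not from the lecture) that two standard 4x4 choices of
# (beta, alpha_i) satisfy the four conditions:
#   alpha_i alpha_j + alpha_j alpha_i = 0 (i != j),
#   alpha_i^2 = beta^2 = 1,   beta alpha_i + alpha_i beta = 0,
#   alpha_i and beta Hermitian.
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]], dtype=complex)]   # Pauli matrices
Z, I2 = np.zeros((2, 2), dtype=complex), np.eye(2, dtype=complex)

reps = [
    # First solution: beta = diag(1, -1) in 2x2 blocks, alpha_i off-diagonal
    (np.block([[I2, Z], [Z, -I2]]), [np.block([[Z, s], [s, Z]]) for s in sig]),
    # Second solution: beta off-diagonal, alpha_i = diag(sigma_i, -sigma_i)
    (np.block([[Z, I2], [I2, Z]]), [np.block([[s, Z], [Z, -s]]) for s in sig]),
]

anti = lambda A, B: A @ B + B @ A
I4 = np.eye(4)
for beta, alpha in reps:
    assert np.allclose(beta @ beta, I4) and np.allclose(beta.conj().T, beta)
    for i in range(3):
        assert np.allclose(alpha[i] @ alpha[i], I4)      # alpha_i^2 = 1
        assert np.allclose(anti(alpha[i], beta), 0)      # {alpha_i, beta} = 0
        assert np.allclose(alpha[i].conj().T, alpha[i])  # Hermitian
        for j in range(i + 1, 3):
            assert np.allclose(anti(alpha[i], alpha[j]), 0)
print("both representations satisfy all four conditions")
```

The same script fails, as expected, if you try any diagonal 1-by-1, 2-by-2, or 3-by-3 ansatz, which is the point made below about n having to be 4.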
So we already said that 1-by-1 matrices-- constants-- don't work. And you can also check that 2-by-2 matrices are not enough to do this, and 3-by-3 is not enough. When you go to 4-by-4, then you finally find solutions-- actually, you find an infinite number of solutions. So you see that to satisfy them needs at least a 4-by-4 matrix, so n has to be 4. OK, so this n has to be 4. So let me give you some possible solutions. For example, here is one solution: you take beta to be the diagonal block matrix with 1 and minus 1-- all entries here should be understood as 2-by-2 blocks, so all together it's a 4-by-4 matrix. So beta is given by this, and alpha i is given by 0, sigma i; sigma i, 0, where the sigma i are the Pauli matrices. OK, so this is one solution that satisfies all four conditions-- you can check it yourself; I will not do it here. And here is another solution-- again written in 2-by-2 blocks to save time. For example, beta can be 0, 1; 1, 0, where this 1 is a 2-by-2 identity matrix, and alpha i equal to sigma i, 0; 0, minus sigma i. OK. So you can check both of them satisfy those conditions. Yes? AUDIENCE: Sorry, so when you have alpha times grad psi, do you act the grad on each element of psi and then multiply by alpha? Or do you-- like, what's the order of operations? HONG LIU: Oh, it doesn't matter, because alpha is just some constant matrix, right? Alpha and the grad don't act on the same space. The derivative just acts on the coordinate dependence, and alpha acts on the components of psi. AUDIENCE: I see, but if you took the grad of psi and then multiplied by alpha, then you'd mix up the components of it. HONG LIU: Yeah, it's fine. You can do it in either order-- they commute. The operation of alpha and the operation of grad commute. Yeah. Yeah? AUDIENCE: Yeah.
So you'd have a-- so alpha x because alpha x would just be the Pauli x matrices times the grad x, and then you'd need another one for y and for z? HONG LIU: Yeah. Yeah. Yeah. So yeah, so I urge you-- so here I don't have time to write everything very explicitly. So here you just write it as alpha x partial x plus alpha y partial y plus alpha z partial z, and each alpha is a matrix. And alpha acts on different components of psi, and this just acts on-- the derivative acts on all components of psi. Yeah. OK? Good? So now, good, you say we have an equation, OK, so far, so psi-- so alpha is a 4-by-4 matrix. Yeah, alpha and beta are 4-by-4 matrices, so that means that psi should be a four-component vector. OK, it's a four-component vector, OK, so we will take-- so we will denote it as psi alpha. So alpha equal to 1, 2, 3, 4, OK, and we-- for the moment, let's just take the most general situation. We'll take alpha to be complex, OK, each of them to be complex. OK. Yes? AUDIENCE: Is alpha, like, a vector of matrices? HONG LIU: Yeah. AUDIENCE: Which element of alpha is a matrix? HONG LIU: What do you mean? AUDIENCE: So, like, alpha i is a matrix. HONG LIU: Yeah, alpha 1, alpha 2, alpha 3, they are all-- they are three matrices. AUDIENCE: And you said we'd need the first on the list. Wouldn't we want each alpha i to be diagonal? Is that what you're saying, that each matrix in alpha should be a diagonal matrix? HONG LIU: No. No, no, no, no, no, no, no. No, alpha, no, we don't know the form of the alpha, right? No, alpha are just matrices, so this means that alpha 1 alpha 2 plus alpha 2 alpha 1, as a matrix product, should give you zero. Yeah. Yeah, alpha itself is a matrix. AUDIENCE: Of matrices. HONG LIU: Hm? AUDIENCE: Of matrices. HONG LIU: Sorry? AUDIENCE: Each alpha i-- each component of alpha is a matrix. Right? HONG LIU: Yeah. Yeah, yeah, yeah. Yeah, just, alpha have three components. AUDIENCE: Right. HONG LIU: And each component is a matrix. 
Each component is a matrix. Just, if you have alpha 1, alpha 2, alpha 3, so here, here you see explicitly, alpha 1 is equal to sigma 1, sigma 1, alpha 2 is sigma 2, sigma 2, et cetera. Other questions? OK, yeah, at the beginning, this may a little bit more-- not very intuitive, OK, but if you just work through it, then you will get a feeling about it, OK, you will get a feeling about it. So that's why I say this was really genius, because just nobody could have thought of this. OK, it just came from nowhere. Really, there was no clue, OK? There was no clue of such a structure. Yeah. OK, so this is a new object, so we call it spinor. OK, we call it spinor because it-- later we will see that this describes spin-half particles, so that's why we call it spinor. OK. Good. AUDIENCE: Dr. Liu? HONG LIU: Yeah? AUDIENCE: So if we take n to be larger, then we describe all the spins? HONG LIU: Hm? AUDIENCE: If we take a n by n matrix where n is not so small, then we can use that --? HONG LIU: No, you get-- just we are not using those matrices in the efficient way. Yeah. Yeah, this become-- you can reduce always-- yeah, just from physical purpose, you can always reduce it to 4, yeah. Yes? AUDIENCE: Well, what if I wanted a wave equation for higher spin? HONG LIU: For higher spin? AUDIENCE: Yeah, like 3/2 or 5/2. HONG LIU: Yeah. Yeah, if you know how to do the two halves, then you can generalize. Yeah. Yeah, so one half, essentially, you can-- yeah, based on one half, you can generalize it. Yeah. AUDIENCE: Thank you. HONG LIU: Yes? AUDIENCE: Just to clarify, so alpha i, is it a 2-by-2 matrix of 2-by-2 matrices? Or is it just a 4-by-4 matrix and that's just a convenient way to write it? HONG LIU: Yeah, this is a 4-by-4 matrix. It's just a convenient way to write these 4-by-4 matrices. AUDIENCE: Sure. OK. HONG LIU: So that I don't have to write all four components. I just-- yeah. AUDIENCE: Yeah, yeah, yeah. OK. 
HONG LIU: So I divided this 4-by-4 matrix into four 2-by-2 blocks, and then I specify each block. AUDIENCE: Yeah, it's just blocks, not matrices in the matrix. HONG LIU: Yeah, just the blocks of that 4-by-4 matrix-- separate the single 4-by-4 matrix into four 2-by-2 blocks. AUDIENCE: Yeah. HONG LIU: OK? Good? OK, so for later convenience, let's introduce a slightly different notation. So now we have this equation, i partial t psi equal to minus i alpha dot grad psi plus beta m psi. So now let's multiply both sides by beta. OK, beta is a matrix-- this is a matrix equation-- so let's multiply both sides by beta, and then we get i beta partial t psi equal to minus i beta alpha i partial xi psi, plus beta squared m psi, which is just m psi. OK, so you get an equation like this. So now I will introduce a new notation so that it looks nicer. I'll denote gamma 0 equal to i beta, and gamma i equal to i times beta alpha i. OK, and then let's pull everything to the same side, and then the equation has the following form: gamma mu partial mu minus m, acting on psi, equal to 0. OK, so this is the form of our Dirac equation. So gamma 0 times partial t and gamma i times partial xi come together as gamma mu partial mu, and the m moves to the other side. OK. So due to different conventions of the Minkowski metric signature you use, et cetera, in different books or different places you will see the i in different places. Some books have an i here, some books have an i there. In my version, there's no i, OK? [LAUGHTER] And so I know this is annoying, but this is just a fact of life. Yeah.
Yeah, so I find that this version is most simplest in terms of notation anyway, so that's the convention we use. Good. Good. So now, again, this is a matrix equation. Now let me just write it in the component form, OK, so this is in the component form. I have gamma mu, which-- each of them is a matrix, alpha, beta, so alpha and beta, they're always run. Yeah, sorry for this. Now the alpha, beta just means the-- yeah, means the indices, OK, not the matrices. So alpha and the beta-- actually, convention is to write this like that. OK, it doesn't matter. So partial mu psi beta minus m psi alpha equal to 0, OK, so this is a matrix equation. There are all together four equations, so the beta is summed because beta is repeated. So beta is summed, and then, yeah, and the mu is also summed. So this is a little bit intricate equation. OK, so this is a little bit intricate equation, but once you get used to it, it's not that difficult. Yes? AUDIENCE: There's no meaning to the upstairsness or downstairsness of alpha and beta, right? HONG LIU: There's a meaning of upstairs, yeah, because these two index are not symmetric, so it's easier to put one upstairs, one downstairs. Yeah, these two indices are not symmetric. Yeah. OK. So yeah, it takes a little bit time to get used to it, OK, and I know some people develop psychological fears for fermions because you have to deal with those gamma matrices, OK? For a long while, actually, I have this psychological fear myself. [LAUGHTER] When I look at fermions, I want to be away from it because I don't want to deal with those gamma matrices, but these are beautiful objects if you get used to them. OK, so now those conditions, you can also write them in the compact way in terms of the gamma-- in terms of gamma matrices. OK, so 1, 2, 3 now can be written as-- in terms of gamma matrices gamma mu gamma nu plus gamma nu gamma mu. 
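As a quick numerical check of the rewriting just described (an editor's illustration, not from the lecture), one can build gamma matrices from a concrete beta and alpha_i and verify that a plane wave solves the Dirac equation exactly when the relativistic dispersion relation holds. The Dirac-basis beta and alpha_i below are an assumed choice; the blocks written on the board may differ by a basis convention.

```python
import numpy as np

# Pauli matrices and an assumed Dirac-basis choice of beta and alpha_i
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]
I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)
beta = np.block([[I2, Z2], [Z2, -I2]])
alpha = [np.block([[Z2, si], [si, Z2]]) for si in s]

# gamma^0 = i beta, gamma^i = i beta alpha_i, as defined in the lecture
gamma = [1j * beta] + [1j * beta @ a for a in alpha]

# plane wave psi = u exp(i p.x - i E t): partial_t -> -iE, partial_i -> i p_i,
# so (gamma^mu partial_mu - m) psi = 0 has a nonzero u exactly when the
# 4-by-4 matrix D below is singular, i.e. when E^2 = p^2 + m^2
m = 1.0
p = np.array([0.3, -0.2, 0.5])
E = np.sqrt(p @ p + m**2)               # relativistic dispersion relation
D = (-1j * E * gamma[0]
     + sum(1j * p[k] * gamma[k + 1] for k in range(3))
     - m * np.eye(4))
assert abs(np.linalg.det(D)) < 1e-9     # singular: a plane-wave solution exists
```

If E is instead chosen away from the mass shell, the determinant is nonzero and no plane-wave solution exists.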
So mu nu is always from 0 to 3, OK, equal to 0 for mu not equal to nu, and then gamma 0 squared is equal to minus 1. It's because this gamma 0 is i beta, and the beta squared is equal to 1. And then the gamma i squared-- so there's no summation over i here, OK-- each matrix gamma i squared is equal to 1. OK. And you can write this further in a more compact form. So you can write this more compactly as the gamma mu gamma nu anticommutator equal to 2 eta mu nu. So anticommutator just means that if you have two objects, the curly bracket means a b plus b a, OK? So this is the key equation. OK, so all the gamma matrix relations, you can check yourself, are given by just this. You can easily see, when mu not equal to nu, of course, the right-hand side is 0, so that is just equal to that equation. When mu equal to nu, when they're both 0, then this gives you minus 1. That's corresponding to the gamma 0 case, and when they're both i, then it's corresponding to the gamma i case. OK. And so this equation, when the later mathematicians studied this, a mathematician would say this is a beautiful object. And then they studied this, so now it's called Clifford algebra. So this object is called the Clifford algebra. OK. And so then any set of gamma mu-- so gamma mu are a set of four matrices satisfying this equation-- OK, so let me call this star-- is called a representation of the algebra, of the Clifford algebra. OK. So from these two solutions of alpha and beta, we can easily-- from here, we can work out what is the gamma 0 and the gamma i, so yeah, so here are two examples. Also, before talking about examples, from number 4, you also find that the gamma 0 dagger is equal to minus gamma 0-- so gamma 0 is anti-Hermitian and the gamma i is Hermitian. OK. And you can also write this together more compactly, or equivalently write it as gamma mu dagger equal to gamma 0 gamma mu gamma 0.
OK, so you can check that this equation is the same as these two equations. OK. Yes? AUDIENCE: Do these representations generate a certain transformation? HONG LIU: Yeah, yeah. I will talk about things related to this a little bit later. Yeah. Good? And then we can write down explicit solutions for those gammas, so that's two representations. From those solutions of beta and alpha, we can write down different solutions of gamma. So for example, for 1, you're corresponding to gamma 0 equal to minus i. So now this is minus i times a 2-by-2 matrix, OK, or i, 0, 0, minus i, OK, and the gamma i is equal to 0, minus sigma i, minus i sigma i, 0. And the second solution there corresponding to gamma 0 equal to 0, i, i, 0, and then gamma i equal to 0-- I think it's also minus i sigma i, i sigma i, 0. OK. Good, so these are just-- again, these are just good-- so these are two different representations of this algebra. OK. So now let me make some remarks. So before I proceed further, do you have questions? Yes? AUDIENCE: For 2, shouldn't the Pauli matrices be-- oh, nevermind. HONG LIU: Oh. Other questions? Yes? AUDIENCE: So I guess that I don't understand what space this side vector, the four entries-- like, what is that? Is this a Lorentz four-vector? HONG LIU: No, it's not a Lorentz four-vector. AUDIENCE: So in what sense is it a vector? How does it transform? HONG LIU: Yeah, so this is a new space. We will talk about that. Yeah. Yeah, so this is a new space, and so that's called-- this is called spinor space, yeah. Yes? AUDIENCE: How do we know that-- are these the only two representations? HONG LIU: Oh, no, no, no. There are infinite number of them. Yeah, I mentioned there are infinite number of such solutions, and this is just two of them. Yeah, I will comment on all those different solutions. Other questions? OK? Good? So now let me make some remarks. 
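Both the Clifford algebra and the Hermiticity relations can be checked explicitly in a few lines (an editor's sketch; the gammas below come from the standard Dirac-basis beta and alpha_i, an assumed representation that may differ from the board's blocks by a basis convention):

```python
import numpy as np

# assumed Dirac-basis beta, alpha_i, and gamma^mu = (i beta, i beta alpha_i)
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]
I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)
beta = np.block([[I2, Z2], [Z2, -I2]])
alpha = [np.block([[Z2, si], [si, Z2]]) for si in s]
gamma = [1j * beta] + [1j * beta @ a for a in alpha]

eta = np.diag([-1.0, 1.0, 1.0, 1.0])   # signature (-,+,+,+), as in the lecture

# Clifford algebra: {gamma^mu, gamma^nu} = 2 eta^{mu nu}
for mu in range(4):
    for nu in range(4):
        anti = gamma[mu] @ gamma[nu] + gamma[nu] @ gamma[mu]
        assert np.allclose(anti, 2 * eta[mu, nu] * np.eye(4))

# Hermiticity: gamma^0 anti-Hermitian, gamma^i Hermitian, compactly
# gamma^mu dagger = gamma^0 gamma^mu gamma^0
for g in gamma:
    assert np.allclose(g.conj().T, gamma[0] @ g @ gamma[0])
```

The same loop can be pointed at any other candidate set of gammas to test whether it is a representation of the algebra.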
OK, so first, if you consider the case m equal to 0, OK, and then this is, like, for massless, so when m equal to zero, then when you reduce to the Klein-Gordon equation, what you get is the massless equation. OK, it's a massless equation, and then you will have dispersion relation p squared equal to 0. OK, so in this case, the original equation just becomes i partial t psi equal to minus i alpha i partial xi psi. OK, so that's your equation. There's no m beta term. So in this case, the same story just here, you just forget about the beta, and then you only need the alpha i alpha j anticommutator to be 0 for i not equal to j, essentially just that equation 1 there. And then also, you'd want the alpha i squared equal to 1, OK, for any i. OK, so now these are the conditions for the alphas, and now you can actually satisfy them by 2-by-2 matrices. So what matrix satisfies this kind of relation? AUDIENCE: Pauli. HONG LIU: Yeah, so the Pauli matrices anticommute among themselves, and their square is equal to 1. So you can just take alpha i to be sigma i here. So this tells you something important, tells you actually the equations for a massless particle and a massive particle are very different. So for a massive particle, you actually need the four components, but for the massless particle, you can describe it by a sigma matrix here, only 2 by 2. OK, so that means for massless psi, it can be described using a two-component vector, OK, so that in this case, psi equal to psi 1 and psi 2. Again, they are all just complex. OK. Good? So this actually-- yeah, we will elaborate on this later. So this actually tells you that the massive particle has more degrees of freedom than the massless particle. Good? And then the second thing is essentially Dirac's original motivation.
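The massless case can be checked directly: the three Pauli matrices already satisfy the two conditions required of the alphas, and the resulting 2-by-2 Hamiltonian sigma dot p has eigenvalues plus or minus the magnitude of p, the massless dispersion. (Editor's illustration, not from the lecture.)

```python
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

# sigma_i sigma_j + sigma_j sigma_i = 2 delta_ij: exactly the conditions
# the m = 0 equation requires of the alpha_i
for i in range(3):
    for j in range(3):
        anti = sigma[i] @ sigma[j] + sigma[j] @ sigma[i]
        assert np.allclose(anti, 2 * (i == j) * np.eye(2))

# with alpha_i = sigma_i, the equation i d_t psi = sigma.p psi for a plane
# wave gives E psi = (sigma.p) psi, so E = +/- |p|: massless dispersion
p = np.array([0.3, -0.2, 0.5])
H = sum(p[k] * sigma[k] for k in range(3))   # Hermitian 2-by-2
evals = np.linalg.eigvalsh(H)
assert np.allclose(np.abs(evals), np.linalg.norm(p))
```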
So from the Dirac equation, you can derive a current. Just as you can do for the Schrodinger equation, you can derive a conservation equation, partial mu j mu equal to 0, for some j mu, with j 0, the 0-th component of this thing, positive definite classically. So I emphasize that this is classically. Later, you will see why, OK? And this, I will leave to your pset, OK, so this is very similar to the derivation of such a current in the case of just the non-relativistic Schrodinger equation because this has the same structure. Yeah, the Dirac equation, when we started, has the structure of the non-relativistic Schrodinger equation, and so you can show something like this exists. OK. And then the third point is related to the question many of you may have. So we said, what's the meaning of all these different solutions for alpha and beta or for the gammas, OK? So as I mentioned, you can have an infinite number of solutions. What's the meaning of them? So first, let's imagine when we look at this equation-- so as I said, this is a matrix equation, so in this matrix equation, you have this psi which is a four-component vector, OK, some four-component vector. So now let's imagine we make a basis change in this four-component vector, OK, so consider making a basis change in psi. OK, so a basis change in psi just means, in linear algebra, this is a vector, means we consider another psi prime-- so we take psi to some other psi prime which is related to psi by an invertible matrix B, just some constant, complex, invertible matrix. OK, so essentially, you just make a linear superposition of different components. OK, so this corresponds to a basis change. So now if psi satisfies this equation, you can easily convince yourself that psi prime satisfies the following equation, gamma mu prime partial mu minus m, acting on psi prime, equal to 0, and the gamma mu prime is equal to B gamma mu B minus 1. OK.
So yeah, so this easily can be shown. You just multiply B from both-- just multiply B to this equation, OK, multiply B to this equation, and for this term, you can just directly give-- psi gives you psi prime. And for this term, then you just get the B, and then here you can insert B minus 1, B, and then, yeah, you get that. OK, so this, you can easily convince yourself, just a couple of lines. So now, so psi prime satisfies essentially the same equation but a different gamma matrix, OK, a gamma mu prime, so now you can easily check yourself, OK? So you can easily check yourself this gamma mu prime also satisfies that algebra. OK, so you can easily check yourself. So then we conclude, any sets, since we can make a basis transformation as you want, OK, that the-- except the basis transformation should not change physics, OK? So any sets of gamma mu related by a similarity transformation are equivalent. OK. OK, they're equivalent, and because they should not give you new physics just because one is going to change the basis. OK. So now, here is a highly nontrivial mathematical statement, so now-- which of course I will not prove here because it'd just take too much-- yeah. So you can show-- I'll just quote the result. So you can show, OK, with a little bit of effort that under such kind of equivalence relation, means that the similarity transformations, they are all equivalent, and such equivalence relation, the representations of star is unique. OK. So you can show any matrices which satisfy that equation, they're all related by similarity transformation, OK, so they're all physically equivalent. They're just corresponding to a change of basis. OK, they're corresponding to a change of basis. OK. So but different forms of the gamma, they may-- different forms of the gamma matrices, they may be useful for different purposes. OK, they may be useful for different purposes. 
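The statement that gamma mu prime = B gamma mu B inverse satisfies the same Clifford algebra is easy to confirm numerically: conjugating by any invertible matrix preserves the anticommutators, since B drops out in pairs. A quick check with a random complex B (editor's sketch, starting from an assumed Dirac-basis representation):

```python
import numpy as np

# assumed Dirac-basis gammas, as before
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]
I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)
beta = np.block([[I2, Z2], [Z2, -I2]])
alpha = [np.block([[Z2, si], [si, Z2]]) for si in s]
gamma = [1j * beta] + [1j * beta @ a for a in alpha]
eta = np.diag([-1.0, 1.0, 1.0, 1.0])

# a random constant complex matrix is generically invertible
rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
Binv = np.linalg.inv(B)

# the similarity-transformed set gamma' = B gamma B^-1 ...
gamma_p = [B @ g @ Binv for g in gamma]

# ... satisfies the same algebra: an equivalent representation
for mu in range(4):
    for nu in range(4):
        anti = gamma_p[mu] @ gamma_p[nu] + gamma_p[nu] @ gamma_p[mu]
        assert np.allclose(anti, 2 * eta[mu, nu] * np.eye(4))
```

Note that the transformed gammas generally lose the nice Hermiticity properties of the original set unless B is restricted, which is one reason particular representations are preferred in practice.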
For example, this solution I we wrote down before is convenient if you want to take the nonrelativistic limit, for example, if you want to make a connection with nonrelativistic quantum mechanics-- that's actually the most convenient form of the matrices to use. And II is actually in the opposite regime; it's convenient for the ultrarelativistic regime. OK, so depending on which regime, sometimes you use different gamma matrices. OK, they make your algebra a bit more convenient. So now, having introduced the Dirac equation and the structure of the Dirac equation, still we haven't shown that the Dirac equation is covariant. OK, we just showed that the Dirac equation can have plane-wave solutions and that the plane-wave solutions have the standard relativistic dispersion relation. OK, so in order to show that the Dirac equation is covariant, we have to make a Lorentz transformation and show that the Dirac equation is the same in every Lorentz frame. OK, we have to show that, OK, and we are running out of time. So we, of course, won't have time to do that today, but let me just remind you how this Lorentz covariance works for the scalar case. OK, so let's just quickly recall, for a scalar, we have phi of x. Then under a Lorentz transformation-- so consider a Lorentz transformation in which x mu goes to x prime mu equal to lambda mu nu x nu. Consider such a Lorentz transformation, OK, and then phi transforms as follows: phi prime of x prime equal to phi of x. The new phi evaluated at the new position should be the same as phi evaluated at the old position, OK? Or let me just write it here so that I don't have to erase. So phi prime of x is just equal to phi of lambda minus 1 x. OK. So now if you look at the Klein-Gordon equation, let's see how this is covariant, OK, so now let's see in a different frame. OK.
So covariance means that when we go to a new frame, partial prime square-- OK, so this means in the prime coordinates-- minus m square and phi prime evaluated at the x prime, there must be a-- yeah, so this is in another Lorentz frame, OK, so they have the same form. Indeed, you see these two are equivalent because this is just equal to that. OK, trivially, you could do that just by definitions. And this is a Lorentz scalar, so this is also equal to that. OK, so you see that the Klein-Gordon equation indeed is Lorentz covariant. OK, it's the same in any frame. So now we want to show that the Dirac equation has the same property, OK, and that is much more nontrivial. Again, it's really ingenious, yeah, but we will see, actually, it works. OK, so we will do it next time.
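The scalar covariance argument rests on the d'Alembertian (equivalently, p dot p for a plane wave) being a Lorentz scalar. That in turn follows from Lorentz transformations preserving the metric, which is easy to verify for an explicit boost (editor's illustration; the velocity and momentum values are arbitrary):

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])   # signature (-,+,+,+), c = 1

# an explicit boost along x with velocity v
v = 0.6
g = 1.0 / np.sqrt(1.0 - v**2)
Lam = np.array([[g, -g * v, 0.0, 0.0],
                [-g * v, g, 0.0, 0.0],
                [0.0, 0.0, 1.0, 0.0],
                [0.0, 0.0, 0.0, 1.0]])

# Lorentz transformations preserve the metric: Lambda^T eta Lambda = eta ...
assert np.allclose(Lam.T @ eta @ Lam, eta)

# ... so p.p is frame-independent, which is why (d'^2 - m^2) phi' = 0 holds
# in the new frame whenever (d^2 - m^2) phi = 0 holds in the old one
p = np.array([2.0, 1.0, 0.5, 0.0])
p2 = p @ eta @ p
p_new = Lam @ p
assert np.isclose(p_new @ eta @ p_new, p2)
```

The nontrivial step for the Dirac equation, deferred to the next lecture, is that psi does not transform as a scalar, so an extra matrix acting on the spinor index is needed to compensate.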
MIT 8.323 Relativistic Quantum Field Theory I, Spring 2023. Lecture 1: Classical Field Theories and Principle of Locality.
[SQUEAKING] [RUSTLING] [CLICKING] PROFESSOR: Yeah, so I'm Hong, so you can just call me Hong. I'm a theoretical physicist. I work on high energy theory, including string theory, quantum gravity, non equilibrium statistical physics, et cetera, and many different topics. So now let me say a few words regarding the QFT, the quantum field theory itself. So the goal of this quantum field theory class is to develop concepts, formalism, and techniques for quantum dynamics of fields. Say, I'm sure all of you have studied Maxwell's equations, OK? So Maxwell's equations describe fully classical dynamics of electric and magnetic fields. But we live in the quantum world, and so we should also treat electric and magnetic fields using quantum language, OK? And then you find that once you do that, and then the concept of photon, once you treat the electric field and the magnetic field quantum mechanically, and then you get the concept of the photon. And the photon now is the fundamental particle, which mediates electromagnetic interactions, OK? So you actually get a completely different physical picture from what you get classically. And yeah, so one goal of the class is for you to appreciate, say, to understand the quantum electrodynamics, OK? And that's actually where we will end this class, is we will discuss the quantum electrodynamics. And the quantum field theory is also very important for other interactions in nature. So among the four fundamental interactions in nature, three of them are described by quantum field theory completely, OK? And you can also use quantum field theory to describe gravity, to describe quantum gravity. But at the moment, quantum field theory does not offer a complete description of quantum gravity. But still, you can use the quantum field theory techniques for understanding certain questions of quantum gravity.
And also, quantum field theory, even though it was initially developed for particle physics, over the years has also found many applications in many other branches of physics, in condensed matter, statistical physics, et cetera. And it's fair to say, nowadays, quantum field theory has become a universal language, OK, for theoretical physics, essentially in all different fields. And so if you are serious, certainly you need to master this language. And even if you are only interested, say, in atomic physics or statistical physics, the field theory concepts will be very important. And for experimentalists, knowing the basic concepts of quantum field theory and having some basic understanding of it should also go a long way to help you appreciate, say, the most recent theoretical developments and also to help you communicate with theorists. And so this class is the first of a three-semester sequence. So this semester will mostly develop the fundamental concepts, and quantum field theory 2 and 3 will be more about technical development. And say, if you're an experimentalist, then you can view this quantum field theory 1 as a standalone class, which is just enough for you to get the basic idea of the quantum theory of fields-- whether you take 2 or 3 may depend on your needs. OK, so the main topics I plan to cover are listed in the outline, which is already on the website. And I should emphasize that the outline is only a rough roadmap and may change. It depends on the pace. I may change things along the way. And sometimes I change my mind, say, halfway through the course. So don't treat it too literally. Any questions about this subject of quantum field theory so far? OK, good.
And also, let me say a few words that the quantum field theory has a reputation of being a very difficult subject, OK? Actually, indeed, myself have suffered a lot when I learned it myself, OK? But with 20/20 hindsight and also from interacting with large number of students through teaching various level of quantum field theory classes in many years, I can assure you now that actually quantum field theory is actually not difficult at all, OK-- [LAUGHTER] --if you learned it the right way. And of course, the learning thing-- yeah, of course, anything is not difficult if you learned it in the right way. And so in a sense this is empty words, but keep it in mind. Whenever you think it's too hard and there might be-- the reason might not be-- the reason might be you have to change your perspective, OK? You have to change your perspective. And so quantum field theory, one thing people complain about is that quantum field theory often involve a lot of calculations, and that's true. That's just a fact of life. You cannot avoid it. But that's not what it makes it difficult, OK? Complicated calculations, you can just go through them from one line to the next line to the next line. If you're careful enough, patient enough, you can go through. So the difficulty, I think for most people, of quantum field theory, it's more at a conceptual level. It's because this subject is not a very intuitive subject. It's not something you can just understand just by thinking, OK? So that's why I emphasized earlier the exercise and really working through it is very important. It's a little bit like quantum mechanics. In quantum mechanics your intuition was developed through examples. By working through many examples, you slowly develop intuition about the quantum mechanics. And if you learn all the lessons, and then you get good feeling about quantum mechanics. And quantum field theory is the same thing. It said, you have to-- some kind of intuition has to be developed, OK? 
It's not something you can easily-- just like mechanics, which you can in some other subject, maybe if you have very good intuition, you can just imagine it, OK? Yeah, so to help you develop good intuition about quantum field theory, I can offer you three pieces of advice, OK? So first, yeah, so the first piece of advice is that the quantum field theory is essentially quantum mechanics but dealing with infinite number of degrees of freedom, OK? So in your quantum mechanics class, you always treat with finite number of degrees of freedom. But quantum field theory, the difference for quantum field theory is now you treat an infinite number of degrees of freedom. It turns out that this treating infinite number degrees freedom makes a difference. So more is different and actually, sometimes make conceptual differences. And so that's why sometimes the quantum field theory is unintuitive, OK? But that said, I found, for many people, including myself when I learned it, for many difficulties you encounter in learning quantum field theory, it's not due to the difficulty in quantum field theory itself. It's actually due to your gap in understanding of quantum mechanics. So whenever you encounter something you don't quite understand in quantum field theory, try to step it back, to say, can I formulate this difficulty in terms of quantum mechanics with only a finite number of degrees of freedom? And often, you find actually your difficulty can already be formulated in quantum mechanics. And then that way then you should be able to just settle it yourself, OK, because we are supposed already to be a master of quantum mechanics, OK? And certainly, when I learned quantum field theory I was stuck at a certain point for a long time. And then later I realized, just because I didn't understand certain Heisenberg picture of quantum mechanics very well, OK? Somehow I realized when I understood the Heisenberg picture of quantum mechanics well, and those difficulties just went away. 
And have nothing to do with quantum field theory itself. So that's why, in your first Pset you will get familiar with Heisenberg picture of quantum mechanics, OK? That came from my own experience. And also, the second point is that quantum field theory deals with formalisms. And sometimes the subject seems very formal, OK? You have a lot of formalisms, OK? But just keep in mind, any formalism in physics, no matter how abstract it is, it was always designed to solve some concrete physical problems and physical questions, very concrete physical questions. And if you understand what kind of concrete physical questions quantum field theory was designed to solve, then that can give you a very good perspective on those formalism and why people do this why people do that, why people do this trick, why people do that trick because they were invented to solve certain concrete problems, OK? And once you understand the questions, understand the problems, then the formalism becomes much easier to understand. And the third thing we already said is that in quantum field theory, as in quantum mechanics, intuition was built through experiences, OK, through examples. So when you do your Pset, when you look at the examples in the class, you should always ask yourself afterwards, say, after you have done your Pset problems, always look back at that problem. Say, what did I learn from this problem, OK? And just think through it again. Think through what you learned from that problem again. And that is a very good way to help you to learn from your experiences and to help you develop intuitions, OK? And so yeah, so also a very important thing you should keep in mind is that in the graduate course like this most of the things should be learned outside the class. So inside the class the purpose is to give you a guide, OK, is to emphasize the conceptual picture and the physical intuition, et cetera. And so sometimes I will leave some details for you to finish in the Pset. 
And sometimes the Pset will involve problems which I did not, say, fully discuss in lecture, but I want you to work out yourself, OK? And so the Pset is an important part of the learning-- not just to practice something, but also a very important part for learning new things. Right. Good? Also, finally, I would like to make an apology. So you will soon find that the notations used in the lecture are different from the notations in the recommended reading books, OK? So I recommended reading Peskin and Weinberg. And you will find that my notations are actually different from theirs. Also, the order of presentation is also different from theirs, OK? I know this is very annoying, but there are just no perfect textbooks. And there's no perfect set of notations everybody uses. And we all use the notations which we find the most convenient, OK? And so even though I realize this problem, I don't have a good resolution, OK? So just keep in mind, the notations in my lecture can be different from the notations in those textbooks, OK? Good? So do you have any other questions? Good, OK, so if you don't have any other questions, let's start. So Chapter 1 will be about why we consider quantum field theory, OK? So first, we talk a little bit about classical field theories to set the stage for quantum field theories, OK? And the first important concept is called the principle of locality. So if you remember Newtonian mechanics from your high school days, in Newtonian mechanics you have action at a distance, OK? For example, if you look at gravity, the gravity is exerted by the sun on the Earth, but they are very far away, OK? And the same thing with the Coulomb interactions between charged particles. But then in the 19th century came this principle of locality, formulated by Faraday around 1830, OK? So the principle of locality said all points-- you actually don't have action at a distance.
He said all points in space participate in the physical process, OK? And the effect, so if you have interactions, OK, the effect propagates from a point to a neighboring point, OK? OK, so with this principle of locality you don't have action at a distance. So the action is always conveyed from one point to another point through propagation in space, OK? And the concept of fields is the mathematical device or vehicle through which the principle of locality is at work, OK? So this is essentially the device we need to use to realize this principle of locality. And so the main idea of the field is that we associate with each point in space a dynamical variable, OK, or dynamical variables, OK? So for example, if you have an electric field-- so the electric field is defined over all space, OK? So at each point x, we can introduce an electric field, OK? And then this electric field can also depend on time. So of course, normally we write it this way: E of x, t. And the reason I write it this way is to emphasize that in the definition of the electric field, the space and time actually play very different roles. The space plays the role of a label. At each point x, we have an electric field. And so x here is just a label, OK? And the t is used to describe the evolution, the change of the electric field. So x and t play very different roles. And similarly, you can do it for magnetic fields. It's the same thing, OK? And here you should always view x as a label, OK? So the spatial point, which we denote as vector x, is the label. And we know that the evolution of electric and magnetic fields is described by Maxwell's equations. So let me just write one down to remind you: nabla dot E equal to 4 pi rho. So this is the so-called differential form of the Maxwell equations.
So the reason I'm taking trouble to write them down is to emphasize the following point. So this set of equations exemplifies perfectly the principle of locality. It's because you see those equations only involve the value of electric fields and the magnetic field at a single point, OK? So if here is at x point, so here is also at x point. You never say x here and here is some other point y. And so this is reflected the principle of locality that everything-- so the effect is propagate from point to point, OK? You just have the derivative of the same point, OK? You don't involve separated points, OK? So these are local equations. So these are what we call the local equations. And they contain only E and B at the same point and also the charge density and current density at the same point, with finite number of derivatives. So the derivatives are the ones to help you to propagate, OK, because it relates the point to the neighboring point, OK? And so that's how it propagates. It's through the derivatives, OK? So the derivatives are key, OK? The derivatives are key. And another example with examples of locality, which I will not go into here, which some of you may know, is the Einstein gravity, so Einstein's general relativity. So in Einstein gravity the dynamical variables are so-called spacetime metric. So they are object with two indices, two spacetime indices, OK? And then the Einstein equations are equations for this kind of object. And again, the equations are local equations in the sense that they only depend on the G evaluated at the same point with a finite number of derivatives, OK? So in Einstein's gravity you no longer have action at a distance, OK? So the effect of the gravity is propagated through the space time, through space time. So in fact, so here we say these two examples exemplifies the principle of locality. 
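The point that "the derivatives are key" because they relate a point only to its neighbors can be made concrete with a discretized wave equation: each update touches only nearest-neighbor grid points, so an initial disturbance spreads at most one cell per time step. (Editor's illustration of locality with a generic 1D wave equation, not an equation from the lecture.)

```python
import numpy as np

# 1D wave equation phi_tt = c^2 phi_xx, leapfrog discretization; the update
# for each grid point uses only its immediate neighbors -- the derivative is
# the "local" vehicle relating a point to the neighboring point
N, c, dx = 201, 1.0, 1.0
dt = 0.5 * dx / c                        # Courant number 0.5, stable
phi = np.zeros(N)
phi[N // 2] = 1.0                        # localized pulse at the center
phi_prev = phi.copy()                    # start at rest

n_steps = 40
for _ in range(n_steps):
    # nearest-neighbor Laplacian: the only coupling between points
    lap = np.roll(phi, 1) - 2.0 * phi + np.roll(phi, -1)
    phi_new = 2.0 * phi - phi_prev + (c * dt / dx) ** 2 * lap
    phi_prev, phi = phi, phi_new

# locality/causality: after n steps, nothing can be nonzero farther than
# n cells from the initial pulse
support = np.nonzero(np.abs(phi) > 1e-12)[0]
assert support.min() >= N // 2 - n_steps
assert support.max() <= N // 2 + n_steps
```

A hypothetical nonlocal update (say, coupling every point to every other point in one step) would violate this bound immediately, which is one way to see why locality is such a strong constraint.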
In fact, the principle of locality plays a very important role in formulating those equations because, as we will very soon see, when you have principle of locality, you can significantly constrain the theory you can write down. So that's a very, very powerful principle. Yes. AUDIENCE: Are there popular examples you think we might have seen of nonlocal equations? PROFESSOR: Yeah, you can have nonlocal equations, but it's believed that the fundamental equations in nature, they are all local, yeah. Yeah, and so far, the equations govern all fundamental interactions. All four different interactions in nature, the equations are local. Other questions? Yes. AUDIENCE: When you wrote down the Einstein's gravity thing, why did you pick out a particular time component t? PROFESSOR: Sorry? AUDIENCE: When you wrote down the G mu nu as a function of x and t, why did you pick at a particular time component? PROFESSOR: Say it again. I don't quite understand the question, yeah. AUDIENCE: Sure. You kind of treated the x and t asymmetrically when you write down the equation for Einstein's gravity. PROFESSOR: Yeah. AUDIENCE: Is there a particular reason why you wrote it in that way? PROFESSOR: Oh, no, no, no, here, I just want to emphasize again that, of course, in Einstein's theory these two are treated the same way. But if you think in terms of the fields, they actually have very different physical interpretation. So that's why I write this way. Yeah, it's the same way I write it like here, yeah, yeah, yeah, same reason, just here to emphasize for this-- to emphasize the different role played by x and t. Yeah, but I think you're asking a very good question. So I'm going to mention later in the relativistic series then, of course, then these two become the same, play the very equal role. Does that answer your question? AUDIENCE: Yeah. PROFESSOR: OK, good. Good, also yeah, so let me also very quickly mention the different types of fields. 
OK, so you can have what's called a scalar field. So these are quantities for which, at a given point, there's only one value, OK, say, for example, the temperature. So it's just a single quantity defined at the point, for example, the temperature, OK, et cetera, and maybe other quantities. And then you also have a vector field. E and B are examples of a vector field because at each point you have a vector, OK? So here I'm using the three-dimensional notations. But in the relativistic theories, you can also use the four-vector notations. Say, for example, the vector potential will have the following form. Say, A mu will be a four-vector, and the space and time I combine together into a four-vector, OK? And so at each point, you then have a four-vector, OK? And also you have tensor fields. So this metric is an example of a tensor field. Here you have two indices, OK? So you have many, many different components. So again, this is a relativistic notation. I can write it in the relativistic notation, OK? So at each point, now you have some object with two indices, OK? And you can also have-- later we will see you also have something called spinor fields, psi alpha. And this alpha is some other index, which we will define later, OK? So you can also have a so-called spinor, with alpha some other index, OK? And our convention is that mu is always from 0 to 3, OK? Then sometimes when I just write x, x just means a four-vector, means x mu, which is ct and x vector. So x vector always denotes a spatial vector, OK? And then this is the same as ct, xi, OK? So the spatial index is always denoted by i. And so partial mu will be the same as 1 over c partial t, together with the derivative, the gradient, in the spatial directions. And say, if you have a four-vector A mu, then again, we have the convention that it is A0 and the A vector, the same also as A0 and Ai.
I use the i to denote the spatial components. And this is the last time I will write the speed of light. So the c I will always take to be 1, and h bar will be taken to be 1, OK, just for notational convenience. You have questions? OK, so now, with this preparation, we can talk about the action principle for classical fields, OK? So first, recall in your classical mechanics, we introduce the action, which is the time integral of your Lagrangian. And the Lagrangian is a function of x, which is your variable, and x dot and the time, OK? And so this is the 8.01 one-dimensional particle motion. And you can also introduce momentum, canonical momentum, which is defined by partial L, partial x dot. And then you can also define a Hamiltonian. And the Hamiltonian is related to the Lagrangian by p x-dot minus L, from a Legendre transform. And the equation of motion is obtained by extremizing S, OK? So S is considered to be a functional of your trajectory. So whatever trajectory you have, you extremize this S, and then you get the equation of motion, OK? So now we can generalize to field theory, OK? So for field theory, from the principle of locality, the form of the Lagrangian L is significantly constrained, OK? Again, we can define S as a time integral of a Lagrangian, and L must have the following form: a spatial integral. So this is an integration over all spatial directions, OK, so this is d^3 x of something. So let me first write down the notation, and then I will explain the notation. OK, yeah, maybe just write partial i, OK? So let me just explain the notation a little bit. Well, first, here I use a shorthand notation to denote the fields. So phi a will be a function of spatial position and time, a general field, and a labels different fields, OK? So this index, a, labels different fields, OK?
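In symbols, the classical-mechanics recap and the locality-constrained field-theory Lagrangian just described read:

```latex
S = \int dt\ L(x, \dot{x}, t), \qquad p = \frac{\partial L}{\partial \dot{x}}, \qquad H = p\,\dot{x} - L,
\qquad\text{and for fields}\qquad
L = \int d^3x\ \mathcal{L}\big(\phi_a,\ \dot{\phi}_a,\ \partial_i \phi_a\big),
```

with the Lagrangian density \(\mathcal{L}\) depending only on the fields and their derivatives at a single spatial point.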
So for example, a can label different scalar fields. If you have multiple scalar fields, it can label them. And a can also refer to indices, space time indices, like A mu, and also, yeah, et cetera, OK? So a just labels whatever fields you have, OK? And then the second point: this L, this script L, is a function-- I emphasize here-- is a function of phi a and its derivatives, OK, and its derivatives. So in other words-- so this is a key point, OK-- this L only depends on the value of phi a and its derivatives at a single point, OK, say x. And then you integrate over all x, OK? So L is called the Lagrangian density. AUDIENCE: So these are all still local? PROFESSOR: Sorry? AUDIENCE: These are all still local? PROFESSOR: Sorry? Say it again. AUDIENCE: Still local? PROFESSOR: Yeah, yeah, yeah, yeah. Yes. AUDIENCE: Can you even have higher-order derivatives? PROFESSOR: Yeah, yeah, we'll mention that. Yeah, we'll mention that. So here I just explained the notation, OK? So now let me just make some remarks on why the Lagrangian must have this form. So first, the principle of locality implies there must only be a single integral, OK, because locality does not allow something like, say, a term which involves the phi at two different points, OK? Why? It's because if you have terms like this in your Lagrangian, OK-- oh, sorry, L, not script L-- so if your Lagrangian has this kind of terms, then, as we will describe later, you will see from the equation of motion that your equation of motion will not be local, OK? So the equation of motion will involve the behavior of your field at one point being influenced by some point far away, OK? And then you will not have locality. So locality is a significant constraint, because if you give up locality, then in principle your Lagrangian can be arbitrarily complicated.
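Schematically, the contrast being drawn is the following (the kernel K below is a hypothetical bilocal coupling, written only to illustrate what locality forbids):

```latex
L = \int d^3x\ \mathcal{L}\big(\phi_a(\vec{x},t),\ \partial\phi_a(\vec{x},t)\big) \quad \text{allowed},
\qquad
\int d^3x\, d^3y\ K(\vec{x},\vec{y})\,\phi(\vec{x},t)\,\phi(\vec{y},t) \quad \text{not allowed}.
```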
You can have as many integrals as you want, OK? But because of locality, you're only allowed to have such a simple integral, one integral of a function, OK? So this is the key. OK, so that is not allowed; here is the key point. So the second point is that we only allow first derivatives in time, OK? We only allow first derivatives in time. We don't allow the second derivative in time. So the reason is that, again, as you will see from the equation of motion, if you involve the second derivative in time in your action or in your Lagrangian, then when you get the equation of motion, you will get equations of motion involving more than two derivatives in time. So this restriction implies the equation of motion only contains two derivatives, two time derivatives, OK? So these constraints come from our experience. It's for the same reason that here we only include the first derivative in time, OK? It's because in real life all the experiments are determined by the initial conditions. For the initial condition, you only need to specify the location and the velocity, OK? You don't need to specify more. If the equation of motion involves more than two derivatives, then you need to specify more general initial conditions. And so yeah, so here it's the same thing. We only allow the first derivative in time. But you can, in principle, allow an arbitrary number of derivatives in the spatial directions, OK? But for simplicity, for most of the time, as we will see, we will restrict to quantum field theory in special relativity, OK? Meaning that the theory will be relativistically invariant, will be Lorentz invariant. And in a Lorentz invariant theory, space and time can transform into each other, play an equal role. So if you only have a single derivative in time, you should only have a single derivative in the spatial directions. So the examples we will see will all have only a single derivative in the spatial directions, OK? But certainly in nonrelativistic systems you can have a higher number of spatial derivatives, OK? Good?
Any questions on this? OK, good. So now, as in classical mechanics, we can introduce the canonical momentum, Hamiltonian, et cetera, OK? So here we can introduce the so-called canonical momentum density. The reason we call it a density will be clear. So remember, for this phi, x is just a label, OK? So you can just view this theory as essentially having an infinite number of such degrees of freedom. Yeah, it's an unfortunate notation; there we used x as a dynamical variable, OK? But here the x is only a label, OK? Here x is a label. So here you can just imagine you have an infinite number of degrees of freedom, each one labeled by x, OK? So now just imagine here you have many, many x; you just have some labels for them. And so for each such one we can introduce its momentum, OK? So for each phi a we can introduce its canonical momentum, OK, defined as the derivative of the Lagrangian density with respect to the time derivative of phi_a, OK? So this is just the direct generalization of here. So remember, this is evaluated at each point, OK, in the spatial directions. So this thing will also depend on x and t, OK? AUDIENCE: Sorry, what symbol is that you're using to represent the momentum? PROFESSOR: Oh, this is just the capital Pi, capital Pi. OK, and this phi a dot is just the time derivative of phi a, OK? And remember, x are the labels, OK? So x does not do anything here. So the reason we call this the momentum density is because script L is the Lagrangian density, OK. And then we can also define the Hamiltonian density the same way, associated with each degree of freedom. So we define the script H by phi_a dot Pi_a-- again, this is all defined at the same spatial point-- minus script L, the Lagrangian density. And then the Hamiltonian you just integrate over all space, essentially summing over all the degrees of freedom, OK? Just leave them at that. Yes?
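Written out, the densities just introduced are (repeated index a summed, everything evaluated at the same spatial point):

```latex
\Pi_a(\vec{x}, t) = \frac{\partial \mathcal{L}}{\partial \dot{\phi}_a(\vec{x}, t)},
\qquad
\mathcal{H} = \Pi_a\,\dot{\phi}_a - \mathcal{L},
\qquad
H = \int d^3x\ \mathcal{H}.
```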
AUDIENCE: Is it, so are we implicitly summing over the little a's for the Hamiltonian? PROFESSOR: Yeah, exactly. Yeah, yeah, it's just like you sum-- so if you treat the-- yeah, yeah, yeah, yeah, so the a is summed. So I will always-- yeah, I forgot to mention-- good, this is a great question. So I always assume the Einstein convention in the sense that all the repeated indices are assumed to be summed. So no matter how many components there are, you just sum over a. So a is summed here. Good? Other questions? Good? So now we can talk about the equation of motion. So it's the same thing as classical mechanics. So classical field theory is just like classical mechanics generalized to an infinite number of degrees of freedom. So now with each point in space you associate some degrees of freedom. OK, so now let's look at the equation of motion. OK, so again, you just do the variation of the action, extremize the action, set it to 0, OK? So the action is, again, just defined by the time integral of the Lagrangian, OK? So S is the integral dt of L and then can be written as the four-dimensional integral, dt and d^3 x, of the density, OK? So now let me just write it in a more relativistic notation, just assuming there are only first time and first spatial derivatives, OK? So the mu-- now, yeah, I combine the space and the time derivatives together, OK? And you can straightforwardly generalize to, say, involving more spatial derivatives. Good? So here, so let's just do the variation. So we want 0 equal to delta S. So you want the variation of this S to be 0. So now let's just vary S. So you have d4 x. Now, just remember this Lagrangian density is just an ordinary function-- because of locality, just an ordinary function of phi_a and its derivatives, OK? And we can just do the variation in a straightforward way. We can just write partial L partial phi a times delta phi a, plus partial L partial (partial mu phi a) times delta (partial mu phi a), OK?
And in this case, the delta is a variation and the partial mu-- these two operations are independent of each other. So you can exchange them, OK? So you can put the delta inside. So this is the same as partial mu delta phi_a, OK? So now you can integrate by parts the second term, OK, because here it is a little bit awkward, involving the partial mu of delta phi a. So again, here the repeated indices are all summed, OK? So you should sum them. So now we can do integration by parts here. So then you get the d4 x, and you get partial L partial phi a minus partial mu partial L partial (partial mu phi a), OK? And then the whole thing times delta phi a, plus boundary terms. So the boundary terms come from when you do integration by parts here, OK? You're doing integration by parts here. So the boundary term will be proportional to delta phi a, OK? Yes. AUDIENCE: So could you define delta? PROFESSOR: Delta's just the arbitrary variation, yeah, just an arbitrary variation. Other questions? OK, so the boundary-- so we always assume we have the boundary conditions, OK, so that the boundary term vanishes, OK? So here I will not go into detail. So we always choose delta phi a so that the boundary terms vanish, OK? The term comes from integration by parts, OK? And second, just remember repeated indices are summed. OK? And now we don't have to worry about boundary terms. So the boundary term vanishes. Then now we just have this has to be 0. But this has to be 0 for any variation. So this delta phi a can be any function of x, OK? Can be any function of x. The only way this can happen is that this prefactor must vanish, OK? So this implies that partial L partial phi a minus partial mu partial L partial (partial mu phi a) must vanish, OK? So this is the general equation of motion for classical field theory, OK? Good? Any questions on this? OK, so from now on, we will make two restrictions, OK?
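The variation just carried out, written in one line, is:

```latex
0 = \delta S = \int d^4x \left[ \frac{\partial \mathcal{L}}{\partial \phi_a}\,\delta\phi_a
+ \frac{\partial \mathcal{L}}{\partial(\partial_\mu \phi_a)}\,\partial_\mu \delta\phi_a \right]
= \int d^4x \left[ \frac{\partial \mathcal{L}}{\partial \phi_a}
- \partial_\mu \frac{\partial \mathcal{L}}{\partial(\partial_\mu \phi_a)} \right] \delta\phi_a
+ \text{boundary terms},
```

and since the variation \(\delta\phi_a\) is arbitrary (and the boundary terms vanish), the equation of motion is

```latex
\frac{\partial \mathcal{L}}{\partial \phi_a} - \partial_\mu \frac{\partial \mathcal{L}}{\partial(\partial_\mu \phi_a)} = 0.
```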
We make two restrictions just for simplicity. Yeah, most of our discussion can be generalized beyond those situations. Yeah, so the first is that we restrict to field theories which are translationally invariant, OK? Meaning that there's no special spacetime point in this theory, OK? If you do an experiment here in Boston, it's the same as if you do the experiment in Washington DC, et cetera, OK? And if your theory is not translation invariant, then when you do an experiment in Boston, it will be different from when you do it in, say, New York. And the second is that we assume it's Lorentz invariant. So in some condensed matter applications, you don't have Lorentz symmetry. But yeah, similar techniques to what we are going to talk about can be applied. And we will also elaborate a little bit more on both of these aspects. OK, so now let me give you some simple examples of classical field theories, OK, which satisfy these two conditions. Examples, so the first one is the Maxwell theory. So for Maxwell's theory, the dynamical variable is this 4-vector potential A mu, OK? So now I'm using the four-vector notation. So I will often suppress the indices on x. That means this is a 4-vector, OK? And then from the A mu we can define the field strength. And the E and B then can be obtained from this F mu nu, OK? And then the action for the E&M can be obtained by-- OK, that's this, OK? So this is the action for the E&M without charged matter, OK? So if you do the variation here, OK, to get the equation of motion, then you get the Maxwell equations without sources, OK? You get the vacuum Maxwell equations. And so you see that this action has the form we mentioned here, corresponding to a local theory, OK, a single integral over spacetime, OK?
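The Maxwell action referred to on the board is the standard one:

```latex
F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu,
\qquad
S = -\frac{1}{4} \int d^4x\ F_{\mu\nu} F^{\mu\nu},
```

and varying with respect to \(A_\nu\) gives the vacuum Maxwell equations \(\partial_\mu F^{\mu\nu} = 0\).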
And the second example, Einstein gravity, is another classical field theory, OK? So let me just write down the action. So this is written with G Newton; g is the metric and R is the Ricci scalar. And if you don't know Einstein's theory, that's OK. I just write it down to show that this is a local theory, involving only a single spacetime integral of some quantities. And the simplest field theories are called the scalar field theories, because-- I think I erased it-- this one involves a vector field, OK? So Maxwell's is a vector field theory because at each spacetime point there's a vector. Here, in Einstein gravity, the dynamical variable is a tensor. So it's something even more complicated. So the simplest case is a scalar, for which at each spacetime point you just have a single-valued quantity, OK? And so let's just consider the simplest case, a real-valued scalar field phi, OK? So this just takes a real value. You can also consider a complex-valued one, OK? But the simplest is just real-valued. So at each spacetime point you have a single value, OK? And so that's the dynamical variable. And in real life there are many examples of scalar fields. So for example, the Higgs, OK, the Higgs field, which is the celebrated Higgs field, which was discovered a number of years ago, is a real scalar field. And also pions, OK; the pion particle can be considered as an excitation of a scalar field. Anyway, the simplest family of scalar field actions has the following form, OK? So here I just write this action down. But essentially, you can argue this is the most general action you can write down which is compatible with locality, translation symmetry, and Lorentz symmetry, OK? So here you see the Lorentz indices in the derivatives are contracted. So that guarantees the theory is Lorentz invariant. And V is just some function of phi.
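The two actions being written on the board are, in their standard forms (mostly-plus signature, matching the scalar-field conventions used below):

```latex
S_{\text{EH}} = \frac{1}{16\pi G_N} \int d^4x\ \sqrt{-g}\ R,
\qquad
S_{\text{scalar}} = \int d^4x \left[ -\frac{1}{2}\,\partial_\mu \phi\,\partial^\mu \phi - V(\phi) \right].
```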
V here is just some function of phi. So this is essentially the simplest theory you can write down based on locality, OK? So we can take V(phi), in general, to be a polynomial, yeah, just some function of phi, OK? So when you have translation symmetry, meaning that in this theory if you do an experiment it's the same everywhere, that implies that all parameters in V(phi) must be constant, OK? They cannot depend on spacetime, OK, they must be constant. So if you have some parameters depending on spacetime, then of course different spacetime points will behave differently, and the theory will not be translation invariant, OK? So let's try to work out this theory in a little more detail. So given this Lagrangian density, let's try to find its momentum density and its Hamiltonian. Here there's only a single field, so there's no index. So Pi of x would be the derivative of this. So this thing inside the bracket is script L, and we take partial L partial phi dot, OK, the time derivative of phi. And if you look here, the only time derivative is in phi dot squared, OK, using this four-vector notation. And if you expand it, then the momentum density is just phi dot of x, OK, because partial mu phi partial mu phi is equal to minus phi dot squared plus the gradient squared, OK? And then you can find the Hamiltonian density, Pi phi dot minus L, and then you find what this is given by-- we express it in terms of the momentum, OK? And the equation of motion, if you apply the general formula here, then you find it is given by partial squared phi minus partial V partial phi equal to 0, OK? And again, here I'm using a shorthand notation. Partial squared is defined to be partial mu partial mu, OK? So this is the same as minus partial t squared plus the spatial Laplacian, OK? So here let me also give you some simple examples of the V.
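The scalar-field workout just described, in formulas:

```latex
\partial_\mu\phi\,\partial^\mu\phi = -\dot{\phi}^2 + (\vec{\nabla}\phi)^2
\ \Rightarrow\
\Pi = \dot{\phi},
\qquad
\mathcal{H} = \frac{1}{2}\Pi^2 + \frac{1}{2}(\vec{\nabla}\phi)^2 + V(\phi),
```

with the equation of motion

```latex
\partial^2 \phi - \frac{\partial V}{\partial \phi} = 0,
\qquad
\partial^2 \equiv \partial_\mu \partial^\mu = -\partial_t^2 + \vec{\nabla}^2.
```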
In the simplest case, you could take V to be just a constant. But a constant does not do anything, so the simplest nontrivial case is just V(phi) equal to a linear term, f times phi. And from translation symmetry, this f has to be a constant, OK? It cannot depend on spacetime. If it depended on spacetime, then you would violate the spacetime translation symmetry, OK? So when you have this, then your equation of motion is given by partial squared phi equal to f, OK? f is a constant. So in this case, essentially you have some kind of constant which is a source for this phi. So in this case, we say the system has an external force. OK, we call this f an external force because if f is nonzero, then phi cannot be 0, OK? Phi has to be nonzero. And so in a sense, the phi will always be excited; a nonzero phi will always be generated if f is nonzero, OK? So here we call it the external force. And normally we are interested in the situation where there is no external force, OK? The system develops on its own. So in that case, if we forget about the external force, then the simplest situation will be V equal to a quadratic, one-half m squared phi squared, OK? So that's the next simplest function of phi. And m squared has to be a constant, OK, again from the translation symmetry. So in this case, then you have the equation of motion, partial squared phi minus m squared phi equal to 0. So this is a very famous equation. So this is called the Klein-Gordon equation. And next time you will see very quickly why this is a famous equation, OK? And yeah, so this corresponds to the simplest field theory we can consider. And then you can consider more complicated field theories with V(phi) some higher polynomial or more complicated functions.
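The Klein-Gordon equation, in the conventions above, reads \(\partial_t^2\phi = \vec{\nabla}^2\phi - m^2\phi\), and it can be checked numerically. A minimal sketch (my own illustration, not from the lecture): a 1+1-dimensional leapfrog solver on a periodic grid, verifying that a plane wave propagates with the relativistic dispersion relation omega = sqrt(k^2 + m^2); the grid size and step are arbitrary numerical choices.

```python
import numpy as np

def evolve_kg(m=1.0, k=1.0, L=2 * np.pi, N=256, dt=0.01, T=1.0):
    """Leapfrog evolution of the 1+1D Klein-Gordon equation
    d^2 phi/dt^2 = d^2 phi/dx^2 - m^2 phi on a periodic grid.
    Returns the max deviation from the exact traveling-wave solution."""
    x = np.linspace(0.0, L, N, endpoint=False)
    dx = L / N
    omega = np.sqrt(k**2 + m**2)           # relativistic dispersion
    phi = np.cos(k * x)                    # phi(x, t = 0)
    phi_prev = np.cos(k * x + omega * dt)  # phi(x, t = -dt) for the plane wave
    nsteps = int(round(T / dt))
    for _ in range(nsteps):
        # periodic second spatial derivative
        lap = (np.roll(phi, -1) - 2 * phi + np.roll(phi, 1)) / dx**2
        phi, phi_prev = 2 * phi - phi_prev + dt**2 * (lap - m**2 * phi), phi
    t = nsteps * dt
    exact = np.cos(k * x - omega * t)      # exact plane-wave solution
    return np.max(np.abs(phi - exact))

print(f"max deviation from exact plane wave: {evolve_kg():.2e}")
```

If the wave did not obey omega = sqrt(k^2 + m^2), the numerical and exact profiles would drift out of phase and the deviation would be of order one rather than tiny.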
But keep in mind that this quadratic case is special. Because V is quadratic, this is a linear equation. But whenever you have something which is not linear or quadratic, you will get a nonlinear equation. And then the story becomes complicated, OK? Nonlinear equations are always complicated. And anyway, so this will be the simplest theory we will consider, OK? OK, so let's stop here.
MIT_8323_Relativistic_Quantum_Field_Theory_I_Spring_2023 | Lecture_7_Interacting_Theories_and_SMatrix.txt

[SQUEAKING] [RUSTLING] [CLICKING] HONG LIU: OK. Good. So last lecture, we talked about the property of this propagator, G plus of x, x prime, which is the vacuum expectation value of phi x, phi x prime. OK? So this object, we discussed. So conceptually, you have two interpretations. So one is from here. So this roughly can be heuristically interpreted as the transition amplitude for a particle from x prime to x. OK, so heuristically, you can think of it, say, if you have a space-time point x prime and another point x. And then you ask, what's the transition amplitude from x prime to x? OK, so one way to interpret this object is that. And as I emphasize, this is only heuristic, because this does not really describe an exactly localized state. It's only an approximately localized state, OK? And similarly, the second interpretation is the correlation function. It describes correlations of phi, just correlations between the value of phi at x prime and the value of phi at x. OK, so this is the interpretation most convenient, for example, from a condensed matter perspective if your phi, say, describes average spins. OK. So similarly, the G minus, which is defined to be in the opposite order, also has two interpretations. So this is like a transition from x to x prime. Yeah, so if I draw x and if I still draw x that way, so it's like a transition amplitude like this, from x to x prime. OK? So you can similarly interpret the others using this language, for example, this Feynman propagator, which is theta of t minus t prime times G plus of x, x prime, plus theta of t prime minus t times G minus. And then this means that when t is greater than t prime-- say, for now, let's take this to be the direction of time going up-- and now if the x has a later time than x prime, then it describes the transition from x prime to x. OK?
But if the x prime has a larger time, and x has a smaller time, then it describes the opposite direction, from x to x prime. But in both cases, you always go from the smaller time to the bigger time. OK? And so we often call this GF the time-ordered correlation function. So the GF is also called the Feynman Green's function. It's also called the time-ordered correlation function. And then we wrote down the expression. So we calculated the expression explicitly for G plus. And we also almost finished the expression for-- yeah, similarly, you can do retarded, et cetera. And also, so any questions on this? Yes? AUDIENCE: So on the Pset, we showed that for the complex scalar field, phi x phi x' is 0. HONG LIU: Yeah, yeah. AUDIENCE: So what does that mean? There's no commutator? HONG LIU: Yeah, yeah, yeah, because of the charge conservation. And you cannot propagate from one particle to the other particle. So because phi, when it acts on the 0, generates, say, a particle. But when phi acts on the left, it actually generates the antiparticle. So a particle cannot become an antiparticle, right? So only when you have phi and the phi dagger, then it's just the particle. Yeah. Other questions? OK, good. So we also discussed-- yeah, so we wrote down this formula for R, A, F, say, in coordinate space. So G of x, x prime can be written as a Fourier transform. I think it's k squared plus m squared. And let me just check the sign. I think it's i or minus i. Yeah, minus i. So right, OK? And then we mentioned that this factor actually has poles. In the denominator, these have the form minus omega squared plus omega k squared if you write it explicitly. And then this quantity becomes singular at-- so singular at omega equal to plus-minus omega k. OK? And the singularity lies on the integration contour if we consider the complex omega plane. And so here, one of the integrals is over omega. And the omega contour goes along the real axis.
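Collecting the definitions being recalled here (a summary of mine, in standard notation; the overall factors of i in G_R and G_A depend on convention, so only proportionality is written):

```latex
G_+(x,x') = \langle 0|\phi(x)\phi(x')|0\rangle, \qquad
G_-(x,x') = \langle 0|\phi(x')\phi(x)|0\rangle,
G_F(x,x') = \theta(t-t')\,G_+(x,x') + \theta(t'-t)\,G_-(x,x'),
G_R \propto \theta(t-t')\,\big(G_+ - G_-\big), \qquad
G_A \propto \theta(t'-t)\,\big(G_- - G_+\big).
```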
So the integration goes along the real axis. And here is minus omega k, here is omega k. So you have singularities on the integration contour. And then different ways of going around the singularities give you the different choices of retarded and advanced and Feynman. And which contour you should choose, whether you should go above or below the singularities, depends on the prescription for each of them. So we discussed that for G R, which is proportional to theta of t minus t prime-- yeah, so let me just draw the singularities-- that means you actually need to change your contour to go above both singularities. OK, so that will guarantee the result is proportional to theta of t minus t prime when you do the omega integral using this contour. And similarly, for G A, which is proportional to theta of t prime minus t-- in order to guarantee that, you have to take your contour to be below the singularities. So this is the real omega axis. And for the Feynman-- so the Feynman is half and half. So there's a theta t minus t prime part and there's a theta t prime minus t part. OK, if you try to check that-- we're not going to detail it here; we'll give you an exercise for yourself. So if you want to satisfy that kind of condition, that t greater than t prime gives you G plus and t smaller than t prime gives you G minus, when you do the contour integration, you have to do it the following way: for this one, you go below, and for this one, you go above. And then there's another way, which is G F tilde, which is the opposite of G F in terms of time-ordering. And then you will do the opposite over here. OK, so altogether, there are four different ways you can go around the singularities, which give you four different kinds of functions.
OK, they give you four different functions. So for those of you who are not familiar with complex analysis or contour integrals, try to refresh yourself a little bit. Because complex analysis is definitely a very, very powerful tool. To be very familiar with it is very important. OK, so if each time you have to specify a contour like this, it's not convenient. And physicists developed a more convenient way to treat all of them in one shot. OK, so instead of going around the contour like this, we just try to move the singularity. OK, so we keep the contour fixed, but we slightly move the singularity. OK, for example, for the retarded, this is the same if I just move the singularities by a tiny bit below the real axis. So if my integration contour is like this, then this is equivalent to that. If this is just a tiny bit, then it won't affect your answer. OK, and similarly here, in this case, we just move both singularities up a little bit. And in this case, I move one singularity up and move one singularity down, in which case I don't have to change my contour, just slightly move my singularities. And this can be achieved by the so-called i epsilon prescription. So a convenient trick is, in the retarded case, for example-- so we always do the integral with the same contour. So we don't change the contour, but we slightly change the denominator to be minus (omega plus i epsilon) squared plus omega k squared. OK, so the denominator previously was minus omega squared plus omega k squared. But now, I add a tiny piece-- so epsilon is infinitesimal. It's positive and infinitesimal. So you see, by adding such a tiny piece here, when you solve for the singularity, it's now omega plus i epsilon equal to plus-minus omega k. And then move the epsilon to the other side, and then the solution becomes plus-minus omega k minus i epsilon.
And then because epsilon is positive, this is slightly in the lower half plane. And similarly, for the advanced, you just do a minus, OK? And then both singularities will move up. And for the Feynman, you can do a similar thing. Sorry, I still have the i; so this is the minus i. So I just take k squared plus m squared, the whole thing minus i epsilon, rather than changing the omega. And you can check yourself: when you solve for omega using this-- the whole thing minus i epsilon, in this case-- so for retarded and advanced, you put it inside with the omega, but here, you put it outside, OK? And then you can check, when you solve for omega, you actually get this situation: the negative-frequency pole moves up and the positive-frequency one moves down. OK? Yes. AUDIENCE: Can you outline how I would convince myself that these two pictures are equivalent? Are they going-- moving, integrating with the contour that jumps out over or under these poles for-- HONG LIU: Yeah, yeah. Yeah, because from the contour integration point of view, the precise shape of the contour is not important. It only matters which singularities it encloses. So as long as it encloses these two singularities the same way, the story is the same. But since I changed the singularity locations only by epsilon-- and in the end, I will take this epsilon to 0-- the value will be the same. Other questions? And so this is a very, very simple, but very powerful trick. This makes things much more convenient when you do many manipulations, so that you don't have to constantly worry about the precise contour you take. Any other questions? Good. So let me just give some final remarks. So this almost concludes our discussion of the free field theory. So let me just give some final remarks. So as I mentioned, the G plus and the retarded G R have many applications, say, in condensed matter.
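The retarded prescription can be sanity-checked numerically. A minimal sketch (my own illustration, not from the lecture): integrate e^{-i omega t} / (omega_k^2 - (omega + i epsilon)^2) along the real axis with a small finite epsilon, and confirm that the result vanishes for t < 0 (both poles sit below the contour) while for t > 0 it reproduces e^{-epsilon t} sin(omega_k t) / omega_k; the cutoff and grid are arbitrary numerical choices.

```python
import numpy as np

def retarded_green(t, omega_k=1.0, eps=0.05, cutoff=400.0, n=800_001):
    """Evaluate G_R(t) = int dw/(2 pi) e^{-i w t} / (omega_k^2 - (w + i eps)^2)
    along the real axis; the +i eps pushes both poles below the contour."""
    w = np.linspace(-cutoff, cutoff, n)
    dw = w[1] - w[0]
    integrand = np.exp(-1j * w * t) / (omega_k**2 - (w + 1j * eps) ** 2)
    return np.sum(integrand) * dw / (2 * np.pi)

# Closing the contour below (t > 0) picks up both poles:
#   G_R(t) = e^{-eps t} sin(omega_k t) / omega_k.
# Closing above (t < 0) encloses no poles, so G_R vanishes.
for t in (2.0, -2.0):
    print(f"t = {t:+.1f}: G_R ~ {retarded_green(t).real:+.4f}")
```

Repeating this with the poles shifted the other way (eps negative) would instead give a function supported at t < 0, the advanced Green's function.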
So you deal with them all the time. And then this Feynman propagator, we will see, plays a very important role in the next chapter, which we will start in the next few minutes. So the G F will play a very important role. And so in addition to these two-point functions-- so these are the correlation functions of two points, OK, just the x and x prime-- you can, in principle, consider more general correlation functions. Say, you can consider an arbitrary state. Say, for example, you can just consider some state psi, and then you can consider phi x1, phi x2, say, phi xn. So this is some n-point function, yeah, what's normally called an n-point function, in some general state psi. Psi does not have to be the vacuum. So a general state-- remember, a general state psi can all be built from a and a dagger. And the phi can also be written this way, in a and a dagger. So if you give me a state, I can, in principle, calculate any such correlation function. OK, so this theory is fully solved. And no matter what you want me to compute, using what we have developed so far, you can now compute it. So this theory, everything is computable. And so now, let me just mention one simple fact regarding the vacuum correlation functions. So let's consider the n-point function in the vacuum. And then you can easily convince yourself this factorizes into sums of products of two-point functions. So this is simple because a and a dagger have to be paired. Because anything an a dagger creates out of the vacuum has to be annihilated by an a somewhere later. So that means all the a and a dagger have to be paired. And then that means that they have to reduce to the two-point functions. OK, so you can check this explicitly. So for example, the simplest is the four-point function. So if you consider phi x1, phi x2, phi x3, phi x4, OK?
So you can easily convince yourself that when the a dagger of this one creates something, that can be, say, annihilated by any of the others. And similarly, what the a dagger of this one creates has to be annihilated by some a later. So you have to always pair them, OK? So you can show that this just reduces to all possible pairings. So you can pair 1 with 2 and 3 with 4, or pair 1 with 4 and 2 with 3, or 1 with 3 and 2 with 4. OK, you just write down all possible pairings. So in this free theory, any n-point function in the vacuum just reduces to the two-point function. If you know the two-point function, you know everything. OK, you know everything. So this, I will give you as an exercise to check yourself. And then the final remark is related to problem 3, I think, in your Pset. So in addition to phi, we can also consider more complicated operators. We can consider so-called composite operators, meaning that we take products of phi or of its canonical momentum. So we can consider, say, phi squared, phi cubed, say, the Hamiltonian density, the stress tensor. So these all involve some product of phis or their canonical momenta. And so these are normally called composite operators. And as you should have learned from your Pset problem 3, those operators often are not well defined. Because when you multiply them at the same point, you will get singular behavior. And you really need to renormalize them, et cetera, OK? And so such kind of divergent behavior is generic in quantum field theory. And one of the most important parts of quantum field theory is to find sensible ways to renormalize those divergent quantities in a physically sensible way. And problem 3 is the first example you encounter. And later, when we deal with interacting theories, there are more complicated examples, et cetera.
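This factorization into pairings is Wick's theorem, and it has a classical Gaussian analogue that is easy to check numerically, as the professor hints below when comparing to Gaussian statistical systems. A minimal sketch (an illustration of mine, not the operator computation): for zero-mean jointly Gaussian variables, the fourth moment equals the sum over the three pairings of covariances, C12 C34 + C13 C24 + C14 C23; the covariance matrix and seed are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# An arbitrary positive-definite 4x4 covariance matrix (hypothetical choice).
A = 0.4 * rng.normal(size=(4, 4))
C = A @ A.T + np.eye(4)

# Draw zero-mean Gaussian samples (phi_1, ..., phi_4) with covariance C.
samples = rng.multivariate_normal(np.zeros(4), C, size=1_000_000)

# Monte Carlo estimate of the "four-point function" <phi1 phi2 phi3 phi4>.
four_pt = np.mean(samples[:, 0] * samples[:, 1] * samples[:, 2] * samples[:, 3])

# Sum over the three pairings of two-point functions (Wick / Isserlis theorem).
wick = C[0, 1] * C[2, 3] + C[0, 2] * C[1, 3] + C[0, 3] * C[1, 2]

print(f"sampled four-point: {four_pt:.4f}")
print(f"sum over pairings:  {wick:.4f}")
```

The two numbers agree up to Monte Carlo noise; in the free quantum theory the same combinatorics holds with C replaced by the two-point function G.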
OK, so do you have any questions? Yes? AUDIENCE: So I'm a little confused on how to interpret this n-point correlation function. Because for the two-point, we have this heuristic transition amplitude. HONG LIU: Right. AUDIENCE: But how do you interpret this, especially in theories where it's not a free scalar field, where you can't decompose them like this-- HONG LIU: Right. AUDIENCE: --by being expressed in terms of ladder operators? HONG LIU: Yeah, yeah. So just heuristically, you can imagine-- say, you can imagine these correlations. So, say, you make measurements at different points. And you study some kind of joint amplitude of them. Yeah, so this is one way to interpret this quantity. AUDIENCE: Like, the value of the field? HONG LIU: Yeah, yeah. For example, you can imagine they are spin operators. Say phi denotes some kind of spin, and then you can imagine you measure them at different spacetime points. And you look for their correlations. Yeah, so this is one example. But later, we will see-- we will actually almost see immediately-- yeah, just today, I think we will reach there. And you will see this actually is related to the scattering amplitudes when we talk about interacting theories. Yeah. Other questions? OK, good. Good. So try to enjoy problem 3 in your Pset. And after doing it, really read the problem again, OK? Read the problem again. That problem contains-- yeah, just the simplest situation for the divergences you will see in quantum field theory. And to be prepared for that is important. Yeah, even though we will not directly use the result of problem 3 in the immediate future, I think it's important as part of your education to get used to the infinities and the divergences in quantum field theory. OK, good. Other questions? Yes?
AUDIENCE: So I guess over here, everything on the left-hand side is in terms of these Green's functions, which could have been solved classically without doing any quantization. So is there a reason why the correlator in the quantized field theory is the same as the correlator in the classical theory? HONG LIU: Yeah, just because it's a very simple-- this is a very simple theory. Yeah. Yeah, but even in this case, it's actually-- yeah, it depends on how you interpret the behavior of those functions. Even though this form, you might say, oh, maybe we could anticipate this. Say, if you have a Gaussian statistical system, you would anticipate this. Just say you only have Gaussian fluctuations-- you only have Gaussian correlations and nothing else. But the specific form of the functions, they do encode the quantum physics. Even though this factorized form is like in-- yeah, like in a Gaussian statistical theory. Yeah. Good? OK, so so far, we discussed the simplest free theory, just the theory of free scalar particles. They don't have spin. They have spin 0, so the field doesn't carry any spacetime indices. And they have some mass, relativistic particles. And later, we will talk about electrons, which have spin 1/2, and talk about photons, which have spin 1. But before talking about them, let's talk a little bit about interactions. Because so far, this theory is absolutely boring. Even though this theory illustrates the very important conceptual point that somehow when you quantize the fields you can get particles-- OK, you can get arbitrary numbers of particles, so it's a connection between fields and particles-- just in terms of a theory of particles, this theory is absolutely boring. There's nothing-- the particles don't do anything. They just go straight, which is Newton's First Law. They just go straight like Newton's First Law. And so now, let's try to add a little bit of fun, to introduce a little bit of interaction.
And you will see that when you introduce a little bit of interaction, the story becomes much richer. So now, let's talk about interactions. It turns out, to describe interactions, a very powerful approach is the path integral. OK, so we will also discuss the path integral. So we will use the path integral approach to describe how to treat interactions. And yeah, so first, let's just make some general remarks on interacting theories. So previously, we considered theories with a Lagrangian density of the following form. And we found it's a theory of free particles. And now, we can easily make this a theory of interacting particles if we just add some higher-order powers. So for example, one of the simplest is phi to the power of 4. So yeah, you can also add phi to the power of 3, but phi to the power of 3 is a little bit sick. It's because phi cubed doesn't have a definite sign. It doesn't have a definite sign, and so phi cubed, say, does not have a well-defined energy. Your energy can go arbitrarily negative. And so this is the simplest interacting theory with a well-defined energy. So we will write this as L0, which is the free theory part, which we can see there, and then the interacting part, OK? This lambda over 4 factorial, phi to the fourth. And this lambda-- essentially, heuristically, this lambda should characterize the importance of this term. So if lambda goes to 0, of course, then this term goes away, and then this term is not important. And when lambda becomes bigger, then this term becomes more and more important. And so we call lambda the coupling, which characterizes the importance of this term. So now, first, let's follow what we were doing before for the general quantization procedure. We try to write down the classical equation of motion, and then we solve for the most general classical solutions-- the most general solutions of the classical equations.
And then we promote those solutions to operators, right? Yeah, so let's try to do this one. So now, you write down the equation of motion-- so the equation of motion is very simple. You just get partial square phi, m square phi. So this is the free theory part. And now, you have a nonlinear part, the cube, OK? So now, you have a nonlinear equation. So who can solve this equation? Any volunteers? It turns out that nobody has been able to solve this equation so far. OK? So we don't know how to solve it. And so our previous strategy, even though it sounds very powerful, immediately breaks down, OK? Because it relies on your being able to actually find the solutions to this equation of motion. And yeah, so we don't know how to solve this. But heuristically, you can imagine that this, indeed, should be a theory of interacting particles. Because previously, when you didn't have this term, you just had a linear equation, and then the phi is not doing anything. OK, now, you have a phi cubed term. So that means that the different phis can now do something together, OK? And when different phis do something together, by definition, that's called interactions. OK, that's called interactions. And so just heuristically, this should already give you a theory of interacting particles. OK. So we will talk about how to treat such a theory. Since this cannot be solved exactly, we can only try to solve it approximately. So the only way we know how to solve it approximately so far is to imagine that lambda is small. And then you just expand in lambda perturbatively. Just treat lambda as a small parameter, and you expand in lambda. And so I will outline this procedure in a little bit. But before that, let me just make some general remarks, which apply to general interacting theories, of which this is just one example.
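In symbols, the split of the Lagrangian and the resulting nonlinear equation of motion read as follows (a reconstruction in a common sign convention, which may differ from the one on the board):

```latex
\mathcal{L} = \mathcal{L}_0 + \mathcal{L}_{\mathrm{int}},
\qquad
\mathcal{L}_{\mathrm{int}} = -\frac{\lambda}{4!}\,\phi^4 ,
% varying the action then gives the nonlinear equation of motion:
\partial_\mu \partial^\mu \phi + m^2 \phi + \frac{\lambda}{3!}\,\phi^3 = 0 .
```

No exact solution of this equation is known, which is why the treatment proceeds perturbatively in powers of lambda.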
So actually figuring out interactions from experiments-- it's actually one of the main tasks of particle physics and many areas of condensed matter physics. OK, and so for example, we build the LHC at CERN and you collide these protons together, billions of dollars. The whole purpose is to figure out the interactions between the particles-- between the elementary particles-- and to verify them, et cetera. And similarly with many condensed matter experiments. So now, I'll ask you a simple question. What's the most powerful way, experimentally, to probe the interactions between particles? What is the word to describe that kind of experiment? What? AUDIENCE: Scattering. HONG LIU: Yeah, scattering. So you should already have learned this in quantum mechanics. So scattering is the key thing, OK? So that's essentially the universal approach we have been using for more than 100 years, starting from Rutherford, who first shot the helium atoms at the-- who first shot electrons at the helium atom. OK? No, no, no, no, no. Shot the alpha particles at the-- alpha particles at atoms. And so the scattering-- so you have Rutherford, and then you have the so-called deep inelastic scattering, DIS, which figured out that the proton actually has substructure, which led to the discovery of quarks. And actually, MIT played a key role-- Friedman and Kendall-- in the DIS experiment, which figured out the structure-- which discovered the quarks. Yeah, they played the key role in that. And still, many, many interactions and particles were discovered this way. And so the basic idea of scattering-- essentially, it's very simple. You just collide a bunch of particles, OK? And just examine the outcome. So from the outcome, you try to deduce what are the interactions between those particles. OK, so that's what the scattering experiment is about. So from here, you deduce the interaction, which has always been very successful.
So in condensed matter, you have neutron scattering, et cetera. X-ray-- you can use X-rays and neutrons to shine on your samples. And one of the most important observables for the scattering experiment is what? Again, this is a simple question of quantum mechanics. AUDIENCE: Momentum? HONG LIU: Hmm? AUDIENCE: Momentum? HONG LIU: Sorry? AUDIENCE: Momentum? HONG LIU: No, momentum is not that-- AUDIENCE: And cross section. Cross section. HONG LIU: Cross section is close, but there's a more fancy word. AUDIENCE: [INAUDIBLE] HONG LIU: Hmm? AUDIENCE: Differential cross section? HONG LIU: That's fancier, but not fancy enough. It's called S-matrix. AUDIENCE: No, I said S-matrix, but I didn't say it loud enough. HONG LIU: Oh, OK. Yeah, just be brave. Be braver next time. So one of the key observables for scattering is the so-called S-matrix. OK, to define the S-matrix, we need a little bit of mathematical idealization, just as always in physics. So the mathematical idealization, in order to define the S-matrix, is the following. At t equal to minus infinity, prepare the initial state-- yeah, prepare the initial state as localized, say, wave packets infinitely far apart. So this is the mathematical idealization part, OK? We want them to be infinitely far apart. But you aim them, OK? So you prepare them spatially infinitely far apart, but you aim them-- you prepare their initial momenta so that they can come together at some point. You aim them. And then they will come together at some point, and then they will scatter. They will interact with each other. And then they will form some final product, and then the final products will run away from each other, OK? They will have velocity, momentum. They will run away. And then you just wait. Wait until the products are far apart, and then you measure them.
So mathematically-- yeah, of course, t equal to plus-minus infinity is also a mathematical idealization. And the final particles are also far apart. Again, infinitely far apart. Because they are all relativistic particles and they all move. And even with a slight momentum difference, eventually, they will be-- if you wait a long enough time, then they will be far apart. So these final points are very important, these two, because this means we can neglect the interactions. OK, as t goes to plus-minus infinity. So with this setup, at t equal to plus-minus infinity, they are very far away. And then you can neglect the interactions. This is important. Then we can identify each particle, because when they are together, they are all interacting with each other, and it's not easy to cleanly identify each particle and discuss their properties. But now, we can neglect the interactions. So the initial and final states are then a collection of free particles. And they can be described by the free theory. So this is very important because, remember, we cannot solve this kind of interacting theory exactly. But now we can talk reliably, almost to infinite accuracy, about the initial state and the final state. And we can talk about them as precisely as we want, as far as we go to t going to plus-minus infinity. Yes? AUDIENCE: So how do you know adding a phi to the fourth term leads to interactions? Just because it's the simplest term that [INAUDIBLE]. HONG LIU: Yeah, in principle, any nonlinear term will lead to interactions. Yeah, the intuition is that anything beyond the linear level, you will have different phis coming together. So whether it's phi cubed, or phi to the fourth, or higher powers or exponentials-- yeah, anything in which the phis are not linear, they have to do something with each other, and that just means interaction. Yeah. Yes? AUDIENCE: Why don't you get an interaction from the minus one half m squared phi squared term?
HONG LIU: Sorry, say it again. AUDIENCE: So we have-- HONG LIU: Oh yeah, yeah, yeah. Yeah, this does not give you interactions because-- remember, when you do the equation of motion, that still gives you a linear term. AUDIENCE: Oh, OK. HONG LIU: The linear term just gives you a mass. AUDIENCE: [INAUDIBLE]. HONG LIU: Yeah, yeah. Right, right. Yeah, yeah, yeah, yeah, yeah. Yeah, in the Lagrangian, the quadratic term is a free term. So anything above the quadratic term gives you the interactions. Yeah. Yes? AUDIENCE: If I know what the interaction potential is between the particles, like gluons or something like that, how can I translate that into-- HONG LIU: Yeah, yeah, yeah. Here you cannot translate. Here, I'm just giving you the intuition that somehow when you add such kind of nonlinear term, that should lead to some kind of interactions. But to understand precisely what kind of interactions, you have to look at physical observables. So that's why we need to look at the S-matrix. Even for this theory-- suppose we have this theory, but in real life, how do we see it in a real experiment? Yeah, if this theory can be realized in a real experiment, what would it look like? And the only way to see it is through this S-matrix. Yeah, yeah. And from this S-matrix, then yeah, we can deduce back. Good? OK, good? So this is important because this makes sure that even when we're unable to solve the theory exactly, we can still talk about the initial and final states exactly. OK? At least we can define our problem precisely. So let's denote the collection of initial particles by alpha, the momenta of the initial particles. So essentially, this characterizes your initial state. And because here we're talking about scalar particles-- they don't have spin-- the momenta are their only quantum numbers. And say I use beta to describe the final momenta. I call them p 1 prime, say, through p n prime.
So the initial number of particles and the final number of particles don't have to be the same. And then the scattering process is just to go from alpha to beta. OK, you start with a bunch of particles at t equal to minus infinity with those momenta, and then at t equal to plus infinity, you observe some other particles with those momenta, OK? And then we want to understand the transition amplitude between them. So that's what the experiment measures. And from here, of course, you get the cross section, et cetera. But this is the more fundamental object. So to write this precisely-- so essentially, we are looking at, say-- this is my final state. And I look at the evolution from minus infinity to plus infinity, starting from the initial state alpha. So this is the evolution operator of your quantum system. And so sometimes, we also write this as-- so this is in the Schrodinger picture. And in the Heisenberg picture, we write it as this. So now, this means that this is the state defined at minus infinity, and this is the state defined at plus infinity. So this is the notation in the Heisenberg picture. And this is the notation in the Schrodinger picture. OK, so this object we often denote like this. You can run alpha and beta over all possible initial states and all possible final states. And then that forms a big matrix, in fact an infinite-by-infinite matrix. And so this is our S-matrix. So essentially, all the secrets of the interactions are encoded here. Yes? AUDIENCE: Why is it an infinite-dimensional matrix? HONG LIU: Yeah, because-- yeah, so you have a value for any choice of alpha and beta, right? And so any choice of alpha and beta is one matrix element. But you have infinitely many possible choices of initial momenta. Yeah, you can have an arbitrary number of particles, and you can have arbitrary momenta. And similarly, in principle, for the final state. Yeah. AUDIENCE: And all this is one kind of particle? Or how do you know?
HONG LIU: Yeah. Yeah, here, for this particular problem, it's one kind of particle. But this definition is very general. This is a very good question. It doesn't matter whether you have one kind of particle, two kinds of particles, even an infinite number of species of particles. So here, even for one kind of particle, I have infinitely many choices because I have infinitely many possible choices of all those momenta. I can choose k equal to 1, 2, 3, to infinity. And also, the values of those k can all change. P can change. So yeah. So this is the infinite-by-infinite matrix. OK, so essentially, this S is essentially just a matrix element of this evolution operator. OK, essentially, this is the matrix element of the evolution operator. You can see it from here explicitly, from minus infinity to plus infinity. OK, good? So it's convenient-- so let me just say a few things about the S-matrix. So when the interaction is weak, you can imagine, most of the time, the particles don't interact with each other, OK? So when you do the scattering, you put in a bunch of particles-- if the interaction is very weak, then they don't actually interact very much. So we expect-- when S equals the identity, that means there's no interaction at all. OK, for the free theory, it will be just the identity. So now, if the interaction is weak, then it's convenient to write the S-matrix in the following form, 1 plus i T, and this T includes the interaction part. OK, it captures the interaction effects. OK? Good? And by definition-- yeah, any questions on this? Yes? AUDIENCE: So I was interpreting the elements of the S-matrix as sort of the probability to end up in a certain state from the initial state. HONG LIU: Yeah. Yeah, the S-matrix is the amplitude; for the probability you have to square it. AUDIENCE: Oh, yes. HONG LIU: Yeah.
AUDIENCE: But then why does it make sense that if it's a weak interaction-- like, as the limit goes to no interaction, the S-matrix would be nearly the identity? HONG LIU: Sorry, say it again. AUDIENCE: So why do we say that for weak interactions, it's the identity? HONG LIU: Yeah, if there's no interaction, it will be the identity. AUDIENCE: But why does that make sense? Because that's just saying the amplitude is 1 to end up in a certain state? HONG LIU: Yeah, you always go back to-- the state does not change. What it means is delta alpha beta. Because alpha must equal beta. AUDIENCE: Oh, I see. I see. HONG LIU: Yeah, what it means is just delta alpha beta, right? So it means alpha equals beta. AUDIENCE: Thanks. HONG LIU: Yeah, yeah. Yes? AUDIENCE: Is S a Hermitian matrix? HONG LIU: Yeah, we will talk about that. It's not Hermitian; it's unitary. Yeah. Yes? AUDIENCE: Does the phase of the entries in S carry any physical consequences? HONG LIU: Yeah, that's a good question. So if you consider a particular amplitude, when you square it, of course, the phase doesn't. But in principle, you can design different-- you can design experiments in which different things can interfere. Yeah. Yeah, so in principle, they contain physical information. Yeah. Yes? AUDIENCE: Why is this entry complex? HONG LIU: Oh, just convenience. AUDIENCE: OK. HONG LIU: Yeah, just for convenience. Yeah. AUDIENCE: Yeah, if alpha was a particle at x equals minus infinity, beta was the same particle at x equals plus infinity? HONG LIU: Sorry, say it again. AUDIENCE: If my alpha to beta was just particle here goes to particle there, is that captured by S equals 1? HONG LIU: Yeah. Yeah, yeah. AUDIENCE: OK. HONG LIU: Just say, in the free theory, if you start with a bunch of particles at p 1 through p k, then you end with the same p 1 through p k. Yeah. Yeah. Yeah. OK? So due to translation symmetry, we expect energy-momentum conservation.
OK, so yeah-- so, if you write it in terms of the component notation, this is delta beta alpha plus i T beta alpha. OK, and the i is purely convention-- just for convenience. So due to the momentum conservation-- so this corresponds to the interaction part. So in this part, we expect alpha not equal to beta because you have interactions. But the total momentum must be conserved. So the sum here must be the sum here. Just the total momentum must be conserved. So that means this thing must contain a delta function between the total initial momentum and the total final momentum. And so it's convenient to extract-- to separate out that delta function. OK, so it's convenient to write i T beta alpha equal to i times-- you separate out this delta function, because we know the delta function must be there-- of p alpha minus p beta. So p alpha and p beta mean the total momenta for alpha and beta. And then, you have M alpha beta. So this is the nontrivial part. So that part is trivial-- you expect it from kinematics. And so this M alpha beta is called the scattering amplitude. So now, some properties of the S-matrix. You see that the S-matrix is essentially just a matrix element-- essentially just the matrix of U. And U is a unitary matrix, because it's an evolution operator. So that means that the S-matrix is unitary: since U is a unitary operator, that means that S is a unitary matrix. Also, you can show-- sorry, we may be running out of time because today, we have to end at 5:40. And so for any symmetry of the Hamiltonian-- for any symmetry of the Hamiltonian, that means that if you have some Q-- yeah, if you have some, say, unitary operator which commutes with the Hamiltonian. OK, so remember, all the symmetry transformations are generated by unitary operators. So suppose we have some unitary operator which commutes with the Hamiltonian. Then, you can easily show yourself-- so I'll leave this as an exercise for yourself.
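Collecting what was just said on the board, the decomposition reads (the (2 pi)^4 factor is the standard normalization, which may differ from the lecture's convention):

```latex
S_{\beta\alpha} = \delta_{\beta\alpha} + i\,T_{\beta\alpha},
\qquad
i\,T_{\beta\alpha}
= i\,(2\pi)^4\,\delta^{(4)}\!\left(p_\alpha - p_\beta\right)\,
\mathcal{M}_{\alpha\beta},
% with S_{beta alpha} the matrix element of the evolution operator
% U(+infinity, -infinity); unitarity of U implies unitarity of S.
```

The delta function just enforces total energy-momentum conservation, so all the nontrivial dynamics sits in the scattering amplitude M.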
So lambda is a unitary operator generating some symmetry. So for example, for the Lorentz transformation, it would be like this: e to the i omega M, which you have done in your homework. And then it will satisfy-- when you act on the initial state and the final state by this lambda, then the matrix element should be the same. So you can easily prove this by using the fact that lambda commutes with the Hamiltonian. OK? Yes? AUDIENCE: Is another equivalent way of saying that that S and lambda also commute? HONG LIU: No. No, no, no. No, they don't. Yeah. Yeah, that's the statement, right? This is a statement about S-matrix elements. Yeah. Yeah. Yes, it's a statement that the lambda commutes with U. AUDIENCE: OK, OK. HONG LIU: Yeah, lambda commutes with U, and S is the matrix element of U. AUDIENCE: OK, OK. Thank you. HONG LIU: Yeah, yeah, yeah. OK, so this is the key object we want to calculate in the interacting theory. So now, without proof-- because the proof requires something going beyond what we have discussed so far-- let me just quote this LSZ theorem, which says that M alpha beta, this scattering amplitude, can be obtained-- so we will discuss later how you obtain it-- from such correlation functions, time-ordered correlation functions. So this omega is the vacuum for the full interacting theory. OK, so this is the vacuum. So I write omega to distinguish it from 0, which is the free-theory vacuum we discussed earlier. And this time-ordering means that you order the operators according to their time. So whichever time is bigger sits earlier, OK? So this is the key object: if you want to calculate M alpha beta, you have to calculate this object first. OK, and so I will stop here. And so next time, we will say a few more words about the general approach to treating this kind of theory, and then we will start talking about the path integral. So have people studied the path integral before? So are you experts on the path integral, or not quite? AUDIENCE: Expert is a strong word.
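The time-ordered correlation functions entering the LSZ theorem are, schematically (my notation, following the standard convention that larger times sit to the left):

```latex
G^{(n)}(x_1,\dots,x_n)
= \langle \Omega \,|\, T\{\phi(x_1)\,\phi(x_2)\cdots\phi(x_n)\} \,|\, \Omega\rangle ,
\qquad
T\{\phi(x_1)\phi(x_2)\}
= \theta(t_1 - t_2)\,\phi(x_1)\phi(x_2)
+ \theta(t_2 - t_1)\,\phi(x_2)\phi(x_1),
```

where the capital omega denotes the interacting vacuum, as distinguished from the free-theory vacuum denoted 0.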
AUDIENCE: I know we've heard of it. HONG LIU: OK. Yeah, so I will review the path integral-- yeah, I will actually introduce the path integral from scratch, but I will do it a little bit faster than an ordinary quantum mechanics class. And even for people who have not seen it before, I think you should be able to follow with a little bit of effort after the class. OK. AUDIENCE: Good point.
MIT 8.323 Relativistic Quantum Field Theory I, Spring 2023. Lecture 9: Path Integral Formalism for QFT; Computation of Time-Ordered Correlation Functions.

[SQUEAKING] [RUSTLING] [CLICKING] PROFESSOR: So, last time, we talked about the path integral formulation of quantum mechanics. So let me just quickly remind you of the main idea. OK, so classically, if we specify the initial location-- say, at t prime, x prime-- and then the final location, t and x-- so at time t, you are at x-- then there's a unique trajectory, OK? There's a unique trajectory. So, quantum mechanically, you ask a different question. You ask, what is the transition amplitude-- if, say, at t prime, you are at location x prime, what's the transition amplitude that at time t you are at the location x? And the answer to this question, now, is that you just sum over all possible paths between t prime, x prime and t, x, OK? So you just sum over all possible paths, with weights, OK? So, more explicitly, you sum over all paths, and the weight is given by exponential of i over h bar times S. So, for each path, you have an action. And then you just evaluate, for each path, the action. And there's also an explicit h bar here because the action has the units of h bar. Yeah, so this is the quantum mechanical answer, OK? So this is the formulation of quantum mechanics using the path integral. And we can write this more explicitly-- the mathematical notation for the summing over paths is the following. So we introduce a notation like this, in which you integrate over all possible trajectories with x at t prime equal to x prime and x at t equal to x, with weight e to the i S of x t. And S of x t is a functional of x t: you integrate from t prime to t-- you have t double prime-- of 1/2 m x dot squared minus V x, OK? So this is for the one-dimensional particle.
And, also, this-- so all the trajectories have fixed endpoints: at t prime it should be equal to x prime, and at time t it should be equal to x. And, also, this integration is a shorthand notation for a limit. So this DX(t) should be understood as a limit: you take n to infinity, with this factor, m divided by 2 pi i delta t. So, essentially, you divide your path into intervals of delta t. And then you integrate over all possible values. Yeah. OK, so you separate the path from t prime to t into n segments. So this is t i. And then the location will be x i. So this is t i, and the location-- and then you integrate over the x i, OK? So this is the formulation of the path integral for quantum mechanics. Any questions on this? Yes? AUDIENCE: Can you do this computation in some arbitrary basis that's not necessarily a position basis? PROFESSOR: Yeah. Yeah, in principle, you can also do this in the momentum basis-- yeah, momentum space. And, also, you can actually also generalize-- yeah, we will talk about how you generalize this to other bases. But this is the fundamental formulation. And when you write it in other bases, essentially, you reduce it to this basis because this is the most intuitive way to formulate it. Any other questions? OK, good. So a simple example is just a free particle. When you have a free particle, then the S is particularly simple. It's just the integration of 1/2 m x dot squared, OK? So we described that when S is equal to this-- then this is like a Gaussian integral, OK? Because this is quadratic in x, this is like a Gaussian integral. And so you can reduce-- so, in this case, you have a Gaussian integral. Then, schematically, you have something like this. When S is of this form, you can write the path integral in this general form. And the K, in this case, will be some delta function. Yeah, I wrote down K last time explicitly, and then this is like a Gaussian integral.
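The time-slicing just described can be written out explicitly (the per-slice normalization below is the standard one; conventions on the board may differ):

```latex
\langle x, t \,|\, x', t' \rangle
= \lim_{n\to\infty}
\left(\frac{m}{2\pi i \hbar\,\Delta t}\right)^{n/2}
\int \prod_{i=1}^{n-1} dx_i \;
\exp\!\left\{ \frac{i}{\hbar} \sum_{i=0}^{n-1} \Delta t
\left[ \frac{m}{2}\left(\frac{x_{i+1}-x_i}{\Delta t}\right)^{2} - V(x_i) \right]\right\},
% with endpoints x_0 = x', x_n = x, and slice width Delta t = (t - t')/n.
```

Each interior slice contributes one ordinary integral dx_i, which is exactly the "product over times" meaning of DX(t).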
And in the Gaussian integral, we can just directly write down its answer as some constant divided by det of K. So this determinant is defined on the space of functions, and C is some constant. OK. So you may also view this just as a limit-- this also just follows from the continuum limit, OK? It follows from the discrete case, in which you take the continuum limit. Yes? AUDIENCE: So when you evaluate that Gaussian integral, there's an i in front in the exponent. PROFESSOR: Right. AUDIENCE: How does this converge? PROFESSOR: Yeah, yeah. This is the same as when you do the standard Gaussian integral with the i. Yeah, that integral, you can also do. AUDIENCE: But shouldn't this diverge if it's just-- PROFESSOR: No. Yeah, just with a single variable-- so this integral, we can define, right? Yeah, this integral-- we know how to do this integral. It's the same as this one. Yeah. So you have to do a little bit of a mathematical trick to do this integral. But this integral is defined. We can give it a value. Other questions? OK, good. So this is the path integral for quantum mechanics with one degree of freedom. So, now, we can immediately generalize this to more degrees of freedom, OK? And nothing really changes. When you have more than one degree of freedom-- say, if you have three particles-- then you just integrate DX1, DX2, DX3 and then use the same action, OK? So it just generalizes straightforwardly. And then we can generalize to field theory, because in field theory, we just take the number of degrees of freedom to infinity, OK? So, we can now-- yeah. Let me just-- so, to field theory. So, in this case, we just need to replace, in the different places-- so, remember, the dynamical variable for a single particle-- you have this operator. And when you go to field theory, the counterpart of this is the phi, your field variable.
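For the time-sliced free particle, the quadratic form K is (up to a factor of m over delta t) the tridiagonal matrix with 2 on the diagonal and -1 just off it, and its determinant, which controls the constant-over-det-K prefactor, obeys a simple two-term recursion. A small pure-Python check (my sketch, not from the lecture):

```python
def tridiag_det(n):
    """Determinant of the n x n tridiagonal matrix with 2 on the diagonal
    and -1 on the off-diagonals.  Up to a factor of m/dt, this is the
    quadratic form K of the time-sliced free-particle action, so the
    Gaussian path integral's prefactor C/det(K) is controlled by it.

    Cofactor expansion along the last row gives the recursion
    d_n = 2*d_{n-1} - d_{n-2}, with d_0 = 1 and d_1 = 2, hence d_n = n + 1.
    """
    if n == 0:
        return 1
    d_prev, d_curr = 1, 2  # d_0, d_1
    for _ in range(n - 1):
        d_prev, d_curr = d_curr, 2 * d_curr - d_prev
    return d_curr

# det K_n grows linearly in the number of slices: n + 1.
print([tridiag_det(n) for n in range(1, 6)])
```

The linear growth of the determinant with the number of slices is what, after restoring the m/dt factors, reproduces the familiar dependence of the free propagator's prefactor on the total time interval in the continuum limit.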
And then-- so this quantity-- when you go to field theory, the corresponding quantity is phi x at some time t and, say, phi prime x at time t prime, OK? Again, the way to think about this is that you should view this x as a label. And then this is just as if we have an infinite number of positions, OK? You just have an infinite number of phi's, which are eigenvalues of this operator at time t, at t prime, and, similarly, here, OK? So, again, you can write this one in terms of the position eigenstates-- so this is the position eigenstate in field theory. OK, so this is the position eigenstate in field theory. And then we can use the same technique we discussed last time. You just split this into many, many infinitesimal pieces, by splitting the time interval from t prime to t into many pieces, and then just do it over and over and insert a complete set of states of phi, OK? And, now-- so H is now, of course, the field counterpart of it. Say, this is the Hamiltonian. You can just insert this into here, and then you just repeat the same procedure, OK? But the details of this do not matter. Just keep in mind, now, you just have an infinite number of degrees of freedom, OK? That's the only difference. Your notation becomes a little bit more complicated. But if you keep a very clear mind, with the correspondence with the single-particle case, then everything is the same, OK? If you understand how to translate the notation here to the notation here, then everything is just exactly the same, OK? Just everything goes through. So I will not repeat that procedure. And, now, you just find this quantity. Then, again, we can write this in terms of a path integral. So, now, this can be written in terms of a path integral over phi. Now, you integrate over all possible configurations of phi with the boundary conditions phi at t prime, x equal to phi prime x, and phi at t, x equal to phi x.
And, again, you just have exponential i S phi. And S is the action for phi. So, now, the S-- is now-- so, in this case-- yeah, it's an integration of the Lagrangian density. And L is 1/2. Yeah, it's just what we wrote down before. OK, some potential phi-- OK. So for any scalar field theory, these just work identically, OK? And, again, you just get the integration over the Lagrangian, which is become four integral of the Lagrangian density, OK? Any questions on this? Good. So whenever you get confused about path integral in field theory, then try to translate into the language of a single particle by doing this kind of replacement. And then you will be able to settle your problem. And then whenever you get confused about the path integral in this quantum mechanics, then just reduce it by a finite dimensional integral, and then you should be able to understand, OK? Just always reduce it to the simple case. And, often, your confusion can be understood in that simpler case, OK? Yeah. Good. So, now, let me explain a little bit what this notation means, OK? So this said in words means you integrate over all possible configurations of phi between t and t prime. And the final configuration-- the initial configuration to be phi prime, and the final configuration to be phi-- phi x-- and the initial to be phi prime x, OK? So let me just say, previously, DX(t)-- if you think about it in terms of discrete case corresponding to-- you just sum-- take the product of all the o at different time d x i, OK? So in the continuum limit, it's like you just take the product of all possible value of t and then integrate over the value of x at that particular value of t. So that's the meaning of this DX(t), which is up to a prefactor corresponding to that. So, similarly, phi is the same thing. So, now, remember, phi x has now become the dynamical variable. So this now becomes sum over-- take product over t. Oh, yeah. No, no, no. First, you think of take product over x. 
So this is a label of x. So x are the labels of phi, and you take all possible value of x. And then each of them is just like a-- OK? So just like you have so many different variables, each x label one degree of freedom, OK? And then you have phi t. And, now, then you use this one. Then you have t x d phi. Is it clear what this equation means? So this step tells you that you have many, many different degrees of freedom. So this step is just enumerate all possible degrees of freedom which are labeled by x. It's just like your standard integral. If you have five-dimensional integral, and each-- and then you just have a product of five different variables, OK? Here, we have all different possible value of x variables. And that variable is a function of t. And then, now, we use this equation. And each of them is like here, OK? And then, now, you have the product of all possible value of t. And then you integrate, then, all possible values of phi, say, at point x and t, OK? But, remember, t always is between t prime and t, OK? Yes? AUDIENCE: If I'm understanding your notation for the first equation, the second equality-- what is plugging in-- PROFESSOR: You mean here? AUDIENCE: The one above it. No, the one above it. PROFESSOR: Yeah, yeah. AUDIENCE: So that second quality-- so when you relabel i to the t, that's-- PROFESSOR: Yeah, yeah. Yeah, here, I just write it in the continuum form. So, here, I labeled it by t i. So each location t i, I have d x i in that form. But if I go to continuum limit and go to infinite limit, essentially, at each point t, I have integration of x. And so that's roughly the continuum form of that. And, here, it's similar. OK? Good. So once you have learned this trick, do the reduction, and I think you will be able to settle all your confusions about these definitions, OK? OK. So, other than that, other than this additional label x-- so other than additional label-- I should call it here label x. 
So the field theory path integral for a scalar field is essentially identical to that in quantum mechanics, OK? So just to emphasize this point. So, here, actually, you can define this general. It doesn't matter what your V is, OK? You can choose whatever V you want. The story does not change. Just like when we do the single particle case, it doesn't matter what this V is, OK? You can choose arbitrary V, if you want. But if we choose V to be-- when v is for the free theory, then-- in this case, of course, a particularly simple. And then, in this case, the path integral is again Gaussian, OK? The path integral is, again, Gaussian. I will not write it again. So, again, that formula-- and this formula applies just this case, in a more complicated space. K is the operator in a more complicated space, OK? So, now, you not only have the space of t, but you also have the space of x. But once you generalize to some space of functions, it doesn't matter. This function becomes more complicated, OK? And so, conceptually, they're the same. Conceptually, they're the same. Good? So, now, for interacting theory, then, essentially, then the quantum field theory then reduce to doing this path integral, OK? If you know how to do this path integral, and then, essentially, you know how to solve the theory, OK? You know how to solve the theory. And so, as I said, in the free theory case, it's essentially reduced to a Gaussian integral. Then everything is simple, and we will go back to discuss that in a little bit more detail later. And, now, let's just think a little bit how to treat the interacting case. So, recall, our goal is to compute this object-- the vacuum expectation value of time-ordered correlation functions, OK? We want to compute this object, OK? So, now, we will discuss how we use path integral to compute this object, OK? 
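The "constant divided by det K" structure of the Gaussian path integral can be illustrated with its finite-dimensional (and, for convergence, Euclidean) analog. A sketch with an arbitrary 2x2 positive-definite K:

```python
import numpy as np

# Discrete analog of the Gaussian path integral: for positive-definite K,
#   integral d^n x exp(-x.K.x/2) = (2*pi)^(n/2) / sqrt(det K).
# Brute-force check for n = 2 (Euclidean, i.e. real, version so the
# integral converges without an i*epsilon).
K = np.array([[2.0, 0.5],
              [0.5, 1.0]])
g = np.linspace(-8.0, 8.0, 801)
dx = g[1] - g[0]
X, Y = np.meshgrid(g, g, indexing="ij")
S = 0.5 * (K[0, 0]*X**2 + 2*K[0, 1]*X*Y + K[1, 1]*Y**2)
Z_numeric = np.sum(np.exp(-S)) * dx * dx
Z_exact = (2 * np.pi) / np.sqrt(np.linalg.det(K))
print(Z_numeric, Z_exact)  # agree to high accuracy
```

In the path integral, K becomes a differential operator and det K a functional determinant, but the structure of the formula is the same.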
So before doing that, let's first try to do this in-- so before doing that, let's first discuss how to do this in quantum mechanics, before doing this in field theory. And we can just first understand how we compute the similar object in quantum mechanics. And once we understand how to do that, again, the generalization to field theory will be straightforward. OK. Good? So, first, we talk about time-ordered correlation function in quantum mechanics, OK? So we will be introducing a number of tricks, and those tricks will then take over to field theory, OK? They will be carried over to field theory. So, again, let's just consider this system, OK? Let's consider a system like this-- just with the Lagrangian of this form, just one particle theory, OK? And then the analog of this object is this object-- let's call it G n-- would be, say-- so let's call the 0 is the vacuum of the field theory-- of this quantum mechanical system. And then you have time-ordered, and then you have x t 1 because the dynamical variable here for quantum mechanics is just x and x t n at different time and then here, OK? So we want to compute such object in quantum mechanics. OK? So our goal is to develop techniques-- yeah, so you should imagine all these are operators, OK? All these are operators. So our goal is to develop techniques to calculate this. OK, so before doing that, let's first understand how to do this time-ordering using path integral, OK? So before I do that, do you have any questions? OK. So it turns out, actually, path integral is some of the natural, the most natural objects to think about such kind of time-ordered correlation functions, OK? And we will see, in a minute, the path integral is actually the most natural thing to-- the most natural framework to think about this kind of time-ordered correlation functions. So before doing that, let's consider one simple example. So, firstly, consider-- so let's consider some other time, t 1, between t prime and t, OK? 
So we consider again-- we go back to this problem, OK? We go back to this problem. But now we consider some time t1 in between t prime and t, OK? So, now, let's consider this object. So the simplest case of this one-- you just take n equal to 1, OK? You just have one of them. Let's consider the simplest case. So let's consider this object. So this is easy to do. We can just use the same technique we used before. So, essentially, most of the tricks in quantum mechanics just reduce to one trick. Yeah, I say most of the cases, OK? In most cases, tricks in quantum mechanics reduce to one trick. What is that trick, which we already used over and over, say, in deriving the path integral? AUDIENCE: Identities? PROFESSOR: Yeah, insert identities. So if you know how to insert identities at the right location at the right time, then, essentially, you know all the tricks in quantum mechanics. And so, here, we do the same thing. So since this is at t1, we just insert the complete set of position eigenstates at t1. So let's just do this. So we integrate x1, t1 and the x1, t1, x hat, t1, x prime, t prime, OK? So these are the-- so this integrates to 1, OK? This integrates to 1. So since this is an eigenstate at t1, this acting on it will just give you the eigenvalue. So we just get dx1, x1. So you just get the eigenvalue. And then you have x t, x1 t1, x1 t1, x prime, t prime, OK? But, now, we know how to do both of them. We just plug in our path integral formula for both of them, OK? So you can do this explicitly-- plug in this expression there, OK? Plug in this expression there. But it's also very intuitive. You know what this looks like, OK? So this is like the following. So, previously, we just do all the paths between these two, OK? So, now, we are-- now you have this x t, and now we have t1. So suppose this is the time, t1. And suppose t goes up, and so this is t prime. So suppose this is the time t1, and this is t prime, x prime, OK?
So this corresponding to you-- you first do the path integral to some location t1-- integrate over all paths here. And then you multiply x1, and then you do all the path integral here, OK? And then you iterate over x1, OK? You integrate x1, OK? So, now, if you-- without this x1-- if we are without x1, then, of course, this is trivially equal to the previous one because you do all possible paths to t1 and all possible paths from t1 to t, and then you iterate over all possible location here. Then it's the same as you integrate from here to there-- the arbitrary path, OK? So the only difference is, now, we multiply by x1 and the value at t1. And, now, we can immediately write down, using path integral, what is this object? So this object is, essentially-- OK, it's just equal to-- so, now, I will use a simplified notation and say, here, write x prime, t prime, x t to be the limit, OK? And then, again, we integrate all possible paths between them. But, then, we just have x t1 here. You integrate over all possible paths between them. Just add, this time, t1. You multiply in the integrand the value of x at t1. Yeah, so this is essentially just the x1 here, OK? Is this clear? You can do this explicitly by plugging in those formulas and then manipulating. You will get this. But it's much easier to understand it heuristically using a diagram. Good? So this is very, very suggestive. It tells you, when we insert an operator here, what we do is just we translate the eigenvalue of this operator, plug it in the integrand. So, now, you can do the same thing. Suppose you have two operators, OK? So, now, let's look at-- you have two operators. OK? So you can almost immediately write down-- if you try to generalize that, what would you write down this, the answer? Yes? AUDIENCE: State the x and t1 and the x and t2 as an integral. PROFESSOR: Good. So you just-- DX(t). But this is almost correct-- not completely correct-- for one reason, which, actually, one of you asked before. Yes? 
AUDIENCE: Just because you haven't ordered them. PROFESSOR: Exactly. This is only equal to that for t1 greater than t2. So, remember, all these paths-- they don't come back in time, OK? The paths-- they go forward in time. They don't come back. So, here, it's like you have to insert the two-- here, you just insert the two sets of complete states. One is here. One is here. But the path integral cannot come back, OK? So the order here has to be the same as the ordering of the path integral, OK? It has to be the same ordering as the path integral. So that means this is equal only for t1 greater than t2, OK? But the same thing happens for t2 greater than t1. So suppose t2 is greater than t1. If you start from this expression, then we can ask, what is the corresponding operator form? So what do you think will be the operator form? Yeah, so for t1 greater than t2, we have this equal to that, OK? But, now, let's ask-- suppose it's the opposite-- t2 greater than t1. But, on the right-hand side, we still have this one. But what should be the left-hand side? AUDIENCE: x t2 and x t1. PROFESSOR: Exactly. We just exchange x t2 and x t1 because the path integral always follows the time order. So that means the order here always has to be time-ordered. So that means that the correct formula, which applies for all t1, t2-- it's just you time-order them. You just time-order them. So no matter what the values of t1 and t2, if you time-order them so that the x with the larger time always sits in front, then it's always equal to this one, OK? So this, now, I'll just immediately generalize, OK? So this tells you, because of the path integral-- in the path integral, the time only goes forward. The time only goes forward. Then the time ordering naturally arises in path integrals, OK? So you can immediately generalize this. So for any t-- for any t1, tn between t prime and t, then you always have x t time-ordered, x hat t1, x tn, 0.
And this is equal to x prime, t prime, x t, DX(t), and x t1, x tn, exponential i S x t, OK? So that's why we said, earlier, that, using path integral, our goal is to compute this object. So that's why, actually, using the path integral to compute it is very natural, because the time-ordering is just very natural from the point of view of the path integral. Yes? AUDIENCE: Shouldn't the x be x' t'? PROFESSOR: Oh, sorry, sorry. Yeah, yeah, yeah. Right. Good. Any questions on this? So this is a key formula. OK, so this is the key formula. OK? Good. Any questions on this? So, now, we have got the time-ordering. OK, now, we want to compute. But here is the vacuum correlation function, OK? So we have to actually go to the vacuum. Here, it's sandwiched between the position eigenstates, OK? So, now, we have to see how to do this for the vacuum. So, now, let me just introduce a simplified notation. So let me call this whole thing X-- capital X, OK? I call this whole thing capital X. So, now, we want to consider the vacuum correlation function. So the idea is very simple, OK? Right. OK. So we want to consider a vacuum correlation function. We are interested in the G n for arbitrary t1, tn belonging to minus infinity to plus infinity, OK? So you want to compute this. OK, and the G n, using our notation, we know is equal to 0, X, 0, OK? So I denote 0 for that. So now, again, we insert a complete set of states, OK? Well, again, we insert a complete set of states. We insert 1 equal to dx, x t, x t, which is also equal to dx prime, x prime, t prime, x prime, t prime, OK? This t goes to plus infinity, and t prime goes to minus infinity into here, OK? So insert that into here. OK? And then we find-- yeah, so let me-- I need the bigger space, so let me just write it down. So if you do that-- so this trick is general. It is not restricted to the vacuum. So, in any case, evaluating in any state, you can reduce it to the path integral by doing this trick of inserting these guys.
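The claim that time ordering is automatic in the path integral can be seen concretely in a discretized Euclidean toy version of the oscillator, where the Gaussian "path integral" two-point function is just the inverse of the quadratic-form matrix, which is symmetric in its two time arguments. A sketch (the discretization parameters are arbitrary choices, not from the lecture):

```python
import numpy as np

# Discretized Euclidean oscillator with fixed (zero) endpoints:
#   S = sum_i [ m*(x_{i+1}-x_i)^2/(2*eps) + eps*m*w^2*x_i^2/2 ]
# is a quadratic form S = x.K.x/2.  For the Gaussian weight exp(-S),
# inserting x(t_i) x(t_j) into the path integral gives (K^{-1})_{ij},
# which is symmetric under i <-> j: the path integral is automatically
# time-ordered, whichever insertion "comes first."
m, w, eps, N = 1.0, 1.0, 0.1, 200
K = np.zeros((N, N))
for k in range(N):
    K[k, k] = 2*m/eps + eps*m*w**2
    if k + 1 < N:
        K[k, k+1] = K[k+1, k] = -m/eps
C = np.linalg.inv(K)        # two-point function C[i, j] = <x_i x_j>
i, j = 60, 140
print(C[i, j], C[j, i])     # identical: ordering of insertions is moot
print(C[i, i] > C[i, j])    # prints True: correlations decay with |t_i - t_j|
```

The operator statement on the left-hand side has to carry an explicit T symbol; the path-integral side needs none.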
So then we have Gn, now, just equal to-- now we have limit t goes to plus infinity, t prime goes to minus infinity. Then we have dx, dx prime. So we insert one here and the one here, in both places. Here, we insert the upper line. Here, we insert the lower line, OK? Then we have 0, x, t, then x t, x, x prime, t prime, and then x prime, t prime, then 0-- times. OK? So, now, we just have-- yeah, so let me just save some-- so this then become dx, dx prime. So these two are simple. So, essentially, they just become the vacuum wave function, OK? So we have psi 0 star x and psi 0 x prime. And then we have this path integral, which we already know how to do, OK? Then we have this path integral, OK? Then we have that path integral. So you just insert-- so when we call star, you just insert the star here, OK? And, now, with t and t prime to infinity-- and, yeah, the reason we take t prime and t go to infinity is obvious because we want to include-- so, here, the t has to go between the t prime and the t. So if we want to have arbitrary t-- and then we want the t prime to go to minus infinity and t go to plus infinity. OK? So we have used that-- the x prime, t prime 0 is just equal to psi 0 x prime, just the ground state wave function, OK? So that means that-- so this trick actually works for other states, too, OK? You can do the similar trick. And you just essentially get two more integrals to do. And you take that path integral, and then you just integrate it over the initial and the final wave functions, OK? OK, so the trick applies to any state-- can you see-- not just the vacuum correlation functions. Yes? AUDIENCE: So does this still count as-- because this is still quantum mechanics what we're doing, your ground state-- I mean, this is an energy eigenket, and it's the ground state, or-- PROFESSOR: Yeah. Sorry. Say it again. AUDIENCE: So since this is still quantum mechanics, I'm saying, this 0 here is like an energy eigenket, right? PROFESSOR: Yeah, yeah. 
Yeah, it's a ground state. It's the lowest-energy eigenstate, yeah. It doesn't matter. You can also take other states, too. It just will be some wave function there. It will be some wave function there. Yeah. Yes? AUDIENCE: So [INAUDIBLE] someone else said, which evolves in time, then moves [INAUDIBLE].. PROFESSOR: Sorry? Oh, yeah. Yeah, if you not take-- yeah, that's a very good question. So if you don't take-- and so if you take other states, then, in general, there may be some dependence on t prime and t in this wave function. Yeah, indeed, yeah. So you have to take the limit, also, in the wave function. Yeah. But, here, we are using the simplification that the ground state is actually time-independent. Good. Yeah, that's a good question. Other questions? OK, good. So this works, essentially, for any state. But for the ground state, actually, there's another trick which can simplify the problem, so that you don't even have to do these two additional integrals, OK? You can actually directly-- so, for the ground state, for the vacuum, there's actually another trick to get rid of-- to make these two additional integrals unnecessary, OK? So, now, let me tell you how you do this trick. And this trick is very important, also, in quantum field theory because, if you do a harmonic oscillator, of course, you know the wave function. But if I give you an anharmonic oscillator, for which we don't know how to solve for the ground state wave function, then this will be a nightmare because then you don't know the wave function. And for quantum field theory, in particular, in the interacting case, we don't know the wave function, OK? So even though this formally gives you the answer, in practice, it's actually often not convenient to use, OK? But, fortunately, for the ground state, there is an additional trick with which, actually, you don't need to use the ground state wave function at all, OK? So, now, I tell you this additional trick.
But this trick is only specific to the vacuum, OK? You cannot apply it to other states, OK? So this is specific to the vacuum. So, now, let's forget about this thing, OK? Now, let's forget about this thing and start coming back from here, OK? Just consider-- again, I take the limit t goes to plus infinity, and t prime goes to minus infinity. Let's look at this object x t, x, x prime, t prime, OK? OK? So, now, again, we are going to insert identity, OK? Now, we are, again, going to insert the identity. And, now, we insert the identity in a different way. Again, we insert the identity here and insert the identity here. But, now, we insert the identity expressed in the complete set of energy eigenstates, OK? So, now, we insert the complete set of energy eigenstates. I just formally label it by m. OK? So, now, let's just insert in the two places, OK? Yeah. OK, so, now, we have two identities. Then we have n and m. We have n and m. So, now, I will-- so you keep this in mind. I will not copy this over and over, OK? So keep this limit in mind-- always, t prime goes to minus infinity. So, now, we have-- and then we have x t with m, and m x with n, and n with x prime, t prime, OK? So, now, if we look at this sum-- if n and m equal to 0, this is the object we want, OK? This is the object we want. But this also contains many, many other things. So, now, we will use the trick to isolate the n equal to 0 and m equal to 0 piece, OK? You isolate that piece. That's what we are going to do now. So this is a commonly used technique in quantum field theory-- not only in quantum field theory, actually-- in many areas of physics. So, now, let's first-- to explain the trick, let's look at this object. So let's look at this object, OK? So this object is the limit t prime goes to minus infinity of some energy eigenstate n and x prime, t prime. So, again, this is written in the Heisenberg picture.
So we need to-- so translate to your more familiar language of the Schrodinger picture. So we need to write it as n, and then you have exponential i H t prime and then x prime, OK? So, now, it is the standard Schrodinger picture state. So, now, since n is the energy eigenstate, we can just act with this one, OK? So this is just equal to the limit t prime goes to minus infinity of exponential i En t prime and x prime, OK? So, now, we will try to select the ground state. And we do that by doing the following. So, now, imagine giving En a small imaginary part, OK? OK, imagine-- yeah, take En goes to En times 1 minus i epsilon. So epsilon is a small positive number, which is equivalent to taking-- or, you just take H, your Hamiltonian, into H times 1 minus i epsilon, OK? Give your Hamiltonian a slightly imaginary part, OK? It's equivalent, OK? So, now, with this-- and now we have-- so limit t prime goes to minus infinity. So, now, we have exponential i En, 1 minus i epsilon, t prime, and n x prime, OK? So this is just equal to that. So, now, you see-- suppose we normalize our state so that-- yeah, so, by definition, En are greater than E0 for n greater than 0, OK? So the effect of putting epsilon here, when you multiply it out, is that it still gives you a factor like this-- exponential of epsilon En times t prime, OK? So it gives you a real factor like this. OK? So, now, t prime goes to minus infinity. And the En is greater than E0. So for any state which is not the ground state, this factor will be exponentially small-- goes to 0. So the ratio between this factor and the ground state goes to 0 for any excited state, OK? Because En minus E0 is greater than 0. And this goes to minus infinity, and the epsilon is positive, OK? So this implies-- so we continue. Sorry. It's just-- so let's continue over here. So this means we get, essentially, 0 for n not equal to 0 and the exponential i E0, 1 minus i epsilon, t prime for n equal to 0, OK? Yes?
AUDIENCE: If you want to formally show this-- that all energy levels that are not in a ground state get exponentially suppressed, do you divide by the ground state energy into the [INAUDIBLE] energy, or how do you do it? PROFESSOR: Oh, you just take out the overall factor corresponding to the ground state. Yeah, then all the other coefficients will go to 0. Yes? AUDIENCE: What is the justification or why are we allowed to add a small imaginary part to the energy? PROFESSOR: It's a mathematical trick. AUDIENCE: OK. PROFESSOR: At the end of the day, we set epsilon go to 0. And then you go back to your original Hamiltonian. So this is just a trick. If you're doing this order properly, it just naturally selects the ground state for you. Yeah. Yeah, just a pure, mathematical trick. Yes? AUDIENCE: Epsilon has to have a certain sign, right? PROFESSOR: Yeah, epsilon-- when we write this epsilon, we always assume epsilon is greater than 0. So, yeah. You always assume-- yeah. AUDIENCE: [INAUDIBLE] PROFESSOR: Hmm? AUDIENCE: I don't see why you can do that [INAUDIBLE].. PROFESSOR: Oh, I can-- I have a path integral. And, essentially, I can design the rule I want, as far as, at the end of the day, I get the desired quantity. Yeah. But the order of limit is very important. You have to be very careful. Yeah. Yes? AUDIENCE: Can you get around this with some sort of a stationary phase or saddle point expansion? Because, in the end, you're going to be integrating exponential factors. And so the term that dominates is-- PROFESSOR: No, here, it's not-- no, here, you cannot do a stationary phase because if you just-- yeah, yeah. You really need to suppress the other contribution-- not only suppress them. Just, they have to go to 0. Yeah-- genuinely go to 0. OK, so, similarly, with the same trick, just the same thing, in one shot, it selects n equal to 0. 
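The epsilon trick is, in Euclidean form, the statement that evolution by exp(-H*tau) projects any state onto the ground state as tau grows, with no knowledge of the ground-state wave function needed. A toy-model sketch with an arbitrary 3x3 Hamiltonian (not from the lecture):

```python
import numpy as np

# Every excited component of a generic state is suppressed by
# exp(-(E_n - E_0)*tau) under exp(-H*tau); after normalizing, only
# the ground state survives at large tau.
H = np.array([[1.0, 0.3, 0.0],
              [0.3, 2.0, 0.4],
              [0.0, 0.4, 3.0]])
w, U = np.linalg.eigh(H)              # eigenvalues ascending
ground = U[:, 0]                      # exact ground state, for comparison

psi = np.ones(3) / np.sqrt(3.0)       # generic initial state
tau = 20.0
psi_tau = U @ (np.exp(-w * tau) * (U.T @ psi))   # exp(-H*tau) acting on psi
psi_tau /= np.linalg.norm(psi_tau)
print(abs(ground @ psi_tau))          # -> 1 up to exp(-gap*tau) corrections
```

Rotating tau back to (1 - i*epsilon) times a real time reproduces the limit taken on the board.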
But you see, actually, it also selects m equal to 0 because, you see, with the same sign, you can easily check yourself when t goes to plus infinity. So these now become the limit t goes to plus infinity of, say, exponential minus i Em, 1 minus i epsilon, t times x m. And then, again, this becomes 0 for m not equal to 0. And then the ground state-- for m equal to 0, OK? So this simple trick beautifully selects the m equal to 0 and n equal to 0 piece, OK? So, now, we can write-- so this is just the ground state wave function. And this is just the ground state wave function, OK? So, now, we have the limit t prime goes to minus infinity, and t goes to plus infinity, of x t, X, x prime, t prime. So, now, you go to-- so you have the ground state wave function star, x prime, x, now 0, X, 0. And then you have this phase factor. And everything else goes to 0, OK? So we get that, OK? And so this is the-- so, now, up to those factors, we get what we want, OK? We get what we want. But, still, you have lots of leftover factors. But they can easily be gotten rid of using the following. So, now, you also-- similarly, for the exact same reason, whether-- so in this discussion, whether you have X there, or what's the form of X? Doesn't matter. X can be anything. So X can be anything. So you can just take X equal to 1, OK? And then you also have the limit t goes to plus infinity, t prime goes to minus infinity, of x t, x prime, t prime-- so this is also equal to psi 0 x, psi 0 star, x prime. And, now, it's the 0 with 0 because you've set X equal to 1, OK? And then you just have the ground state overlap, and so this is just equal to 1. And then you have the same factor, OK? So, now, we can just find G n, OK? So this object is Gn. So Gn, then, can be found as the ratio between the two. We take the limit-- t goes to plus infinity, t prime goes to minus infinity-- of x t, X, x prime, t prime divided by x t, x prime, t prime, OK? You can just find that as a ratio between the two.
So this gives you the expression for calculating this correlation function. And the X is any time-ordered product, OK? And so, now, for each of them, we know the path integral. And then you can just write down the path integral for each. So, now, you see this G n is, in principle, a physical observable that actually involves the ratio of two path integrals-- one without any X and one with some X, with your desired correlation functions. So I mentioned before that when you evaluate the path integrals, you will get that kind of constant C and determinant factors. They just cancel between upstairs and downstairs. Later, we will see this explicitly, OK? Yeah, so this is a beautiful formula, OK? Yeah. And then you don't have to-- yeah. Yeah, so all those factors cancel. You don't need to know the ground state wave function. You can just perform that path integral. And here, in principle, you can also take arbitrary x and x prime, OK? You can take arbitrary x and x prime. It doesn't matter. So, conventionally, we just take x equal to x prime equal to 0, just for simplicity, OK? We can just take them to 0. Good. Any questions on this? Yes. AUDIENCE: Why can you specify both points in your [INAUDIBLE]? PROFESSOR: Yeah, I just-- the only thing that matters is-- they only come into this prefactor, and then cancel between the two. It doesn't matter. So all the factors there and these factors, they just cancel. And you're just left with this x0 left, so it doesn't matter. They cancel anyway, so it doesn't matter where you put x and x prime. Yes? AUDIENCE: So we didn't actually take the limit as epsilon moves to 0, or anything, so it doesn't actually matter. PROFESSOR: So, normally, you calculate that at the end, and then you take epsilon goes to 0. So we just, normally, leave that implicit. Yeah, when you do the path integral, you keep a small epsilon. So after you finish, in the end, you take the epsilon goes to 0. Yes. AUDIENCE: Sorry.
Also, on why would you take x and x prime equals 0-- doesn't our path integral-- the bounds-- depend on x and x prime that you choose? PROFESSOR: No, the bound does not depend on it. The path integral is only limited by t prime and t. Yeah, initial value, they are the same-- of course, specific value of this path integral will depend on x and x prime. But they cancel between upstairs and downstairs. So they cancelled. Yeah. And so, in the end, this right-hand side does not depend on x and x prime. Yeah, also, you see here, if this formula makes any sense, the left-hand side does not depend on x and x prime. So the right-hand side must not depend on x and x prime, otherwise, this formula immediately wrong. Yes? AUDIENCE: [INAUDIBLE] have to keep the small epsilon back in [INAUDIBLE]. So, again, we take limit as epsilon goes to 0. PROFESSOR: Yeah, that's right. That's right. Yeah, just when you do the calculation, you keep a small epsilon. When you do the path integral, you keep a small epsilon. But when you finally calculate the final answer, you can set epsilon to 0. AUDIENCE: Do we do that before we take the t to infinity, also, or does it matter? PROFESSOR: No, you have to keep an epsilon. This, you have to take first. So this is part of your calculation. You have to always take this first because the final answer also does not depend on t and t prime. And because this is the vacuum, this correlation function does not depend on your t and t prime. So you always set the-- yeah, you keep a small epsilon, take this limit first, and do the calculation. Do the calculation. And then when you calculate everything, at the end of the day, then you set epsilon to 0. And then that guarantees you chose the vacuum. Good. There's one more trick we have to do, OK? Sorry. There are too many tricks. There's one more trick we have to do. So, now, we have told you how to calculate this guy for arbitrary x. 
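The cancellation of the constant and determinant prefactors between the two path integrals in the ratio can be checked in the finite-dimensional Euclidean analog, where the ratio of Gaussian integrals reproduces a matrix element of K inverse with no leftover constants. A sketch with an arbitrary 2x2 K (not from the lecture):

```python
import numpy as np

# Correlator as a ratio of two Euclidean Gaussian integrals:
#   <x0 x1> = ( integral x0*x1*exp(-S) ) / ( integral exp(-S) )
# with S = x.K.x/2.  All overall constants (and grid measure factors)
# cancel in the ratio, leaving (K^{-1})_{01}.
K = np.array([[2.0, 0.5],
              [0.5, 1.0]])
g = np.linspace(-8.0, 8.0, 801)
X, Y = np.meshgrid(g, g, indexing="ij")
weight = np.exp(-0.5 * (K[0, 0]*X**2 + 2*K[0, 1]*X*Y + K[1, 1]*Y**2))
ratio = np.sum(X * Y * weight) / np.sum(weight)
print(ratio, np.linalg.inv(K)[0, 1])  # agree to high accuracy
```

This is the finite-dimensional shadow of the statement that G n does not depend on x, x prime, or the normalization of the measure.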
But, often, it's actually not-- it's not the smartest way to directly calculate such an n-point function, OK? It's often not the smartest way. So there's something a little bit more clever. It's called a generating functional. So some of you may have seen this, say, in your high school math competition. So suppose we want to do an integral like this, OK? So let's do-- again, let's go back to our ordinary integral. So let's just suppose we want to do an integral like this. So, to have a parallel, let's just say-- say this. We have some phase, like what you do in the path integral. And we want to calculate the x to the power n, OK? n is some integer. Yeah, here, like you have an n-point function-- suppose we want to calculate this. So you can just calculate this guy. It's fine, OK? But there's a better way. Often, there's a better way. Instead of calculating this for individual n, you can actually consider the following quantity, Z a, which is defined to be dx exponential i lambda f x, then plus i-- say ax, OK? So you consider this object. So this object, when you expand this exponential-- yeah, so maybe I will-- I think I don't need this anymore. Yeah, let me just erase this. So, now, if you-- so why are we interested in this object? Because imagine you expand this exponential factor, and then, essentially, you get Z a is the sum from n equal to 0 to infinity of 1 over n factorial, i a to the power n, Z n. So if you expand this thing in a power series of a-- if you expand this, then the n-th term will give you an x to the power n. And then it's the same that we just expand this Z a in a power series, and then the Z n would be the n-th coefficient, OK? So if you compute Z a in one shot, then each Z n you can just get by a Taylor series. OK. So, sometimes, we also write Zn as, say, 1 over i to the power n. Then you take the n-th derivative of Za over a, and then you set a equal to 0, OK? So this is the same thing.
So when you take n derivatives with respect to a, you get rid of the lower powers. And then the higher powers you get rid of by setting a equal to 0, OK? And then that just picks out the Z n term. So Z a is called the generating function. OK? Generating function. Oh, OK. So, now, when we want to-- so one final remark. So, now, we want to compute this guy. So this G n is equal to t, x1 to xn, so x t1 and x tn. Again, instead of computing each one of them, we can consider a so-called generating functional-- a generalization of this idea. We can consider Z of J t. We can consider this object-- DX(t) exponential i S of x t, and, now, add i times the integral dt from minus infinity to plus infinity of J t x t, OK? So, remember, this is just the path integral. So let's just look at the upstairs, OK? It's the integral over x(t) of exponential i S of x(t) times this extra source factor. So, now, this does the same thing. So, now, if you take this, imagine you expand this factor to the n-th power. Then you will generate this term as a coefficient, OK? We will discuss that in more detail next time, OK? So, often, instead of computing this object directly, we compute a so-called generating functional, which often makes things much easier, OK? Yeah, so with this preparation, we are ready to tackle how to treat this thing-- how to actually find such objects in quantum mechanics. And once we have that, doing field theory will be automatic. And then you will know how to actually calculate the scattering amplitude, which is a good achievement. OK, so let's stop here. |
MIT_8323_Relativistic_Quantum_Field_Theory_I_Spring_2023 | Lecture_17_Chiral_and_Majorana_Spinors.txt | [SQUEAKING] [RUSTLING] [CLICKING] HONG LIU: Yeah, so, today, we are going to start a new topic. OK? So, first, we talk about chiral fermions. So remember, say, under a Lorentz transformation, lambda, the Dirac spinor field transforms as S lambda psi x. And the x prime is a Lorentz transformation of x. So x prime is lambda acting on x, OK? And then the S is given by, say, the exponential of minus i over 2, omega mu nu, sigma mu nu, OK? And the sigma mu nu is given by the commutator of the gamma matrices. So let me just write it down. So sigma mu nu-- just to remind you-- is i divided by 4 times the commutator of gamma mu and gamma nu. OK. So one natural question-- so, previously, when we derived the Dirac equation, we showed that the Dirac equation requires, actually, psi to have four components, OK? But, then, we showed that the Dirac equation is covariant if the psi transforms this way. So the natural question is whether you can actually restrict to a smaller part-- say, to a subset of psi-- whether they still have a well-defined Lorentz transformation-- whether we actually need four complex components to have a well-defined Lorentz transformation, OK? And the answer turns out to be, no, you actually don't need to have four complex components to have a well-defined Lorentz transformation. Actually, you can reduce it, OK? And so there are two ways to reduce it: one is called the chiral fermion, and the other is called the Majorana fermion, OK? So we first talk about the chiral fermion and one way to do it. So, for this purpose, we will look at a specific representation of gamma matrices, OK? Consider-- so now I will use a representation which is different from the one you have seen. So we consider the following one. Gamma 0 equal to 0, i, i, 0-- here the off-diagonal blocks are i times the 2-by-2 identity. Gamma i equal to 0, minus i sigma i, i sigma i, 0. So let's look at this choice of gamma matrices, OK? So I will call this choice star, OK?
And so you can also work it out. You find the sigmas. So sigma 0 i-- this is the sigma corresponding to the boosts-- you can just do the commutator, and you find sigma 0 i is equal to minus i divided by 2 times this block diagonal form: sigma i, 0, 0, minus sigma i-- again, the small sigma i is always the Pauli matrix. And then you can also work out the sigma ij. Then you find it is given by minus 1/2, epsilon ijk, then sigma k, 0, 0, sigma k, OK? So you find that they have the following form. So what do you observe about this? So do you see something? Yes? AUDIENCE: They're block diagonal. HONG LIU: Yes, they are block diagonal. So if sigma is block diagonal, then that means this S is also block diagonal, OK? So when S is block diagonal, what does that mean? Yes? AUDIENCE: [INAUDIBLE] HONG LIU: Hmm? AUDIENCE: [INAUDIBLE] HONG LIU: Exactly. So when the S is block diagonal, that means, when I write psi-- so, psi-- have four components. That means that the upper two components and the lower two components-- they don't transform into each other, OK? They only transform within themselves. They don't transform into each other, OK? So that means this S is block diagonal. So that means, if I write psi x as a two-component vector, psi L and psi R-- so I denote the upper two components by psi L and the lower two components by psi R-- each a two-component complex vector. That means, under Lorentz transformation, under S lambda, psi L and psi R do not mix. So they just transform among themselves. They just transform among themselves. So they actually have a well-defined Lorentz transformation as a smaller unit, OK? You don't need four components to be able to transform under Lorentz transformation. Actually, two components can already transform. OK. So this tells you, in a sense, that Lorentz covariance only requires two-component spinors.
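Taking the board's matrices at face value (the off-diagonal blocks of gamma^0 are i times the 2-by-2 identity, so (gamma^0)^2 = -1 and the metric is mostly-plus, diag(-1, 1, 1, 1)), a quick numpy check of the two claims above -- the Clifford algebra and the block-diagonality of every sigma^{mu nu} -- can be done as follows:

```python
import numpy as np

# Pauli matrices and 2x2 building blocks.
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)

# Chiral representation as read off the board:
# gamma^0 = [[0, i*1], [i*1, 0]], gamma^i = [[0, -i*sigma_i], [i*sigma_i, 0]].
gamma = [np.block([[Z2, 1j * I2], [1j * I2, Z2]])] + \
        [np.block([[Z2, -1j * si], [1j * si, Z2]]) for si in (s1, s2, s3)]

# Clifford algebra with the mostly-plus metric eta = diag(-1, 1, 1, 1),
# consistent with (gamma^0)^2 = -1 in this reading of the board.
eta = np.diag([-1.0, 1.0, 1.0, 1.0])
for mu in range(4):
    for nu in range(4):
        anti = gamma[mu] @ gamma[nu] + gamma[nu] @ gamma[mu]
        assert np.allclose(anti, 2 * eta[mu, nu] * np.eye(4))

# sigma^{mu nu} = (i/4)[gamma^mu, gamma^nu] is block diagonal: the upper
# two components (psi_L) and the lower two (psi_R) never mix under S(Lambda).
for mu in range(4):
    for nu in range(4):
        smn = 0.25j * (gamma[mu] @ gamma[nu] - gamma[nu] @ gamma[mu])
        assert np.allclose(smn[:2, 2:], 0)
        assert np.allclose(smn[2:, :2], 0)

print("Clifford algebra holds; every sigma^{mu nu} is block diagonal")
```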
Just by Lorentz transformation itself, you don't actually need four components. OK. So, now, I'm going to tell you, we actually knew this all along. So how did we know this all along? Yes. AUDIENCE: Why is there no [INAUDIBLE]?? HONG LIU: Sorry. Say it again. AUDIENCE: Didn't we have to go to four components in there because there are no representation [INAUDIBLE]?? HONG LIU: Yeah, yeah. We have to go to four components because there's no two-component representation of the gamma matrices. But that statement is not about the Lorentz transformation. Yeah. To write down the algebra for the gamma matrices, you need four components. But we actually knew all along that two components are enough for Lorentz transformation. How did we actually know it all along? AUDIENCE: Lorentz assumption isn't just [INAUDIBLE] spin like difference between the [INAUDIBLE]?? HONG LIU: Yeah, this is maybe more complicated. We have something much simpler. Yes. AUDIENCE: Is it the massless particle? HONG LIU: Yeah, exactly. So we already said before, if you have the massless case, when m equal to 0, the Dirac equation reduces to two components, and it's enough to do two components, and the Dirac equation is covariant, OK? And so that means that, actually, you should be able to do it with two components, OK? Because the massless particle-- it should be able to transform under the Lorentz transformation. And so, yeah. So we already saw this with the massless case. So the hint from before is that the massless case only requires two components. Since the massless case must also be Lorentz covariant, Lorentz symmetry itself should only require two components. So it is not actually Lorentz covariance that requires the Dirac theory to have four components. It is the mass, OK? So it is the mass-- mass m. If you want to describe a massive particle, then you must have four components, OK? So it is the mass which is the key. OK, good. Any questions on this?
So, now, we have shown, in this particular representation of gamma matrices-- for this particular choice of gamma matrices-- that the psi transforms block diagonally. But then we also said: now consider a different choice of gamma matrices. And then this property will not hold, OK? This property will not hold. Now, the question is, even with other gamma matrices, can we actually reduce psi to smaller components? The answer should still be yes, because we said that all representations of gamma matrices are equivalent. So if we can do it in this choice of gamma matrices, then we should be able to do it in any choice of gamma matrices, OK? So, now, let me tell you how to do it for general gamma matrices. So this property, that you can reduce to two components, should exist for all choices of gamma matrices. Just, for other choices of gamma matrices, separating the psi into psi L and psi R is more subtle. It is no longer simply the upper two components and the lower two components. So we have to do a little bit of work, OK? Actually, we don't need to do much work if you find the right trick, OK? And so the beautiful trick to do this for any choice of gamma matrices is that you can introduce the following object-- what is called gamma 5. So gamma 5 is defined to be i gamma 0, gamma 1, gamma 2, gamma 3, OK? So you take the product of all the gamma matrices together and then with a factor of i, OK? So the i there is for the purpose that-- you can check yourself-- the gamma 5 is actually Hermitian. So the i is there for this purpose, OK? You need the i for this to be true. You can also check yourself that gamma 5 squared is equal to 1, OK? So, this, you can almost easily understand because it's all of gamma 0, gamma 1, gamma 2, gamma 3. So you multiply it by itself again-- because any gamma matrix times itself gives either 1 or minus 1. So you multiply them together; in the end, it can be either 1 or minus 1.
Just turns out, for this choice of i, it's 1, OK? So yeah. And then you can also check the gamma 5 anticommute with any gamma matrices. So mu here is, of course, from 0 to 3. And so this is of-- you can see immediately from here-- you can immediately from here-- because gamma matrices whose indices are not the same, they anticommute, OK? So if you try to commute this with any gamma matrices, you have three of them. Yeah, because this runs over all gamma matrices. So the one in which-- yeah, so if you take with some gamma mu, and that particular one, which is the same as gamma mu, of course, commutes with gamma mu. But then you have three others. But three others will give you minus sign, OK? And you can also check yourself. Gamma 5 actually have 0 trace, OK? So, this, I will leave as an exercise for yourself, what you can do is you did before with other-- yeah, in your homework-- yeah, similar to the exercise you have done in your homework. So, now, from this properties-- now we can say the following things about the gamma 5 matrix. First, because gamma 5 squared, squared to 1. And, also, this is Hermitian. So it is Hermitian means its eigenvalue is all real, OK? So its eigenvalues are all real, and gamma 5 squared equal to 1-- that means its eigenvalue is either plus or minus 1, OK? So have eigenvalues plus, minus 1. And then from the property that this is traceless, they tell you the number of the eigenvalues, which is plus 1, and the number of minus 1. They should be the same. Otherwise, they won't cancel. It won't be traceless. And so each eigenspace is two-dimensional. OK? So you have four eigenvalues. So there's 2 plus 1, 2 minus 1. It must be. So since you have eigenvalues 2 plus 1, 2 minus 1, and then we can introduce a projector to project into the eigenspace, say, with plus 1-- with eigenvalue plus 1 or the eigenvalue minus 1, OK? So I can introduce a projector which, for historical reasons, is called PL. It's defined to be 1/2 1 plus gamma 5. 
And the PR is 1/2 1 minus gamma 5, OK? So this will project into the eigenspace with eigenvalue plus 1. And this will project into the eigenspace with minus 1, OK? So you can check, OK? So you can check they are really projectors. So PL squared equal to PR squared equal to 1 and PLPR equal to 0 and PL plus PR equal to identity, OK? And then-- OK. So, now, I can define psi L to be the projection PL psi-- projecting to the left space, OK? And psi R to be the projection to the other space. OK. I define them this way. And then you can easily see, by definition-- you can easily convince yourself-- gamma 5 acting on psi L is just equal to psi L. And the gamma 5 psi R is equal to minus psi R, OK? So they project into the eigenspaces of plus, minus 1. Yes? AUDIENCE: Wait, why is PL squared equal to PR squared equal to 1? HONG LIU: Oh, sorry. Sorry, sorry, sorry. No, no. This is completely wrong, OK? This is completely wrong. I was dreaming. So PL squared equal to PL, PR squared equal to PR. Sorry. Yeah, thank you. Yeah, so you can check they're projectors, OK? So, indeed, you see-- so, from this definition, you can check this is true, OK? This is a one-second check. And so, indeed, they project to the eigenspaces of gamma 5 with plus, minus 1. OK. So then, by definition-- OK, so now this psi L, psi R, which is now defined for any choice of gamma matrices-- so, again, they each have two independent complex components, OK? And so they are called chiral spinors-- sometimes also called Weyl spinors. And so this is the analog of psi L and psi R here for the general choice of gamma matrices. So, now, we will check this, indeed. So now the claim is that psi L and psi R defined this way will transform among themselves under the Lorentz transformation. They will not mix with each other, OK? So, again, psi L and psi R here each have four components, OK?
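The properties of gamma^5 and of the projectors P_L, P_R quoted above can all be verified numerically. A sketch, using the chiral representation from the board for concreteness (the properties themselves are representation-independent):

```python
import numpy as np

# Chiral-representation gammas, as read off the board earlier.
s = [np.array([[0, 1], [1, 0]], complex),
     np.array([[0, -1j], [1j, 0]], complex),
     np.array([[1, 0], [0, -1]], complex)]
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)
gamma = [np.block([[Z2, 1j * I2], [1j * I2, Z2]])] + \
        [np.block([[Z2, -1j * si], [1j * si, Z2]]) for si in s]

g5 = 1j * gamma[0] @ gamma[1] @ gamma[2] @ gamma[3]

# The four properties quoted in the lecture:
assert np.allclose(g5, g5.conj().T)               # Hermitian
assert np.allclose(g5 @ g5, np.eye(4))            # squares to 1
assert abs(np.trace(g5)) < 1e-12                  # traceless
for gm in gamma:                                  # anticommutes with gamma^mu
    assert np.allclose(g5 @ gm + gm @ g5, 0)

# Projectors onto the +1 / -1 eigenspaces of gamma^5.
PL = 0.5 * (np.eye(4) + g5)
PR = 0.5 * (np.eye(4) - g5)
assert np.allclose(PL @ PL, PL) and np.allclose(PR @ PR, PR)  # idempotent
assert np.allclose(PL @ PR, 0)                                # orthogonal
assert np.allclose(PL + PR, np.eye(4))                        # complete

# In this chiral basis gamma^5 comes out diag(1, 1, -1, -1),
# so PL and PR pick out the upper and lower two components.
print(np.real(np.diag(g5)))
```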
They just have only two independent complex components. So they still have four complex components, OK? And they're still four-component spinors-- it's just that there are only two independent ones. There are only two independent ones. OK, so, now, it's easy to check they actually transform among themselves. So you can check the gamma 5 actually commutes with sigma mu nu. OK. So this is very easy to see. So, from here, gamma 5 anticommutes with any gamma mu. And the sigma mu nu is just the product of two gamma mus-- an even number of gamma matrices. So the gamma 5 will commute with it, OK? So gamma 5 will commute with them. So if gamma 5 commutes with sigma mu nu, then gamma 5 commutes with S lambda, because S lambda is just generated by sigma mu nu. And then that means-- so it commutes with S. That means a transformation by S will not change the eigenvalues of gamma 5, OK? So that means that psi L prime equals S lambda psi L, and gamma 5 acting on psi L prime is still psi L prime. So it's still within the same space. And, similarly, for psi R-- OK, so that tells you that psi L and psi R-- they transform separately, because the gamma 5 commutes with the Lorentz transformation. And so the eigenspaces transform separately from each other. OK. Good. Any questions on this? So you can also find, in the chiral representation-- you can check, in this star-- so the star-- this particular choice of gamma is called the chiral representation, OK? Because in that choice of gamma things simplify: we just have the upper two components and the lower two components. So you can check yourself, just by working it out, that the gamma 5 indeed just has the block diagonal form-- 1, 0, 0, minus 1, OK? So that's why. In that case, it's very simple, OK? But in other representations, gamma 5 can be more complicated. Good. Any questions on this? Yes, you have a question? OK. Yes?
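The chain of reasoning here -- gamma^5 anticommutes with each gamma^mu, hence commutes with every sigma^{mu nu}, hence commutes with S(Lambda) -- can be checked directly. In the sketch below, the sample S(Lambda) = exp(-(i/2) omega_{mu nu} sigma^{mu nu}) uses arbitrarily chosen omega coefficients (an illustrative assumption), and the matrix exponential is a small hand-rolled series:

```python
import numpy as np

s = [np.array([[0, 1], [1, 0]], complex),
     np.array([[0, -1j], [1j, 0]], complex),
     np.array([[1, 0], [0, -1]], complex)]
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)
gamma = [np.block([[Z2, 1j * I2], [1j * I2, Z2]])] + \
        [np.block([[Z2, -1j * si], [1j * si, Z2]]) for si in s]
g5 = 1j * gamma[0] @ gamma[1] @ gamma[2] @ gamma[3]

def sigma(mu, nu):
    return 0.25j * (gamma[mu] @ gamma[nu] - gamma[nu] @ gamma[mu])

# gamma^5 anticommutes with each gamma^mu, so it commutes with the
# *product* of two of them -- hence with every sigma^{mu nu}:
for mu in range(4):
    for nu in range(4):
        assert np.allclose(g5 @ sigma(mu, nu) - sigma(mu, nu) @ g5, 0)

def expm(M, terms=40):
    # Truncated exponential series -- plenty for these small 4x4 matrices.
    out, term = np.eye(4, dtype=complex), np.eye(4, dtype=complex)
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

# A sample S(Lambda): a boost along x mixed with a rotation (omega chosen
# arbitrarily for illustration). It commutes with gamma^5, so it cannot
# move psi_L out of the +1 eigenspace or psi_R out of the -1 eigenspace.
S = expm(-1j * (0.3 * sigma(0, 1) + 0.7 * sigma(1, 2)))
assert np.allclose(g5 @ S, S @ g5)
print("gamma^5 commutes with every sigma^{mu nu} and with S(Lambda)")
```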
AUDIENCE: Another way to do that, as you said, would be to try to find the unitary matrix that shows-- under which this arbitrary representation is equivalent to the chiral representation. From this argument, can we figure out what that unitary [INAUDIBLE] looks like? HONG LIU: Yeah, yeah. Yeah, you can. Yeah, yeah. No it's not a unitary transformation, just a similar transformation, yeah. So each of them are related by a similar transformation. And, indeed, the gamma 5-- gamma 5 in the other representation are related to this one just by a similar transformation, too. Yeah, so I will use that language when I talk about Majorana spinor. So, in this case, it's sufficiently simple. I don't need to use that language. Yeah, but you can use that language. OK, so let's go back to this chiral representation and write the Dirac equation into this chiral representation. Yeah, one second. So, now, if you write the Dirac equation in terms of psi L and psi R-- so, remember, the Dirac equation have the following form of Dirac Lagrangian density. OK? And then since psi is just equal to the sum of the-- so psi-- psi R, OK? And then you can just write this in terms of psi L and psi R. Write this in terms psi L and psi R. And then you find that the cross term vanish. You can also check this explicitly in the chiral representation, but the expression I'm writing down is general, OK? So you can write it as psi dagger partial sub 0, plus i sigma i, partial i, psi L. Yeah, sorry. Yeah. Yeah, actually, psi R dagger. Yeah, let me first write. I think I said something wrong. Yeah, OK. OK? So, yeah, as I said, so this expression only applies to the chiral representation, OK? So in the chiral representation of gamma star-- in this space of star, and then we have two components. Then I can write this psi and the psi L and psi R into two components. And then, yeah. So this is just ordinary sigma matrices, OK? And so this is the expression you get, OK? 
So what you notice-- is that for m equal to 0-- so there's no coupling between psi L and psi R, OK? So it's the mass term which coupled them together, OK? Kinetic term-- psi L-- there's no cross term between the psi L and psi R, OK? And this behavior is actually general. You can write it in arbitrary representations. But, of course, in arbitrary representation, I can no longer use this sigma i, OK? And so in this particular form-- even though this feature is general, but this particular form of the kinetic term only applies for the chiral representation. OK. So for m equal to 0, you don't have coupling between the psi L and psi R. And then you only have to say, oh, diagonal term and psi R term. And then that gives you something else, OK? So, again, this is reduced to our previous statement that if you have a massless case, you can describe using a two-component spinor, OK? So here, indeed. But, here, there's also something extra. So what do you see-- something extra here? Yeah. Did somebody raise your-- yes. AUDIENCE: [INAUDIBLE] HONG LIU: Yeah. AUDIENCE: Why are we now taking the psi equals psi L plus psi R instead of before we had it as psi L and Psi R HONG LIU: Right, right. Yeah, yeah. Sorry. A good question. Yeah, this expression is wrong. OK. Somehow, I was doing a-- I was trying to-- yeah. I remembered I wrote this in the general basis, but then I realized I only wrote it in that basis. Yeah, in the general basis, I would have psi equal to psi L plus psi R. Yeah. But then I realized, I only write the kinetic term in this specific basis. Good. Yeah. Yeah, so that expression does not apply for the chiral basis but apply for the general. So here, actually, something profound happens because, when m equal to 0, when you don't have coupling between psi L and psi R, you actually get the extra symmetry, OK? So in the Dirac Lagrangian, as we discussed earlier, so we have a U1 symmetry. Psi goes to exponential i alpha psi, OK? 
So this Dirac Lagrangian is invariant under that because the psi is complex. But, now, psi L and psi R-- they are separate. So, now, I can actually transform psi L and psi R separately, OK? So under this transformation, psi L and psi R transform the same, OK? But, now, m equal to 0-- I can have psi L goes to exponential i alpha L, psi L. Psi R goes to exponential i alpha R, psi R because they only couple to themselves, OK? And, now, I have this symmetry, OK? And so, now-- so, here, you have U1, and now you have U1 times U1 called U1 times L and U1R. So these are called chiral symmetries because they transform the left and the right separately. OK. Yes? AUDIENCE: I'm a little bit confused why you can't write psi as the sum of the two projections? HONG LIU: Sorry? AUDIENCE: Why you can't write psi as the sum of the two projections? HONG LIU: No. No, I can write it-- no, I can write it that way. Just, now, I'm using the two-component form. When I write two-component form, then I write psi that way, it doesn't make sense because psi L is the upper two component, and psi R is the lower two component. Yeah. Yeah, yeah. I'm using the same notation for this spaces. And so in these spaces, psi L and psi R, they only have two components. But in the general case, there have four components, OK? So in the general case, I can write psi equal to psi L and psi R. But in these spaces, I cannot. Yeah, using this notation, I cannot. So, now, you have a new symmetry equal to-- which you can transform them separately, OK? And the symmetries are one of the most important aspect of physics, and they have very important implications, et cetera. And the chiral symmetry actually has also very important effect in particle physics-- for example, the pions. The pions has to do with-- I will not go into detail. The pions-- they essentially come from the chiral symmetries. Without the chiral symmetries, there's no pion. There's no pions, OK? 
And actually understanding how the pions come from the chiral symmetries, et cetera-- there was a Nobel Prize to Nambu a number of years ago, to which our colleague Goldstone also made a very important contribution. And another interesting thing about this chiral symmetry is the following. Say, if you have a massless Lagrangian, then you can have the symmetry at the classical level, at the level of the Lagrangian. But once you quantize the theory, you can find the symmetry goes away. It could become anomalous: the symmetry is present at the classical level but not at the quantum level. And, again, that plays a very important role in particle physics, actually. OK. Yeah, the bottom line is that the chiral symmetry is very important in many aspects of physics. It's also important in many condensed matter systems, like liquid helium, et cetera. So you can also write this symmetry for general gamma. So for general gamma, you have your previous symmetry. So, now, I'm using the four-component notation, in which psi L and psi R transform the same way. And then, now, you have a new symmetry, with gamma 5-- OK. And now you put the gamma 5 in the exponent. Good. Any questions on this? Yes. AUDIENCE: [INAUDIBLE] HONG LIU: Yeah, yeah. Alpha tilde is just some other constant-- AUDIENCE: Oh, alpha tilde. HONG LIU: Alpha tilde is just some other constant, which multiplies gamma 5. So the way to understand that these two are related-- think about the transformation here. So, here, we can rewrite it a little bit differently. We can consider rewriting this alpha L and alpha R in terms of the following. Let's consider two transformations-- one transformation where psi L and psi R transform the same, and the other transformation where they transform oppositely. They transform with opposite phase, OK? So I write alpha L as alpha plus alpha tilde and alpha R as alpha minus alpha tilde, like that, OK?
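The two ways of writing the chiral symmetry that are being related here can be summarized compactly. This is a sketch in one common convention -- the overall signs of the kinetic terms depend on the metric and gamma-matrix conventions used on the board, so only the structure (mass term couples L and R; two independent phases for m = 0) should be taken as exact:

```latex
% Dirac Lagrangian in the chiral basis, with
% sigma^mu = (1, sigma^i), bar-sigma^mu = (1, -sigma^i):
\mathcal{L} = i\,\psi_L^\dagger \bar{\sigma}^\mu \partial_\mu \psi_L
            + i\,\psi_R^\dagger \sigma^\mu \partial_\mu \psi_R
            - m\left(\psi_L^\dagger \psi_R + \psi_R^\dagger \psi_L\right)

% Only the mass term couples L and R, so for m = 0 there are two
% independent U(1) rotations, U(1)_L x U(1)_R:
\psi_L \to e^{i\alpha_L}\,\psi_L, \qquad \psi_R \to e^{i\alpha_R}\,\psi_R

% Equivalently, in four-component language,
\psi \to e^{i\alpha}\,\psi, \qquad \psi \to e^{i\tilde{\alpha}\,\gamma^5}\,\psi,
\qquad \alpha_L = \alpha + \tilde{\alpha}, \quad
       \alpha_R = \alpha - \tilde{\alpha}
```

Since gamma^5 acts as +1 on psi_L and -1 on psi_R, the gamma^5 rotation gives psi_L the phase alpha tilde and psi_R the phase minus alpha tilde, which is exactly the "opposite phase" transformation described above.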
So, in this way, psi L and psi R transform the same, but-- psi L and psi R, they transform opposite because they have opposite eigenvalue on the gamma 5. And so this is equivalent to that. Good. Any questions on this? Yes. AUDIENCE: Why is it called gamma 5 rather than, say, gamma 4? HONG LIU: I think, again, it's a historical reason. So people often like to go to Euclidean space. So when you go to Euclidean space, you continue gamma 0 to gamma 4. Yeah, just-- yeah, to gamma 4. And then you reserve gamma 4 for that. Then the gamma 5 is the next one you take. Other questions? Yes. AUDIENCE: Is there a physical reason why the massless case is special? Is it because it becomes scale-free, or something like that? HONG LIU: Yeah, yeah. So massless case-- it's always special. And you will see, in physics, actually, the massless case actually gives you very much richer structure, normally, than the massive particle. Mathematically, it's because the massless case-- in the massless case, the representation of the Lorentz group is very different from the massive case. For example, if you have a vector field, say, for Maxwell field, for the photon, massless photon have two polarizations. But if you have a massive vector-- say, for the photon is massive-- then we'll have three polarizations. And so the massless case and the massive case are very, very different. The fermion case is the same. So if you have a massive fermion, you have four complex components. But if you have a massless case, then you have two complex components. AUDIENCE: How about very small mass? Doesn't that cause a problem if you go from x [INAUDIBLE]?? HONG LIU: Yeah, yeah. That's a very interesting question. And there are a lot of subtleties associated with that kind of questions, indeed. Yes? AUDIENCE: Is there any sense in which we can treat psi L and psi R as different dynamical fields, like a complex scalar field the case phi and phi star? 
HONG LIU: Oh, psi L and psi R certainly are independent-- you can certainly treat them as independent dynamical fields, yeah. Other questions? OK. So let's conclude our discussion of the chiral spinors. And now we can talk about the Majorana spinors. OK, so the Dirac spinor, which we have talked about so far, has four complex components. So you have four times two real components, OK? And then the chiral spinor we talked about-- essentially, you have two complex components. It's, say, 2 times 2 real components, OK? 2 times 2 real components. So the next one I'm going to talk about is the Majorana, in which case, I would argue, we have 4 times 1 real components. So it has four independent real components, OK? Yes? AUDIENCE: Sorry, going back real quick. So I'm assuming R and L are for right-handed and left-handed in this [INAUDIBLE]. HONG LIU: Yeah, yeah, yeah, yeah. AUDIENCE: So can you ever have a spinor where-- what does it mean to have one of [INAUDIBLE] non-zero? So both of them are non-zero, I guess. What is that-- because, in my mind, it is either left-handed or right-handed. What is the mixture of-- HONG LIU: No, no, no, no. Physically, the electron contains both left and right. Yeah, so for a massive particle, because they always couple together-- so psi L is turning into psi R, and psi R turning into psi L, all the time. And for a massive particle, you cannot really separate them. But, now, for a massless particle, then psi L and psi R-- yeah, this is a very good question-- psi L and psi R are preserved. So a massless particle is either psi L or psi R. And then if you look at its so-called helicity, it's either left-handed or right-handed. So that's where the names psi L and psi R come from. But, for the massive particle, you cannot make this separation. But for the massless case, then it's preserved. And then it's either left-handed or right-handed. OK, good. And, now, let's talk about the last case-- this case, which is-- you have four real components, OK?
So what do you do? Again, we follow a similar strategy to see whether it's possible to have four real components. Again, the idea is that you first try to find a special representation of gamma matrices so that the spinor can be real, OK? And then you try to generalize to any representation of gamma matrices. OK. So, now, look at the Dirac equation. So if I want to find a real spinor, then I ask myself whether a real spinor is compatible with this equation, OK? So let's see whether it is: if I take the complex conjugate of the equation, then gamma mu becomes gamma mu star. If psi is real, then psi also has to satisfy this conjugated equation, OK? But, in general, gamma mu is complex, as we wrote before-- yeah, I just erased it. For example, in this basis, it's complex. If it's complex, then these two equations are not compatible, and then psi cannot be real, OK? Simple as that. But there's a way out. The way out is to ask whether there exists a representation of gamma mu so that gamma mu is real. If this is real, then these two equations become the same, and then psi being real is compatible with the Dirac equation, OK? It's compatible with the Dirac equation. So then this becomes a question of trial and error, OK? So you try to find a representation of gamma matrices which is real, OK? And then it turns out, you can find it, and here is the answer. So let me just write down the answer. I don't know how Majorana originally found it, but here is the answer. So these are four gamma matrices. And you can see that each of them is real, because sigma 2 is pure imaginary, and sigma 1, sigma 3 are real. And so this is purely real. And you can check: they satisfy the algebra of the gamma matrices.
They anticommute with each other, and each of them squares to plus or minus 1. So these three-- the gamma i-- square to 1. This one-- gamma 0-- squares to minus 1. So, now, this is compatible with the Dirac equation, but this is actually not enough. We also have to be sure this is compatible with the Lorentz transformation, OK? So, now, let's check whether this is compatible with the Lorentz transformation. So, now, we have these gamma mu. And, remember, the sigma mu nu-- I just erased it. So now this is pure imaginary, because if gamma mu and gamma nu are real, their commutator is also real, and then sigma mu nu-- i over 4 times the commutator-- will be pure imaginary. And then that means S lambda-- so this is now purely real. Now, this means this is real. And, now, we are done, OK? That means that if we take psi to be real, then after a Lorentz transformation, it remains real, OK? And so that means that it's compatible with the Lorentz transformation. If it were not compatible with the Lorentz transformation, then we would be stuck, OK? So this shows that this remains real. So such a spinor is called a Majorana spinor, OK? So it has four real components. Yeah, we wrote there-- it has four real components. So you can quantize it, which, I think, I will give as an exercise for you to do, OK? You can quantize it. And then, in this case, the fermion is its own antiparticle-- in contrast to the Dirac spinor, where you have particle and antiparticle. So this is the analog of the real scalar in the spinor case. So this was discovered by Majorana in 1937, OK? And he was very young. At the time, he was 31. And, yeah, a brilliant physicist, complete genius. And then, in 1938-- so he lived in Sicily, OK? So his hometown was in Sicily. So he boarded a ship from Naples to Sicily. And then he just disappeared on the ship, never seen again. At the age of 32, he just disappeared. Yeah, it's a quite-- yeah, extremely brilliant physicist. Yeah.
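The explicit matrices written on the board weren't captured in the transcript. The set below is one standard real choice (an assumption -- the board's matrices may differ by a change of basis, which, by the argument just given, doesn't matter). The numpy check confirms the three facts the argument uses: realness, the Clifford algebra with (gamma^0)^2 = -1 and (gamma^i)^2 = +1, and the pure imaginarity of sigma^{mu nu}, which makes S(Lambda) real:

```python
import numpy as np

s1 = np.array([[0., 1.], [1., 0.]])
s3 = np.array([[1., 0.], [0., -1.]])
eps = np.array([[0., 1.], [-1., 0.]])   # i*sigma_2: real and antisymmetric
Z2 = np.zeros((2, 2))

# One explicitly real choice of gamma matrices (illustrative assumption):
gamma = [np.block([[Z2, eps], [eps, Z2]]),     # gamma^0
         np.block([[-s3, Z2], [Z2, -s3]]),     # gamma^1
         np.block([[Z2, -eps], [eps, Z2]]),    # gamma^2
         np.block([[s1, Z2], [Z2, s1]])]       # gamma^3

# Built entirely from real entries -- compatible with a real psi.
assert all(np.isrealobj(g) for g in gamma)

# Same Clifford algebra as before: (gamma^0)^2 = -1, (gamma^i)^2 = +1.
eta = np.diag([-1., 1., 1., 1.])
for mu in range(4):
    for nu in range(4):
        anti = gamma[mu] @ gamma[nu] + gamma[nu] @ gamma[mu]
        assert np.allclose(anti, 2 * eta[mu, nu] * np.eye(4))

# sigma^{mu nu} = (i/4)[gamma^mu, gamma^nu] is purely imaginary, so
# S = exp(-(i/2) omega sigma) is real: a real spinor stays real, and
# the Majorana condition is Lorentz covariant.
for mu in range(4):
    for nu in range(4):
        smn = 0.25j * (gamma[mu] @ gamma[nu] - gamma[nu] @ gamma[mu])
        assert np.allclose(smn.real, 0)

print("real gamma matrices: the condition psi = psi* is consistent")
```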
And there are all kinds of stories about his disappearance-- that he may have been killed by the Mafia, or maybe it was suicide, et cetera. But nobody knows. Yes? AUDIENCE: You said that all the choices of gamma matrices were equivalent. HONG LIU: Yeah. AUDIENCE: This doesn't really feel equivalent. Is it still equivalent to the other representations? HONG LIU: Yeah, yeah. We will talk about that. So now we have chosen a very specific representation for the gamma matrices, in which psi can be chosen to be real, OK? But how about the general representation? So now we talk about the general case. Majorana spinor-- yeah. Yeah, also, let me just make a remark. The Majorana spinor, of course, also plays a very important role in modern-day physics. Say, for example, people suspect the neutrino could be a Majorana spinor, OK? So checking whether the neutrino is a Majorana spinor-- yeah, it's a forefront experimental program-- has been pursued for many years. And, also, in condensed matter and in quantum information, the Majorana spinor plays a very important role. And so, in condensed matter, you only have the electron. And the Majorana spinor is, like, half an electron, OK? Because the electron has eight real components, right? Remember. And the Majorana spinor only has four components. So the Majorana is, like, half an electron. And precisely because it's heuristically half an electron, it has very stable topological properties, which a single electron does not have. And whether you can engineer Majorana spinors in condensed matter systems then became a Holy Grail. Because if you can do it, then you can achieve more stable quantum computation, et cetera. Yeah. During the last number of years, there have been various experimental reports. People say they have engineered Majorana spinors in the lab, which I think has never been fully confirmed. None of them has been fully confirmed. Anyway, so, yeah, so, now, let's talk about the Majorana spinor in a general basis, for general gamma mu.
So the idea would be similar to the case of the chiral spinor. For the chiral spinor, in the chiral basis it's very simple-- just the upper and lower components. But in a general basis, for the chiral spinor, you have to introduce some other structure to isolate psi L and psi R. So you have-- now, you have this nontrivial condition, OK? And the chiral fermions come from this nontrivial condition. So, now, the key is, how do you find the condition analogous to psi being real in the general basis, OK? Because when gamma mu is generally complex, clearly, you cannot set psi equal to psi star, OK? That does not make sense. You have to find another equivalent equation that does, essentially, the same thing, OK? So that's the basic idea. OK, so, for this purpose, we now use the fact that any gamma matrices are equivalent to each other up to a similarity transformation, OK? So let's denote this basis by gamma M-- gamma M is what's called the Majorana basis. And then any choice of gamma mu is related to gamma M by a similarity transformation-- that means there exists some matrix C that takes any gamma mu into gamma mu M, OK? So there must exist a C such that this equation is satisfied. Good. So, now, given this C, we can easily write down the condition for the general basis, because under such a change of basis, the spinor in the Majorana basis, which is real, is related to the spinor in the gamma mu basis by this transformation C, OK? So the C relates the gamma matrices. But the C, of course, also relates the spinors. It's just a change of basis, OK? And so psi M will be related to psi by C. And now, since psi M is equal to psi M star, that means that C star psi star should be equal to C psi, OK? So that means psi star should be equal to B psi, with B equal to C minus 1 star C, OK? So that is the condition you should impose in the general basis.
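Collecting the spoken steps, the derivation of the general-basis Majorana condition can be summarized as:

```latex
\gamma^\mu_M = C\,\gamma^\mu\,C^{-1},\qquad
\psi_M = C\,\psi,\qquad
\psi_M^* = \psi_M
\;\Longrightarrow\; C^*\psi^* = C\,\psi
\;\Longrightarrow\; \psi^* = B\,\psi,\quad
B = (C^*)^{-1}C = \left(C^{-1}\right)^{*} C .
```

So the reality condition of the Majorana basis is carried over to the general basis by the change-of-basis matrix C.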
So you have to introduce this C. So if you find the transformation between the general gamma mu and the gamma mu M, then you can use that to find the B. And once you find the B, then you can impose the-- yeah, so this is called the Majorana condition in the general basis. So let's understand it a little bit. So, actually, we can understand the B more directly, OK? So, here, we expressed B in terms of C. But we can actually find the B more directly. We can just take the complex conjugate of this equation, because gamma mu M is real. So, since gamma mu M is equal to gamma mu M star, if we take the complex conjugate of that equation, we get that C star gamma mu star C minus 1 star is equal to C gamma mu C minus 1. So, now, if we put all the C's to this side, then we find gamma mu star is just equal to B gamma mu B minus 1, OK? So you just put this to this side. Then this becomes B, and this becomes B minus 1, OK? So, now, we see that this actually makes sense. B is the matrix that takes gamma mu to gamma mu star, OK? So B is the matrix that implements the star, OK? Good. Any questions on this? Yes? AUDIENCE: [INAUDIBLE] this is [INAUDIBLE].. What group do they belong to? HONG LIU: Well, they're just general, nonsingular matrices. Yeah. Yeah, just 4-by-4 nonsingular matrices. They often can be chosen to be unitary. But, in principle, you don't have to choose them to be unitary. OK, so, now, let's double check. So let's call this equation star star. So we showed that, in the Majorana representation, this condition is compatible with Lorentz transformations, OK? So we still need to check that star star is compatible with Lorentz transformations. So what do we mean by being compatible with Lorentz transformations?
We mean that, if we take a psi which satisfies this condition-- take a psi which satisfies star star-- and then we make a Lorentz transformation, psi prime equal to S lambda psi, then psi prime should also satisfy that condition-- star star, OK? So that means this is compatible with Lorentz transformations. It means that psi prime star should be equal to B psi prime, OK? So psi prime should satisfy that same equation. So, now, let's check that. Now, let's check this. So, from here-- so, before checking that, do you have any questions on this? OK. Good. So, first, from this equation-- let's call this star, star, star. From this star cube equation, we can find what happens when B acts on sigma mu nu: B sigma mu nu B minus 1 gives you minus sigma mu nu star, OK? So this is obvious, because sigma mu nu has an i there. So the minus sign comes from the i. And, otherwise, the B takes each of the gamma matrices there into its star, OK? And then that means that S star lambda, which is given by exponential i over 2 omega mu nu sigma mu nu star, is now equal to-- yeah, you can just plug this in. It just becomes exponential minus i over 2 omega mu nu B sigma mu nu B minus 1, OK? So I just inserted that sigma mu nu star is equal to minus that here, OK? So, now, you can see that this B and this B minus 1 are in the exponential. You can immediately take them out. So this is just equal to B S lambda B minus 1, OK? Because when you expand this in a power series, B and B minus 1 always cancel, except the first one and the last one. So we have used this trick many times. Yeah, and then we get a very nice relation: under a Lorentz transformation, the complex conjugate of the Lorentz transformation matrix is again related to it by this B matrix. And, now, it's just immediate, OK? Now it's just immediate: when you have psi prime equal to that, let's just take the star of this equation, OK?
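The trick of pulling B and B minus 1 out of the exponential can be verified numerically. The sketch below assumes a particular chiral-type representation of the mostly-plus gamma matrices in which gamma 2 is real and B can be taken to be gamma 2 (the lecture identifies this choice shortly); the random omega values are purely illustrative:

```python
import numpy as np

s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)

# chiral-type basis adapted to the mostly-plus convention (illustrative choice)
g = [1j * np.block([[Z2, I2], [I2, Z2]])]
g += [1j * np.block([[Z2, sk], [-sk, Z2]]) for sk in s]

def expm(M, terms=60):
    # simple Taylor-series matrix exponential (adequate for these small matrices)
    out, term = np.eye(4, dtype=complex), np.eye(4, dtype=complex)
    for n in range(1, terms):
        term = term @ M / n
        out += term
    return out

# sigma^{mu nu} = (i/4) [gamma^mu, gamma^nu]
sigma = [[0.25j * (g[m] @ g[n] - g[n] @ g[m]) for n in range(4)] for m in range(4)]

# random small antisymmetric omega_{mu nu} (arbitrary sample values)
rng = np.random.default_rng(0)
w = rng.normal(scale=0.2, size=(4, 4))
w = w - w.T

# S(Lambda) = exp(-(i/2) omega_{mu nu} sigma^{mu nu})
S = expm(sum(-0.5j * w[m, n] * sigma[m][n] for m in range(4) for n in range(4)))
B = g[2]  # the real gamma^2 plays the role of B in this basis

# the relation derived in lecture: S(Lambda)* = B S(Lambda) B^{-1}
assert np.allclose(S.conj(), B @ S @ np.linalg.inv(B))
```

The check works because sigma mu nu star equals minus B sigma mu nu B minus 1, so the B's cancel pairwise inside the power series of the exponential.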
So psi prime star is just equal to S star lambda psi star, OK? So that is equal to B S lambda B minus 1 times B psi, OK? So this is equal to B S lambda psi, OK? And so this is equal to B psi prime-- precisely what we were trying to show. Good. Any questions on this? OK, so, now, let me give you an explicit example of this matrix B. So, for the Majorana representation, the B is just equal to the identity, OK? So B is just equal to the identity in this representation. And, now, let's try to give an example of the B in another representation. So suppose we work in the chiral representation, which I wrote down before-- so, yeah, I should not have erased it. Yeah, anyway. So, in the chiral representation I wrote down before, if you stare at that expression, you find that gamma 0, gamma 1, and gamma 3 are pure imaginary. And gamma 2 is real, OK? So pure imaginary means, when you take the star of them, you get a minus sign. And for this one, when you take the star, you just get back itself. So, now, if we look at this equation: if this is pure imaginary, when you take the star, you get minus itself. And getting minus itself means the B actually anticommutes with gamma mu, OK? Because you can just bring B minus 1 to this side, so it just becomes gamma mu star B equal to B gamma mu. So, if this is minus gamma mu, that means B should anticommute with gamma mu, OK? But if gamma mu is real, that means B should commute with gamma mu, OK? So, now, in this chiral basis, these three are pure imaginary. That means B needs to anticommute with them. But this one is real. That means B needs to commute with this guy. Then what is B? AUDIENCE: Gamma 2. HONG LIU: Hmm? AUDIENCE: Gamma 2. HONG LIU: Exactly. So B, in this case, can only be gamma 2, OK? And then we can work out what the Majorana condition is in this basis-- so, essentially, this condition.
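The reality pattern and the resulting B can be checked directly in an explicit chiral-type basis (one standard choice consistent with the mostly-plus convention; the specific matrices are an assumption for illustration). Gamma 2 commutes with itself, anticommutes with the three imaginary gammas, and therefore conjugates every gamma into its complex conjugate:

```python
import numpy as np

s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)

g = [1j * np.block([[Z2, I2], [I2, Z2]])]
g += [1j * np.block([[Z2, sk], [-sk, Z2]]) for sk in s]

# gamma^0, gamma^1, gamma^3 are pure imaginary; gamma^2 is real
assert all(np.allclose(g[m].real, 0) for m in (0, 1, 3))
assert np.allclose(g[2].imag, 0)

B = g[2]
assert np.allclose(B @ B, np.eye(4))  # (gamma^2)^2 = 1, so B^{-1} = B here

# B gamma^mu B^{-1} = gamma^mu* for every mu
for m in range(4):
    assert np.allclose(B @ g[m] @ B, g[m].conj())
```

For the imaginary gammas the conjugation gives minus the matrix (anticommuting through gamma 2), and for the real gamma 2 it gives the matrix back, which is exactly the statement B gamma mu B minus 1 equals gamma mu star.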
So that means that psi star should be equal to gamma 2 psi, OK? So that's the Majorana condition here. So, now, remember, in the chiral basis, we can write psi in terms of psi L and psi R. So, essentially, we have this condition. So I erased my gamma 2-- let me write it here explicitly. It has 0 on the diagonal blocks, minus i sigma 2 in the upper-right block, and i sigma 2 in the lower-left block, OK? So, now, if you look at this condition, this means that psi L and psi R are no longer independent of each other. So psi L star should be equal to minus i sigma 2 psi R or, equivalently, psi R equal to i sigma 2 psi L star, OK? So, in this case, psi then has the following form-- psi L on top and i sigma 2 psi L star on the bottom. So psi R can just be expressed in terms of psi L. So this is the Majorana spinor in the chiral basis, OK? You see, there are only four independent real components, because psi L has two complex components, OK? Yes? AUDIENCE: So why do you [INAUDIBLE] that psi and psi star are not independent [INAUDIBLE]?? HONG LIU: Hmm? AUDIENCE: Why are they-- HONG LIU: No, no, no. No, here, we are imposing this condition, right? We are imposing this condition. Yeah. Yeah, this is the Majorana condition we want to impose in this basis. AUDIENCE: And this is now independent of massless or massive particles? HONG LIU: Yeah, yeah, yeah. Yeah, this is-- yeah. Good? So this concludes our discussion of the Majorana spinor. Do you have any questions on this? Yes? AUDIENCE: So is the orthogonal component of psi-- the Majorana fermion [INAUDIBLE]?? HONG LIU: Sorry? AUDIENCE: The orthogonal component of this Majorana species-- like, possible-- in the chiral one, like, psi L, and then you [INAUDIBLE] and then psi R. [INAUDIBLE] two components of psi. HONG LIU: Yeah. AUDIENCE: So we are-- we have one component of psi, and there should be another one, right? So is that one? HONG LIU: Sorry. I don't quite understand your question. Say it again-- what? AUDIENCE: OK, [INAUDIBLE]. HONG LIU: OK, OK.
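Using the gamma 2 just dictated (zero diagonal blocks, minus i sigma 2 upper right, i sigma 2 lower left), the construction can be checked for an arbitrary psi L. The random psi L below is purely illustrative, and the overall sign of the blocks is a basis convention:

```python
import numpy as np

s2 = np.array([[0, -1j], [1j, 0]])
Z = np.zeros((2, 2))
# gamma^2 as written on the board; note it is real
g2 = np.block([[Z, -1j * s2], [1j * s2, Z]])

rng = np.random.default_rng(1)
psiL = rng.normal(size=2) + 1j * rng.normal(size=2)  # arbitrary two-component psi_L
psi = np.concatenate([psiL, 1j * s2 @ psiL.conj()])  # psi_R = i sigma^2 psi_L*

# the Majorana condition psi* = gamma^2 psi then holds automatically
assert np.allclose(psi.conj(), g2 @ psi)
```

Only psi L is free here, so the Majorana spinor indeed carries four independent real components.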
Other questions? Yes? AUDIENCE: So, I guess, related to an earlier question, how do we consider handedness here for a massless particle that's also Majorana, for which psi L and psi R are [INAUDIBLE]? HONG LIU: Right. So, for a massless particle, you can do it directly, because then you don't have to think about psi R, and you just have the same number of degrees of freedom as a massless particle. Other questions? Good. OK. OK, good. So let's now go to the next topic. We only have a few minutes, so we can only make some general comments. So, so far, mostly, we have been talking about continuous symmetries. But there are also discrete symmetries, OK? So, by definition, discrete symmetries are symmetries which don't have continuous parameters, OK? So continuous symmetries are symmetries in which the transformation depends on a continuous parameter. Discrete symmetries just don't, OK? You don't have a continuous parameter. So, as a simple example, let's imagine we have this real scalar theory, which we considered before. So this theory has a discrete symmetry, because it is invariant under phi goes to minus phi, OK? Because all the terms are even, it's invariant under phi goes to minus phi. And this transformation has no continuous parameter, OK? So this is a discrete symmetry. And if you do it twice, you go back to the identity. So this is often called a Z2 symmetry. OK, so this is called the Z2 symmetry. And there are also spacetime discrete symmetries, OK? So this one is an internal discrete symmetry-- it has nothing to do with spacetime, OK? There are also spacetime discrete symmetries. So spacetime discrete symmetries include, say, if we consider Minkowski spacetime-- you can have t goes to minus t. So you can have the so-called time reversal, which corresponds to t, x goes to minus t, x, OK? You just transform the time. You can also have the so-called parity. You take t.
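The Z2 statement is just that every term in the action is even in phi, so phi goes to minus phi maps the theory to itself, and applying it twice is the identity. A minimal numerical illustration of the potential terms, with arbitrary sample couplings:

```python
import numpy as np

def V(phi, m2=1.0, lam=0.3):
    # all terms even in phi, e.g. (1/2) m^2 phi^2 + lambda phi^4
    return 0.5 * m2 * phi ** 2 + lam * phi ** 4

phi = np.linspace(-3.0, 3.0, 101)
assert np.allclose(V(-phi), V(phi))  # phi -> -phi is a symmetry
assert np.allclose(-(-phi), phi)     # doing it twice is the identity: Z2
```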
Then you reverse all the spatial directions, OK? So you can ask why we actually reverse all three directions. How about if I just reverse one direction, or reverse two directions, OK? That also seems to be a discrete symmetry. And indeed it is. So, if you just flip the direction, say, in the x direction, that's also a discrete symmetry. And if you flip the directions in both x and y, that's also a discrete symmetry. But if you do the reflection in two directions, that's equivalent to a 180-degree rotation in that plane, OK? And so it's part of the continuous symmetries, so it's not an independent discrete symmetry. And, now, when you flip all three directions compared to flipping one direction, you differ only by flipping two directions. So that means flipping all three directions and flipping one direction differ by a 180-degree rotation, OK? So that means this is the only independent discrete symmetry from the spatial reflection point of view. OK. So, for a complex scalar field-- if you consider a complex scalar field, then this is no longer a discrete symmetry, because, remember, we can rotate phi by a phase. When you rotate phi by a phase, if you take that phase to be pi-- say, exponential i pi-- then phi goes to minus phi. And so, in that case, this is part of the continuous symmetry, so it's no longer an independent discrete symmetry. But, here, there is, nevertheless, another discrete symmetry. Can you see what the other independent discrete symmetry is here? Yes? Good. You can take phi to phi star, OK? You can exchange phi with phi star, OK? It's complex conjugation. And this is often called charge conjugation. OK. This is often called charge conjugation because, remember, heuristically, we can think of phi and phi star as-- yeah, one of them creates the particle, and the other creates the antiparticle.
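The counting argument, that reflecting two axes is a 180-degree rotation and that parity differs from a single reflection by a rotation, is easy to confirm with 3x3 matrices:

```python
import numpy as np

def Rz(th):
    # rotation by angle th about the z axis
    return np.array([[np.cos(th), -np.sin(th), 0.0],
                     [np.sin(th),  np.cos(th), 0.0],
                     [0.0, 0.0, 1.0]])

reflect_xy = np.diag([-1.0, -1.0, 1.0])    # flip x and y
assert np.allclose(reflect_xy, Rz(np.pi))  # = 180-degree rotation: continuous

parity = np.diag([-1.0, -1.0, -1.0])       # flip all three directions
reflect_z = np.diag([1.0, 1.0, -1.0])      # flip one direction
assert np.allclose(parity, reflect_z @ Rz(np.pi))  # they differ by a rotation
```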
And they have opposite charge, OK? So it's called charge conjugation. So, to this, we give a symbol called T-- script T. And, to this, we give a symbol called P-- script P. And, to this, we give a symbol called script C. So, altogether, they are called the CPT symmetries, OK? Yeah. Yeah, let's stop here.
MIT 8.323 Relativistic Quantum Field Theory I, Spring 2023 -- Lecture 14: Lorentz Covariance of the Dirac Equation. [SQUEAKING] [RUSTLING] [CLICKING] PROFESSOR: So last time, we talked about the Dirac equation. So the Dirac equation has the following form: gamma mu partial mu minus m psi equal to 0. So, here, we have two spaces. One is your standard physical spacetime, with x mu. And then we have an internal space, which is labeled by the index of the psi and the gamma, which is suppressed here. So psi should be considered as a four-component column vector-- alpha equal to 1, 2, 3, 4-- and the gamma mu should be considered as matrices-- 4x4 matrices. And this space labeled by alpha is called the spinor space. So you have to be careful that now we have two spaces intertwined together. One is your ordinary spacetime-- so psi is a function of the spacetime-- but it also carries a spinor index. And so this is a matrix equation. So, altogether, there are four equations here, one for each component of psi. And yeah, so let me just write this a little better. So this is x-- let me call this equation 1, which I will use later. And these gamma mu are not ordinary matrices. They satisfy this condition: the anticommutator of gamma mu with gamma nu is equal to 2 eta mu nu. So that means different gamma matrices anticommute with each other. So, if you pass them through each other, you get a minus sign, because when mu is not equal to nu, the right-hand side is 0. And for the squares-- for gamma 0 squared, you get minus 1, because eta 0 0 is minus 1. And for gamma i squared, you just get 1. So this is very simple. Yes? STUDENT: When you say gamma i squared is 1, you mean, like, it's the identity matrix? PROFESSOR: Yeah, it's the identity matrix.
Yeah, when I say gamma 0 squared is minus 1, it's also minus the identity matrix. Yeah, exactly. And, also, not all gamma mu are Hermitian. The relation is that gamma mu dagger is equal to gamma 0 gamma mu gamma 0. So, again, from here-- yeah, this is just a compact way to write its properties. When mu is equal to 0, you have gamma 0 times gamma 0 squared, which gives you minus gamma 0-- that tells you that gamma 0 is anti-Hermitian. And if you take the index to be i here, then, since 0 and i are not the same, they anticommute. You can pass this gamma 0 through this gamma i. You get a minus sign, and then gamma 0 squared gives you another minus sign, so you get plus 1. So that means that gamma i dagger is equal to gamma i. So gamma i is Hermitian. Yes? STUDENT: So is there supposed to be an i in front of the derivative, or do we absorb the i into the gamma? PROFESSOR: So, in my convention, there is no i. So, with this-- the eta mu nu here is the mostly plus metric. So some people, when they use, say, the mostly minus metric, then there may be an i in there. Yeah, so it depends on your convention in defining the gamma matrices. STUDENT: So, for minus plus plus plus, no i-- OK, sorry. PROFESSOR: Yeah. Good. Other questions? Yeah, as I mentioned last time, different conventions are annoying-- but if you just stick to one convention, it should be fine. Stick to my convention-- that would be fine. Yes? STUDENT: Sorry, how can we get the expression for gamma mu dagger? PROFESSOR: Oh, yeah, this is just a compact way to write down that gamma 0 is anti-Hermitian and gamma i is Hermitian. Yeah, it's just compact. This is useful so that you can treat all gamma mu in the same way, so that you don't always have to separate them. Good. Other questions? So, last time, we also said you have many, many choices for gamma mu, but they are all physically equivalent.
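Both algebraic facts, the anticommutator with the mostly-plus metric and the compact dagger relation, can be checked numerically. The explicit 4x4 matrices below are one common chiral-type choice adapted to this convention (an illustrative assumption; the lecture leaves the representation unspecified):

```python
import numpy as np

s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)

g = [1j * np.block([[Z2, I2], [I2, Z2]])]                # gamma^0: anti-Hermitian
g += [1j * np.block([[Z2, sk], [-sk, Z2]]) for sk in s]  # gamma^i: Hermitian

eta = np.diag([-1.0, 1.0, 1.0, 1.0])  # mostly-plus metric

# Clifford algebra: {gamma^mu, gamma^nu} = 2 eta^{mu nu} * identity
for m in range(4):
    for n in range(4):
        acomm = g[m] @ g[n] + g[n] @ g[m]
        assert np.allclose(acomm, 2 * eta[m, n] * np.eye(4))

# compact hermiticity relation: gamma^mu dagger = gamma^0 gamma^mu gamma^0
for m in range(4):
    assert np.allclose(g[m].conj().T, g[0] @ g[m] @ g[0])
```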
And different choices are convenient for different purposes. So, often, we just pick one of them, depending on the problem-- yeah, that comes with a little bit of experience. So, for certain problems, certain choices of gamma become more useful-- become easier to manipulate. Good. So, now, let's talk about the Lorentz covariance of the Dirac equation, which we started a little bit last time. So Lorentz covariance means that the equation looks the same in all Lorentz frames. So, when you make a Lorentz transformation-- imagine you go to another frame. You start with your frame defined by this x mu. So, now, imagine you go to another frame by making a Lorentz transformation. So Lorentz covariance means your equation, when written in that frame in terms of x prime mu, has the same form. So, last time, we talked about the scalar equations at the end. And another equation we have seen before is the Maxwell equation, so let me also very quickly mention the Maxwell equation. So the Maxwell equation, written in terms of the vector potential-- so now, in contrast-- remember, for the scalar case-- last time-- let me just write it down here. Last time, we said, for the scalar case, when you go to a different frame, phi prime evaluated at the new coordinate should be equal to phi x. So this is the transformation for a scalar field. But, now, if you want to do the Maxwell equation, we have to think about how this A mu transforms under Lorentz transformations. And, now, A mu is a four vector in spacetime. And so that means that A mu should transform as a vector. So this means that, when you go to a new frame, A mu transforms as lambda mu nu A nu x. So, here, A mu evaluated at the new position can be written as a linear superposition of the values at your original position. And this linear superposition is itself a Lorentz transformation, because A mu is a vector.
And so that means that, under a Lorentz transformation, the different components of A mu should also change under the transformation. So, given this-- I will not show it now; it would just take a couple of minutes, but you should convince yourself-- try to check yourself-- that the Maxwell equation, which can be written, for example, as partial mu F mu nu equal to 0, implies partial mu prime F prime mu nu equal to 0 in the new frame, where F prime mu nu is obtained from this new A mu prime. And these two equations are equivalent. You have this equation in one frame, and, when you go to a different frame, you get that equation. So the equation has exactly the same form, but now it's in the new frame. Good? So, now, let's go to the Dirac equation. So, now, go to the Dirac equation. So, in the Dirac equation, again, we need to ask how the psi should transform under Lorentz transformations. But, now, this psi is a completely new object. So this is not like in the Maxwell case, where we know this is a spacetime vector, so you can easily guess how it should transform. This spinor space is completely new. So, now, we have to figure out how psi transforms. So let's suppose, under a Lorentz transformation lambda, psi transforms as follows. Psi prime alpha x prime-- because, again, you evaluate at the new position in the new frame-- since this carries an index, in principle, this can be a superposition. You can mix them in the internal space. In principle, we can have a form like this-- say, for some matrix S, which depends on lambda. So, in principle, you can have a transformation like this. Say, when you evaluate at the new position, it's given by the value at the old position, but now you can make some rotation in the internal space, just as here for the Maxwell field. And then the Lorentz covariance would be the statement, say, for some matrix.
And then Lorentz covariance is the statement that, starting from equation 1, you should be able to find, in the new frame, an equation like this-- gamma mu partial mu prime minus m psi prime x prime equal to 0-- call it equation 2. So you should get an identical equation, but now everything is in terms of primes. So I want to emphasize that gamma mu does not change, despite carrying an index mu. The gamma mu are just some constant matrices-- just four fixed matrices. They are not dynamical variables. They do not transform under the spacetime transformation, so gamma mu does not change. Good. So, now, the question of whether the Dirac equation is Lorentz covariant boils down to whether we can find such an S. So, if we can find such an S for which this is true, then we say the Dirac equation is Lorentz covariant. And this is how the Dirac field should transform. So this psi is normally called the Dirac field, and this is how a Dirac field should transform. So psi is often called the Dirac field. It's also often called the spinor field. And it's also often called the Dirac spinor field. Anyway, and then that should be the way it transforms. Yes? STUDENT: Yeah, I was just confused, because, before, for the scalar fields, when we did transformations, we would either say-- like, transform the fields, like phi goes to phi prime, or transform the coordinates, like x goes to x prime. So why did we transform both in this case? PROFESSOR: Yeah. No, no, no-- in this case-- when you talk about the transformation of the scalar field itself, of course, it's just phi equal to phi prime, because the function changes. But, here, we are asking about the form of the equation in the new frame. And so, when you go to the new frame, your function form changes, but your coordinate also changes.
So that's why, when we say covariance, you need to evaluate your new function at the new position. Yeah, that means they look the same in different frames. Yes? STUDENT: So should I think of S as the representation of lambda in the spinor space? PROFESSOR: Yeah, exactly. Good. That's exactly the right mathematical language to talk about this, which we'll mention later. At the moment, I didn't want to use mathematical language. Yeah, for those people who are familiar with group theory, indeed-- this S would be the representation of the Lorentz group in the spinor space. Good. So, now, let's try to find this S. So, before we do that, let's first write down what partial prime mu is. So recall, partial prime mu should be partial over partial x prime mu-- the derivative with respect to the new coordinate. And from the coordinate transformation, you can easily figure it out-- just the chain rule. So this is equal to partial, partial x mu times lambda minus 1 nu mu. So this is very easy to figure out, because x prime transforms with lambda, so partial over partial x prime has to transform with the inverse matrix. So this implies that gamma mu partial mu prime should be equal to lambda minus 1 nu mu gamma mu partial nu. Let me just write it better. So, with this preparation, we can try to see whether we can find the S from this equation-- yeah, find this equation from 1. So what we can do is multiply equation 1 from the left by S. So let's imagine we multiply by S-- such a matrix. Again, I always just suppress the spinor indices and write things in matrix form. So we have two terms here. S acts in the same space as gamma mu, so we cannot simply commute them-- normally, they don't commute. But m is a constant, so we can pass S through it. So we can rewrite this equation as follows.
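The chain-rule step can be sanity-checked numerically: since x prime equals lambda x, partial derivatives transform with lambda inverse, and for a Lorentz matrix that inverse is obtained by the usual index gymnastics, eta lambda transpose eta. The sample boost below (rapidity 0.5, an arbitrary illustrative value) shows both facts:

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])
ch, sh = np.cosh(0.5), np.sinh(0.5)
L = np.array([[ch, sh, 0.0, 0.0],
              [sh, ch, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])  # a boost along x

assert np.allclose(L.T @ eta @ L, eta)                 # defining Lorentz condition
assert np.allclose(np.linalg.inv(L), eta @ L.T @ eta)  # Lambda^{-1} = eta Lambda^T eta
```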
We can write it as S gamma mu S minus 1 partial mu minus m, acting on S psi, equal to 0. So, for the m term, I just passed S through, but for this term, I inserted S minus 1 and S-- so, yeah, it's the same. So, now, we can use this equation here, because S times psi is just equal to psi prime x prime. So, now, we get S gamma mu S minus 1 partial mu minus m psi prime x prime equal to 0. We just used that equation. Yes? STUDENT: I'm a little confused how you can pass S through partial mu, because, given that partial mu acts on psi, and it has to have a representation in this internal space [INAUDIBLE]. PROFESSOR: Yeah, so that's the key I emphasized earlier. So x mu and psi alpha are in different spaces. And all of these S and gamma are constant. They don't depend on the spatial location. And so partial mu does not act on S. From partial mu's point of view, S is just a constant-- a spacetime constant. It only rotates the different components of psi at a single point. Good? So, now, we have this equation. So, now, we can compare this object with this object. Yeah, in particular, this object is equal to that. So we conclude that 1 goes to 2 if S gamma mu S minus 1 is equal to this object, lambda minus 1 nu mu gamma-- I think-- yeah, I think I'm messing up the notation a little bit here. I think my index is a little bit wrong. Let me just make sure. Oh, yeah, here it is partial nu. Sorry, here it is partial nu, so I need to exchange the mu and the nu indices here. So mu and nu, then gamma. So this has to be equal to that, and you need to compare partial nu with partial nu-- you have to exchange mu and nu here anyway. So this is the equation we have to satisfy. So we have to find a matrix S whose action on gamma mu gives you this. So it's as if, when you conjugate gamma mu by S, gamma mu transforms under an inverse Lorentz transformation.
So, now, let's try to find this S. So, to do this, again, we use a trick which we have been using before. So how would you approach this problem? STUDENT: The identity. PROFESSOR: Good. Yes, just do infinitesimal transformations. So, once you learn how to do infinitesimal transformations, then you always know how to do finite ones. And infinitesimal ones are much, much easier to do. So, now, again, we consider lambda mu nu close to the identity. So that means we write lambda mu nu as delta mu nu, which is the identity, plus omega mu nu, and take the omega to be small. And, also, remember, previously, we discussed that omega mu nu, when you lower the index, is actually antisymmetric. And this is infinitesimal, so we take this to be infinitesimal and work everything out to first order in omega. So, similarly, lambda minus 1 mu nu is just equal to delta mu nu minus omega mu nu. So, yeah, just to leading order in the omega expansion, the inverse matrix is given by that. So, now, on the right-hand side, when lambda is close to the identity, the right-hand side is just gamma mu-- the identity does not do anything-- just gamma mu, plus something proportional to omega. That means that S must also, when lambda is close to the identity, have the structure of the identity plus something linear in omega. So the S must also have this structure. Yeah, let me just not write the index-- just directly write the identity. And then it should be something proportional to omega mu nu. So, by convention, we write it this way. So it should be linear in omega mu nu and sigma mu nu. So the sigma mu nu are a bunch of matrices. So remember, S is a 4x4 matrix, and omega mu nu is just a number. So these will be a bunch of 4x4 matrices. Each omega mu nu can, in principle, independently multiply some matrix. So this is the most general way to expand this to linear order in omega.
So this sigma is essentially just the first derivative of S with respect to each omega mu nu. And this i over 2 is just convention. And, similarly, the inverse just corresponds to changing the sign here, to leading order in omega. Yeah, so, to emphasize, each sigma mu nu is a matrix. It should be understood as some matrix in the spinor space. So, now, you just need to plug into this equation. Plug this S and S minus 1 into this equation. Yeah, just expand both sides to first order in omega, and you equate the coefficients. From that, you determine the sigma. So let me call this equation star. Yeah, I'll just call this equation 3. So then, to linear order in omega mu nu-- when you expand equation 3 on both sides to order omega and equate both sides, you find the following equation. This is just a couple of lines of algebra, so I urge you to do it yourself. It is given by i times the commutator of sigma lambda rho with gamma mu equal to eta lambda mu gamma rho minus eta rho mu gamma lambda. So you get this equation. So the left-hand side is very easy to understand. Whenever you have a commutator-- for people who have done this Baker-Hausdorff et cetera, at first order, you always get the commutator. So that's where this commutator comes from. On the right-hand side, when you expand this, you essentially just get omega mu nu, because we have to lower the indices. So you have some eta here. You have an eta here, and, yeah, that's how you get the right-hand side. And the reason you get two terms is because it should be antisymmetric in the-- yeah. Good. So, now, it just boils down to solving this equation. So, if we can find a sigma satisfying this equation, then we are done. So, yeah, of course, now you do it by trial and error. And the bottom line is that there is a solution. So let me just write down the solution. And the nice thing about other people having found the solution is that you can just check it.
So you can check that this quantity solves the equation: sigma lambda rho equal to i over 4 times the commutator of gamma lambda and gamma rho. You plug it into that, and then you just evaluate these gamma matrices and use this kind of equation over and over, and you will find it is satisfied. Again, I will leave it as an exercise for yourself. Yes? STUDENT: I have a question about this board. So you start off with saying that partial mu is transformed, and that's how you get this lambda inverse. Then, the last thing is now actually-- this gamma is transformed, and it's not partial mu transformed. I'm a bit confused how it-- it seems like you're linking a transformation in partial mu with a transformation in-- PROFESSOR: No. So, here, we want to match 1 and 2. So the step is-- we have each equation. From equation 1, we reach here. From equation 2, we just plug this in there. And so each equation has done one step, and then I equate them. STUDENT: Right, but I guess what I'm asking is, equation 1 has partial mu transformed, and then that's how you get your lambda inverse. And then, in equation 2, you are transforming your gamma. PROFESSOR: No, we want to show this-- you want to match this equation with this equation. This equation is derived from 1. So we want to derive-- we want to find 2 from 1. This is equation 1. So I just slightly rewrote equation 2 by inserting this transformation here, and then I matched them. Other questions? OK, good. So, now, given this equation, we can immediately write the finite transformation. So, with the finite transformation-- for each lambda mu nu, you can obtain the corresponding omega mu nu, and then this is also finite. And, now, we can just obtain the S by exponentiating this. And then the corresponding S would be S equal to exponential of minus i over 2 omega mu nu sigma mu nu. And you can check yourself.
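The claimed solution can indeed "just be checked", for instance numerically: with sigma lambda rho equal to i over 4 times the commutator of gamma lambda and gamma rho in an explicit representation (the chiral-type matrices below are an illustrative choice), the identity i [sigma lambda rho, gamma mu] = eta lambda mu gamma rho minus eta rho mu gamma lambda holds component by component:

```python
import numpy as np

s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)

g = [1j * np.block([[Z2, I2], [I2, Z2]])]
g += [1j * np.block([[Z2, sk], [-sk, Z2]]) for sk in s]

eta = np.diag([-1.0, 1.0, 1.0, 1.0])

# sigma^{lambda rho} = (i/4) [gamma^lambda, gamma^rho]
sigma = [[0.25j * (g[l] @ g[r] - g[r] @ g[l]) for r in range(4)] for l in range(4)]

for l in range(4):
    for r in range(4):
        for m in range(4):
            lhs = 1j * (sigma[l][r] @ g[m] - g[m] @ sigma[l][r])
            rhs = eta[l, m] * g[r] - eta[r, m] * g[l]
            assert np.allclose(lhs, rhs)
```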
This satisfies-- you can check this satisfies equation 3, and that's the finite version. Good. Any questions on this? OK, good. So now, let's make some remarks. So this S just generates Lorentz transformations in spinor space, because it only acts on the spinor indices alpha and beta. So sigma^{mu nu}, according to our standard terminology, is called the generator of the transformation. So these are the generators. When mu nu are spatial directions, that corresponds to a spatial rotation, and 0 i corresponds to a boost. So that means sigma^{ij}, which is (i/4) [gamma^i, gamma^j], gives the generators of rotations. So remember, if you recall how we do the Lorentz transformation, omega^{ij} corresponds to the rotation angle in the ij plane: you just rotate in the ij plane by angle omega^{ij}. So this now corresponds to the generator of rotations in the ij plane. And because gamma^i and gamma^j are Hermitian-- recall that gamma^i is Hermitian-- sigma^{ij} is also Hermitian: when you take the dagger, the i gives you a minus sign, but the commutator also gives you a minus sign, and so this is Hermitian. So that means the S corresponding to a rotation is unitary. So this is a unitary matrix. So now let's consider sigma^{0i}, which corresponds to the generator for boosts. This has the form gamma^0 gamma^i-- these are the generators for boosts in the i-th direction in spinor space. And now, remember, gamma^0, when you take the dagger, you get a minus sign. So sigma^{0i}, if you take the dagger, also gets a minus sign-- so it's anti-Hermitian. So that means the boost matrices-- the S corresponding to a boost transformation-- are not unitary. So this is not unitary.
So normally, as we said before, when you do a symmetry transformation, the transformation is a unitary transformation. But in this case-- this is a classical transformation-- it's actually not a unitary matrix. So this implies that, for a general Lorentz transformation, say one including both rotations and boosts, S dagger S is not equal to 1. So this has very important consequences, for example, for writing down the action for the Dirac equation. So far, we only wrote down the Dirac equation. Remember, previously we normally started with the action first, and then from the action, we derived the equation of motion. But in this case, since this spinor is a completely new concept, we started with the equation. But now, if we want to write down an action-- which, by definition, should be Lorentz invariant-- then we should construct quantities which are invariant under Lorentz transformations. And so this property then becomes key. Yes? STUDENT: So these are operators on the spinor space, and we're talking about Hermiticity and stuff. But Hermitian is with respect to an inner product, and we haven't talked-- PROFESSOR: No, here, we are not talking about quantum mechanics. Here, we're just talking about classical equations. They're just matrices-- ordinary 4x4 matrices. We are talking about whether they are unitary matrices or not. Other questions? So now consider psi dagger psi. Psi dagger is a row vector, and psi is a column vector, so altogether this is a number. But this transforms under a Lorentz transformation as psi dagger S dagger S psi, and since S dagger S is not equal to 1, this is not a scalar under the Lorentz transformation.
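These Hermiticity properties, and the resulting unitarity statements, can also be verified directly. Below is a sketch under the same assumed conventions and block representation as before; the finite parameter 0.7 is just an arbitrary example value, and a small Taylor-series matrix exponential is used to keep the snippet dependency-free.

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])
pauli = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)
g = [np.block([[1j * I2, Z2], [Z2, -1j * I2]])] + \
    [np.block([[Z2, 1j * p], [-1j * p, Z2]]) for p in pauli]
sigma = [[(1j / 4) * (g[l] @ g[r] - g[r] @ g[l]) for r in range(4)] for l in range(4)]

def expm(A, terms=60):
    # Plain Taylor-series matrix exponential; fine for these small 4x4 generators.
    out, term = np.eye(4, dtype=complex), np.eye(4, dtype=complex)
    for n in range(1, terms):
        term = term @ A / n
        out = out + term
    return out

# sigma^{ij} (rotation generators) are Hermitian; sigma^{0i} (boosts) anti-Hermitian.
rot_hermitian = all(np.allclose(sigma[i][j].conj().T, sigma[i][j])
                    for i in (1, 2, 3) for j in (1, 2, 3))
boost_antihermitian = all(np.allclose(sigma[0][i].conj().T, -sigma[0][i])
                          for i in (1, 2, 3))

# Hence a finite rotation is represented unitarily, a finite boost is not.
S_rot = expm(-1j * 0.7 * sigma[1][2])    # rotation in the 1-2 plane
S_boost = expm(-1j * 0.7 * sigma[0][1])  # boost along direction 1
rot_unitary = np.allclose(S_rot.conj().T @ S_rot, np.eye(4))
boost_unitary = np.allclose(S_boost.conj().T @ S_boost, np.eye(4))
```

The boost generator exponentiates to a Hermitian positive matrix with eigenvalues off the unit circle, which is exactly the non-unitarity the lecture is pointing at.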
So now, we have to search a little bit harder to find the scalar. In order to write the action, we need to find something which is invariant under the Lorentz transformation. The easiest thing to think about is this quantity, because it automatically gives you a number. But this thing won't work, so we have to search a bit harder. To do that, we can get a hint from the following identity. So let's look at what this S dagger really is-- the property of S dagger. Recall-- I think I erased it, so let me just write it again-- gamma^mu dagger = gamma^0 gamma^mu gamma^0. So now, let's find sigma^{mu nu} dagger, written in a uniform way, even though we discussed the components separately. You can easily check, because of this property, that sigma^{mu nu} dagger = - gamma^0 sigma^{mu nu} gamma^0. Because sigma is just a commutator, you can easily work this out. So now we can find S dagger. S dagger = exp((i/2) omega_{mu nu} sigma^{mu nu} dagger), and this is equal to exp((i/2) omega_{mu nu} (- gamma^0 sigma^{mu nu} gamma^0)). So now, remember that gamma^0 squared is equal to minus 1. Whenever you have such a situation-- because gamma^0 squared is minus 1-- you can actually take the gamma^0 outside of the exponential. So this is actually equal to - gamma^0 exp((i/2) omega_{mu nu} sigma^{mu nu}) gamma^0. If you do a Taylor expansion of the exponential, then when you take powers, the gamma^0 at the end of each term pairs with the gamma^0 at the beginning of the next term, and each pair gives you a minus 1. Then you have only the first gamma^0 and the last gamma^0 left, and the minus 1's flip the minus sign inside the exponential to a plus sign. So you just do the Taylor expansion and you will find this. And now, we find a nice relation.
So this exponential, exp((i/2) omega_{mu nu} sigma^{mu nu}), is just S inverse. So we find that S dagger is actually - gamma^0 S^{-1} gamma^0. And this property tells us that the quantity psi dagger gamma^0 psi should transform as a scalar. So let's take a look. Under a Lorentz transformation, this becomes psi dagger S dagger gamma^0 S psi. Now plug in S dagger = - gamma^0 S^{-1} gamma^0: you get - psi dagger gamma^0 S^{-1} gamma^0 gamma^0 S psi. The gamma^0 squared gives you a minus 1, which cancels the overall minus sign, and S^{-1} cancels with S. So this is actually Lorentz invariant. So now, we have found a nice Lorentz invariant quantity. Since we use this all the time, it's convenient to introduce a new notation: psi bar = psi dagger gamma^0. So here, it's all just classical. This is a dagger, and it's all just matrix manipulation. Right now, we're considering a classical theory, and so it's convenient to introduce objects like this. And then we know that psi bar psi is a scalar-- that invariant quantity just becomes psi bar psi. So it's convenient to work out how psi bar transforms by itself. You just use this relation, so you can check yourself, as an exercise: under a Lorentz transformation, psi bar prime (x prime) = psi bar(x) S^{-1}. Similarly, using the transformation of the fields, you can also check that gamma^mu partial_mu prime psi prime (x prime) = S gamma^mu partial_mu psi(x). So you can also check this equation yourself.
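The finite identity S dagger = - gamma^0 S^{-1} gamma^0, and the resulting invariance of psi dagger gamma^0 psi, can be checked for a generic finite transformation. A sketch follows, with a random antisymmetric omega_{mu nu} standing in for a generic mix of rotations and boosts; the conventions and representation are the same assumptions as before.

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])
pauli = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)
g = [np.block([[1j * I2, Z2], [Z2, -1j * I2]])] + \
    [np.block([[Z2, 1j * p], [-1j * p, Z2]]) for p in pauli]
sigma = [[(1j / 4) * (g[l] @ g[r] - g[r] @ g[l]) for r in range(4)] for l in range(4)]

def expm(A, terms=80):
    out, term = np.eye(4, dtype=complex), np.eye(4, dtype=complex)
    for n in range(1, terms):
        term = term @ A / n
        out = out + term
    return out

rng = np.random.default_rng(0)
omega = rng.normal(size=(4, 4)) * 0.3
omega = omega - omega.T                      # antisymmetric omega_{mu nu}
gen = sum(omega[a, b] * sigma[a][b] for a in range(4) for b in range(4))
S = expm(-0.5j * gen)                        # S = exp(-(i/2) omega_{mu nu} sigma^{mu nu})

# The key identity derived in the lecture:
identity_ok = np.allclose(S.conj().T, -g[0] @ np.linalg.inv(S) @ g[0])

# Hence psi^dagger gamma^0 psi is unchanged when psi -> S psi:
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
before = psi.conj() @ g[0] @ psi
after = (S @ psi).conj() @ g[0] @ (S @ psi)
scalar_ok = np.isclose(before, after)
```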
Yeah-- actually, for this one, you don't use the property of S dagger. You only use how partial mu prime transforms and how psi transforms, and then you can show this is true. Good. So this gamma^mu partial_mu will appear a lot, because it appears in the Dirac equation. So it's convenient to introduce a new notation: gamma^mu partial_mu we define to be partial-slash. So essentially, anything with a slash means that quantity contracted with gamma^mu. And then this equation can be written as partial-slash prime psi prime (x prime) = S partial-slash psi(x). Good? So let me just mention one more thing you can work out yourself. You can check-- these are all things you can check yourself once you have those transformations-- that psi bar gamma^mu psi transforms as a vector. That means psi bar prime (x prime) gamma^mu psi prime (x prime) = Lambda^mu_nu psi bar(x) gamma^nu psi(x). So if you view this whole thing as a vector, then the primed quantity is just the Lorentz transformation Lambda acting on it. Again, this is something you can show just based on the transformation of S and the relations between the different gamma matrices. Good. Any questions on this? So now, with these preparations, we can write down the action which gives rise to the Dirac equation-- the Dirac action. So I will just write down the answer. It's very intuitive. It is S = -i integral d^4x psi bar (gamma^mu partial_mu - m) psi. So this is the answer. Yeah, so there are various things to unpack here.
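The vector transformation of psi bar gamma^mu psi, together with the underlying relation S^{-1} gamma^mu S = Lambda^mu_nu gamma^nu used to match the two equations earlier, can likewise be checked numerically. In the sketch below, Lambda is built from the same antisymmetric parameter omega as S, via Lambda = exp(W) with W^mu_nu = eta^{mu lambda} omega_{lambda nu}; the random omega and the representation are assumptions as before.

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])
pauli = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)
g = [np.block([[1j * I2, Z2], [Z2, -1j * I2]])] + \
    [np.block([[Z2, 1j * p], [-1j * p, Z2]]) for p in pauli]
sigma = [[(1j / 4) * (g[l] @ g[r] - g[r] @ g[l]) for r in range(4)] for l in range(4)]

def expm(A, terms=80):
    out, term = np.eye(4, dtype=complex), np.eye(4, dtype=complex)
    for n in range(1, terms):
        term = term @ A / n
        out = out + term
    return out

rng = np.random.default_rng(1)
omega = rng.normal(size=(4, 4)) * 0.3
omega = omega - omega.T
gen = sum(omega[a, b] * sigma[a][b] for a in range(4) for b in range(4))
S = expm(-0.5j * gen)
Lam = expm(eta @ omega)          # the spacetime Lorentz matrix for the same omega
Sinv = np.linalg.inv(S)

# S^{-1} gamma^mu S = Lambda^mu_nu gamma^nu
intertwine_ok = all(
    np.allclose(Sinv @ g[m] @ S, sum(Lam[m, n] * g[n] for n in range(4)))
    for m in range(4))

# Hence psi-bar gamma^mu psi transforms as a four-vector:
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psibar = psi.conj() @ g[0]
lhs = np.array([(psibar @ Sinv) @ g[m] @ (S @ psi) for m in range(4)])
rhs = Lam @ np.array([psibar @ g[m] @ psi for m in range(4)])
vector_ok = np.allclose(lhs, rhs)
```

The relation between omega and Lambda here (raising one index with eta before exponentiating) is the standard one, stated as an assumption since the lecture leaves it implicit.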
So first, based on those relations-- this is a scalar, and the transformation of this-- you can immediately see that this action is a Lorentz scalar, because it only involves two kinds of quantities: psi bar psi, which comes with the m term, and psi bar gamma^mu partial_mu psi, whose transformation we already showed here. So you can easily check, just based on those equations, that this is a scalar, and the action is Lorentz invariant. The second thing is that this i here is to make the action real: you can show that if you take the complex conjugate of this, you actually get a minus sign, and so you need the i to make it real. And this minus sign cannot be explained now; we will talk about it later when we quantize the theory, and we will see that we actually need to put the minus sign here. Yeah? STUDENT: The definition of psi bar has psi dagger gamma 0. It feels like we're singling out the time direction in the definition, because of gamma 0-- is it true that we're singling out the time direction? PROFESSOR: We are not really singling out the time direction. It's just due to the property that gamma 0 is not Hermitian. If you look at all these complications, they are related to the fact that S is not unitary. And the reason S is not unitary is because gamma 0 is not Hermitian. So you have to put in the gamma 0 in various places to compensate for that. Other questions? Yes? STUDENT: So I guess for the action, if you wanted a Lorentz scalar, you could have constructed some dot product of that object that transforms as a vector, right? So the psi bar gamma-- so that would be allowed in principle. I guess this gives the right answer, but that could have appeared in the action. PROFESSOR: Yeah, this does appear in the action. Because this is a vector, when it contracts with partial mu, that gives you a scalar.
Yeah, so that gives you a scalar. So you can understand it both ways; the fact that this transforms as a vector is also related to this term here. Other questions? Yes? STUDENT: To follow up on that, if I contract that with itself, would it give like another [INAUDIBLE].. PROFESSOR: What? STUDENT: If I contract that [INAUDIBLE] with itself. PROFESSOR: Yeah, but then you will have four psi's. And four psi's will give rise to an interacting theory. Yeah, that's right. Here, I'm writing down a free theory right now. So you can also see that this action gives rise to the Dirac equation. Just treat psi bar as independent of psi, because it corresponds to psi dagger. If you do the variation with respect to psi bar, you automatically get the Dirac equation. And if you vary with respect to psi, you can easily check, by integration by parts, that you get the complex conjugate version of the Dirac equation acting on psi bar. So now let me just say a few words on why you need the i here. So here, I'm listing some relations. Again, for each of them, you should really write it down on paper yourself, stare at it, maybe do the derivation to get intuition. Right now, I'm just writing down the relations. You can check yourself that when you take (psi bar psi) dagger-- again, just go through all these steps-- you find it's equal to minus psi bar psi. This is just two lines; I will not write them explicitly for you. And similarly, for the other term, you can check that psi bar gamma^mu partial_mu psi, if you take the dagger-- this one is slightly more complicated, but you can still work it through-- is just equal to the negative of itself, plus total derivatives. So this one is not exactly a minus sign; you have to throw away some total derivatives.
Total derivatives, when you plug them into the action, give rise to boundary terms, but we will always assume the fields vanish at infinity. So that explains why you need this i: when you take the complex conjugate of the quantity here, you always get a minus sign. Good. Any questions? Good. So now, it's getting a little bit awkward because our printer broke today. So I didn't bring enough of my notes, so now I have to look at my computer to remember my notes. One second. Good. Ugh. I have to find the location. OK, good. So any questions on this? No? OK, so now let's move to the next topic. This concludes our discussion of the Dirac equation. We have derived the Dirac equation, we have discussed how various quantities in the Dirac equation should transform, and finally we wrote down the Dirac action. So the logical next step is to quantize-- to go to the quantum field theory. So far, everything is classical. The next step is that we want to quantize this theory. And when we quantize the theory, we will see, remarkably, fermions. We will see the Pauli principle. But as we said before, when we quantize a theory, the simplest way is to first find all its classical solutions; the classical solutions then become solutions to the operator equations, and then we can essentially automatically quantize. So before we actually do the quantization, it's better to find all the classical solutions of the Dirac equation. Unlike the Klein-Gordon equation, where we can just immediately write down the solutions, the Dirac equation is a little bit more intricate, so we need to spend a little effort to find all its solutions. So that's what we will do. Unfortunately, we are not going to finish today, and then we will have a long spring break.
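At the classical level, where psi is just a commuting complex column vector (as emphasized above, it's all matrix manipulation), the statement (psi bar psi) dagger = - psi bar psi means psi bar psi is pure imaginary, so the explicit i in the action makes that term real. A quick numerical illustration under the same assumed representation; for anticommuting fields the bookkeeping is different, which is part of what quantization will address.

```python
import numpy as np

# gamma^0 in the block representation assumed earlier: diag(i, i, -i, -i).
g0 = np.diag([1j, 1j, -1j, -1j])

rng = np.random.default_rng(2)
psi = rng.normal(size=4) + 1j * rng.normal(size=4)

pbp = psi.conj() @ g0 @ psi                    # psi-bar psi = psi^dagger gamma^0 psi
sign_flips = np.isclose(np.conj(pbp), -pbp)    # (psi-bar psi)* = -(psi-bar psi)
pure_imaginary = np.isclose(pbp.real, 0.0)     # equivalent statement
i_times_real = np.isclose((1j * pbp).imag, 0.0)  # i * (psi-bar psi) is real
```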
So I hope you still remember what we talked about today when you come back. So now, classical solutions. By construction, the Dirac equation has solutions which we know must be proportional to e^{ikx} with k^2 = -m^2. This is by construction, because when we square it, we get the Klein-Gordon equation. So this k^mu would be (omega_k, k). But this is not enough, because psi has four components, and this only determines one overall factor. We also have to determine the behavior of all four components. That's what we are going to do now. We will separate the solutions into two types. One type we call psi_+(x), which corresponds to u_k e^{ikx}. And the other we call psi_-(x), which corresponds to v_k e^{-ikx}. So u_k and v_k are four-component spinors, with the same structure as psi, because the exponential is just a number. Because this k has positive frequency, the first is called the positive energy solution, and the second is called the negative energy solution. But it's the same as in the scalar case: when we quantize the theory, there are really no negative energy excitations. Positive energy and negative energy are just names; the actual physical excitations always have positive energy. And u_k and v_k are four-component complex vectors. Yes? STUDENT: So it seems like the psi should be labeled by k. PROFESSOR: Yeah-- so this is just a basis of solutions. Indeed, we should label them by k. Good. So essentially, in the scalar case, you just expand in terms of plane waves, and then you just get some constant. Here, it's a little bit more intricate: we have a vector. Now, we need to solve for these vectors.
So our goal is to solve for these vectors. You just plug these two into the Dirac equation, and then you get the equations for u_k and v_k. I will also suppress the k in u and v, just for notational simplicity. So when you have (gamma^mu partial_mu - m) psi = 0, you just plug in psi_plus and psi_minus, and then you find the following equations: for u, you get (i k-slash - m) u = 0, and-- let me just double-check the sign-- yeah, indeed-- you get (i k-slash + m) v = 0. So you get these two equations, and our goal is just to solve them. And k-slash, just as with partial-slash before, is defined as k_mu gamma^mu. Good. You can also work out the complex conjugate equations. Let me just write them down, because they will sometimes be used later: u bar (i k-slash - m) = 0, and v bar (i k-slash + m) = 0. So now, let's try to work out u_k and v_k by solving those equations. You can, in principle, do it by brute force. After all, these are just 4x4 matrix equations-- just linear algebra. In principle, we can solve them. But physicists are often lazy, so even for a problem we can solve directly, we still look for shortcuts. In this case, there are two possible shortcuts-- two possible ways we can go-- and let me describe both. Actually, I think we only have time to describe one today. So to find the explicit forms of u and v, one simple thing to do is to first consider the simple case where the particle is at rest. When the particle is at rest, omega is just equal to m, and k equals 0. And then those equations just become (i m gamma^0 - m) u = 0 and (i m gamma^0 + m) v = 0.
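A useful consistency check on these equations is the identity k-slash k-slash = k^2 times the identity, which follows from the Clifford algebra. On shell, where k^2 = -m^2, the operators (i k-slash - m) and (i k-slash + m) then multiply to -(k^2 + m^2) = 0, so both the u and the v equations admit nontrivial solutions. A numerical sketch, with an arbitrary example momentum and the same assumed representation:

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])
pauli = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)
g = [np.block([[1j * I2, Z2], [Z2, -1j * I2]])] + \
    [np.block([[Z2, 1j * p], [-1j * p, Z2]]) for p in pauli]

mass = 1.3
kvec = np.array([0.4, -0.2, 0.5])          # arbitrary example 3-momentum
omega_k = np.sqrt(mass**2 + kvec @ kvec)   # on-shell energy
k_up = np.array([omega_k, *kvec])          # k^mu (upper index)
k_low = eta @ k_up                         # k_mu (lower index)
kslash = sum(k_low[m] * g[m] for m in range(4))

k2 = k_low @ k_up                          # k^2 = k_mu k^mu = -m^2 on shell
kslash_sq_ok = np.allclose(kslash @ kslash, k2 * np.eye(4))
on_shell = np.isclose(k2, -mass**2)

# (i k-slash - m)(i k-slash + m) = -(k^2 + m^2) * identity = 0 on shell
I4 = np.eye(4)
product_ok = np.allclose((1j * kslash - mass * I4) @ (1j * kslash + mass * I4),
                         np.zeros((4, 4)))
```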
Oh-- sorry, I should have a minus sign here, because it's k with the upper index that equals omega; k with the lower index equals minus omega. And here, when we contract with gamma^0, we use the lower index, so we have a minus sign. So m can be canceled on both sides, and the equations just become i gamma^0 u = -u and i gamma^0 v = +v. So this tells you that u and v are essentially just eigenvectors of i gamma^0. So now let's write them explicitly, using the explicit representation of the gamma matrices. Let me just copy this. So let's use the representation with gamma^0 = (i 1, 0; 0, -i 1) and gamma^i = (0, i sigma^i; -i sigma^i, 0), written in 2x2 blocks. And if you plug this gamma^0 into those equations, the equations for u and v become very simple. For u, it just becomes (0, 0; 0, 1) u = 0-- again, this is in 2x2 blocks-- and for v, (1, 0; 0, 0) v = 0. So that means the solution for u-- let me write an upper index (0), meaning this is for zero momentum-- can be chosen as u^(0) = (xi, 0), and for v, we can choose v^(0) = (0, eta), where xi and eta are arbitrary two-component complex vectors. So once you have u^(0) and v^(0), we can choose a basis. For example, u^(0)_1 = (1, 0, 0, 0) and u^(0)_2 = (0, 1, 0, 0). And similarly for v^(0)_1 and v^(0)_2, you just put the 1 in the lower two components.
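In this representation, i gamma^0 = diag(-1, -1, +1, +1), so the rest-frame conditions i gamma^0 u = -u and i gamma^0 v = +v pick out exactly the upper two and lower two components, matching the basis above. A minimal check, with the representation assumed as before:

```python
import numpy as np

g0 = np.diag([1j, 1j, -1j, -1j])   # gamma^0 in the assumed block representation
ig0 = 1j * g0                       # = diag(-1, -1, +1, +1)

diag_ok = np.allclose(ig0, np.diag([-1.0, -1.0, 1.0, 1.0]))

u_basis = [np.eye(4)[0], np.eye(4)[1]]   # u^(0) = (xi, 0): upper two components
v_basis = [np.eye(4)[2], np.eye(4)[3]]   # v^(0) = (0, eta): lower two components
u_ok = all(np.allclose(ig0 @ u, -u) for u in u_basis)
v_ok = all(np.allclose(ig0 @ v, v) for v in v_basis)

# Equivalently, the corrected rest-frame equations (-i m gamma^0 -/+ m) u/v = 0:
mass = 1.0
rest_u_ok = all(np.allclose((-1j * mass * g0 - mass * np.eye(4)) @ u, 0)
                for u in u_basis)
rest_v_ok = all(np.allclose((-1j * mass * g0 + mass * np.eye(4)) @ v, 0)
                for v in v_basis)
```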
And now, once you have the vectors at k = 0, what do you do for general k? Yes? STUDENT: Lorentz boost. PROFESSOR: Yes, just do a Lorentz boost. We know the matrix S. So for general k, you just take u_k = S u^(0) and v_k = S v^(0), and then you can find the behavior for general k. But this is easier said than done. To do this is actually not so easy. Even though it sounds like a great idea-- OK, let's find the zero-momentum solution and then just do a boost-- this step is still a little bit tedious. But it's doable, and a little bit simpler than solving the original equation by brute force. But there is, again, still another, simpler method, where you can actually just guess the answer. You don't have to do any calculations-- you can just guess the solution for the full equation. And I think we don't have time to talk about it today, so we will talk about it next time. And yeah, so hopefully, you still remember what we talked about today when we come back. And I hope you have a good spring break.
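The boost method can at least be verified numerically, even if carrying it out by hand is tedious: build S and the matching Lambda from the same omega, boost the rest-frame momentum and the rest-frame spinor, and check the Dirac equation directly. A sketch, with a random omega (which mixes boosts and rotations) and the same assumed representation and conventions:

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])
pauli = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)
g = [np.block([[1j * I2, Z2], [Z2, -1j * I2]])] + \
    [np.block([[Z2, 1j * p], [-1j * p, Z2]]) for p in pauli]
sigma = [[(1j / 4) * (g[l] @ g[r] - g[r] @ g[l]) for r in range(4)] for l in range(4)]

def expm(A, terms=80):
    out, term = np.eye(4, dtype=complex), np.eye(4, dtype=complex)
    for n in range(1, terms):
        term = term @ A / n
        out = out + term
    return out

rng = np.random.default_rng(3)
mass = 1.0
omega = rng.normal(size=(4, 4)) * 0.4
omega = omega - omega.T                        # generic omega_{mu nu}
gen = sum(omega[a, b] * sigma[a][b] for a in range(4) for b in range(4))
S = expm(-0.5j * gen)                          # spinor-space transformation
Lam = expm(eta @ omega)                        # matching Lambda^mu_nu

k_up = Lam @ np.array([mass, 0.0, 0.0, 0.0])   # boost the rest-frame momentum
k_low = eta @ k_up
kslash = sum(k_low[a] * g[a] for a in range(4))
onshell_ok = np.isclose(k_low @ k_up, -mass**2)  # still on shell

u0 = np.eye(4)[0].astype(complex)              # a rest-frame solution u^(0)
u_k = S @ u0                                   # boosted spinor u_k = S u^(0)
dirac_ok = np.allclose((1j * kslash - mass * np.eye(4)) @ u_k, 0)
```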
MIT 8.323 Relativistic Quantum Field Theory I (Spring 2023), Lecture 2: Symmetries and Conservation Laws. [SQUEAKING] [RUSTLING] [CLICKING] HONG LIU: Let us start. First, I forgot to mention last time: I will always use the metric convention for Minkowski spacetime given by eta_{mu nu} = diag(-1, +1, +1, +1). So the time component has minus 1, and the spatial components are positive 1. So also, let me first remind you what we did in the last lecture. In the last lecture, we talked about the principle of locality, which is a powerful principle, and this naturally leads to the concept of fields. I will generally denote fields using this kind of notation-- when we talk about fields abstractly, we will use this notation. So phi_a is just a label for different fields. And the fields depend on spatial coordinates and also on time. You can think of the spatial dependence as a label for the degrees of freedom: at each spatial point there are degrees of freedom, and the time describes their evolution. So essentially, a classical field theory is like classical mechanics, but now with an infinite number of degrees of freedom-- because now you have some finite number of degrees of freedom at each spatial point. And the principle of locality also implies that the action for such dynamical variables-- the fields are our dynamical variables-- has a local form: the action is always a single spacetime integral of some function, called the Lagrangian density, which is a function of the fields at the point x and their derivatives. So because of locality, the action has a very simple form. OK.
And then, as in classical mechanics, you can introduce the canonical momentum conjugate to your dynamical variable. So we have a momentum density pi^a conjugate to phi_a at each point: pi^a = partial L / partial phi-dot_a, where phi-dot is the time derivative of phi. And then you can also introduce the Hamiltonian density, which is pi^a phi-dot_a - L, with a summed, and the Hamiltonian is obtained by integrating this over space. Also, when you do the variation of the action, you find the equation of motion, which is given by the usual Euler-Lagrange form. So when we study field theory, as with any other subject, we always start with the simplest examples. And the simplest example would be a single scalar field, which has no index-- we just have phi, which depends on the spacetime point x. And the simplest action for such a scalar field you can just write down based on general principles. So we discussed last time that the simplest action for a scalar field has the following form: S = -(1/2) integral d^4x (partial_mu phi partial^mu phi + m^2 phi^2). And the sign here is determined by the metric convention. So this is the simplest scalar field theory we can write down. Here, I have written a quadratic function of phi, which we wrote as a general function last time. And the kinetic term comes from Lorentz symmetry: this is the simplest derivative term you can write down which respects Lorentz symmetry. You can have more complicated terms-- you can have this term squared, for example-- but that would be more complicated. So this is the simplest one, and this will be the simplest field theory we study first. OK?
So in this case, for this example, the momentum density is just equal to phi dot-- the time derivative of phi. And the equation of motion is a linear equation for phi: partial_mu partial^mu phi - m^2 phi = 0. So this is a very simple theory with a very simple equation of motion. But later, we will see it actually teaches us a lot about general field theory. So that's a very quick summary of some of the main points of the last lecture. Do you have any questions? Yes? AUDIENCE: You haven't specified if phi_a is either complex or real. HONG LIU: Right. So phi here is a real field-- phi just takes real values. Yeah. Other questions? Good? OK. So now we will talk about the second topic related to our discussion of classical field theory: symmetries and conservation laws. When we have a theory with such an action, we say the theory has a symmetry according to the following definition. A symmetry is some transformation of each field phi_a into some new function phi_a prime, which can depend on all the fields-- here, prime just means a different function; it does not mean a derivative-- which leaves the action invariant. So whenever that happens, we say this is a symmetry. So for example-- let me give a label to this equation; let me call this equation star. In the example of star, the symmetries include, for example, translations. Imagine we do a constant shift of the coordinates, x^mu to x^mu prime = x^mu + a^mu, where a^mu is a constant vector. And now assume that phi transforms as follows: when you change the coordinates, it essentially just changes the label on the fields, but the value of the field should not change.
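As a concrete check of this equation of motion, a plane wave exp(i(kx - omega t)) solves it precisely when omega^2 = k^2 + m^2, since partial_mu partial^mu = -partial_t^2 + grad^2 in the mostly-plus convention. A small numerical sketch using finite differences; the specific values of m, k, and the sample point are arbitrary choices.

```python
import numpy as np

m, k = 1.0, 0.7                     # example mass and wavenumber
omega = np.sqrt(k * k + m * m)      # dispersion relation implied by the EOM
t0, x0, h = 0.3, 0.2, 1e-4          # sample point and finite-difference step

def phi(t, x):
    # complex plane wave; its real part is a real solution
    return np.exp(1j * (k * x - omega * t))

# central second differences in t and x
d2t = (phi(t0 + h, x0) - 2 * phi(t0, x0) + phi(t0 - h, x0)) / h**2
d2x = (phi(t0, x0 + h) - 2 * phi(t0, x0) + phi(t0, x0 - h)) / h**2

# partial_mu partial^mu phi - m^2 phi = (-d^2/dt^2 + d^2/dx^2 - m^2) phi
residual = -d2t + d2x - m * m * phi(t0, x0)
solves_eom = abs(residual) < 1e-5
```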
So that means the new field evaluated at the new point should be the same as the original field at the original point: phi prime (x prime) = phi(x). So suppose phi transforms like this. This is how a scalar field should transform under a general spacetime transformation, such as this change of x: you change the label, but the value of phi should not change. So the new phi evaluated at the new location should be the same as the value of phi at the original location. OK. And you can easily check yourself-- I will not do it here; this is an exercise for yourself-- that this action is actually invariant under this transformation with this change of x. OK. Any questions on this? The second transformation under which this is invariant is Lorentz symmetry. A Lorentz transformation is another kind of spacetime transformation on the spacetime coordinates: you take x^mu to x^mu prime = Lambda^mu_nu x^nu, where Lambda^mu_nu is a constant Lorentz transformation matrix. And again, under such a coordinate transformation, phi should transform in the same way. And you can check yourself-- because of the contraction of partial_mu with partial^mu-- that this is actually a symmetry. Again, this is left as an exercise for yourself. And in fact, I think in your pset, you will do something similar. Any questions on this? Good?
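The reason the contraction makes this work is the defining property of a Lorentz matrix, Lambda^T eta Lambda = eta, which is exactly what keeps partial_mu phi partial^mu phi invariant. A quick numerical sketch; Lambda is built by exponentiating a random antisymmetric generator (just an example choice), and the vector v stands in for the gradient of phi at a point.

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])

def expm(A, terms=60):
    # Plain Taylor-series matrix exponential for a small 4x4 matrix.
    out, term = np.eye(4), np.eye(4)
    for n in range(1, terms):
        term = term @ A / n
        out = out + term
    return out

rng = np.random.default_rng(4)
omega = rng.normal(size=(4, 4)) * 0.4
omega = omega - omega.T            # antisymmetric omega_{mu nu}
Lam = expm(eta @ omega)            # a finite Lorentz transformation

# Defining property of the Lorentz group:
lorentz_ok = np.allclose(Lam.T @ eta @ Lam, eta)

# So any contraction v^mu eta_{mu nu} v^nu is invariant under v -> Lambda v:
v = rng.normal(size=4)             # stands in for partial^mu phi at a point
invariant = np.isclose(v @ eta @ v, (Lam @ v) @ eta @ (Lam @ v))
```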
So-- [LIGHT CHUCKLE]-- continuous symmetries are transformations, like star-star-star, involving continuous parameters. And both 1 and 2 are clearly continuous symmetries. You can easily see it from here: the transformation parameter is a, and a is four numbers-- a four-vector-- which can be changed arbitrarily. It's a continuous parameter: you can take it to be 0, you can take it to be 0.1 in all directions, et cetera. So it's something that can be continuously changed. And similarly, the Lorentz transformation contains continuous parameters. So do you remember how many continuous parameters the Lorentz transformation contains in four dimensions? Yes? AUDIENCE: 16. HONG LIU: Hmm? AUDIENCE: 16. HONG LIU: No. [LIGHT CHUCKLE] Yes? AUDIENCE: Six. HONG LIU: Yeah, six. Because you have three spatial directions, you have three rotations, and then you also have three boosts-- so altogether six. So here, there are six continuous parameters, and for translations there are four. So both are continuous symmetries. In contrast, this action also has symmetries which are not continuous. Can you say what they are? Parity is a good answer. But there's still something slightly simpler. Notice that the action is quadratic in phi. So what do you observe? There is a symmetry phi goes to minus phi, because the action is quadratic in phi. And also, of course, there is the symmetry called parity, in which the spatial direction x goes to minus x, and phi, again, transforms as before. And you can check this is also a symmetry. So in both of these cases, no continuous parameters are involved, and these are called discrete symmetries.
In this case, both of them are called Z2. And this Z2 means that if you do the transformation twice, it goes back to the identity. OK? So here, if you do it twice, it goes back to the identity. And this is the same. OK? So this is a Z2 transformation. So symmetries can be separated into continuous symmetries or discrete symmetries depending on whether you have continuous parameters or not. But there's also another classification of symmetries: into gauge symmetries and global symmetries. So there are global symmetries-- corresponding to transformation parameters that are spacetime independent. OK? So both of these are examples of global symmetries because the parameters are just constants. The a mu don't depend on spacetime. And the lambda don't depend on spacetime. OK. And you can also have local symmetries, in which case the transformation parameters are spacetime dependent. OK? So an example of a local transformation which you may remember is the so-called gauge transformation in E&M. So later, we will go back to E&M again, and you will see examples of this local symmetry. Any questions? Yes? AUDIENCE: So if the transformation changes the action but doesn't change the equations of motion, is that still a symmetry? HONG LIU: Yeah, that's a very good question. So by definition, if it leaves the action invariant, we say it's a symmetry. And in general-- actually, almost always-- if it leaves the action invariant, it also leaves the equation of motion invariant. Yeah, but it's not guaranteed. Just purely from a mathematical point of view, it's not guaranteed. But for all physical examples, it's almost always the same. Other questions? OK. Good.
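The Z2 invariance mentioned above (phi goes to minus phi) can be confirmed in one line, since every term in the free density is quadratic. A sketch, with the 1+1-dimensional density as an assumed stand-in for the one on the board:

```python
import sympy as sp

t, x, m = sp.symbols('t x m', real=True)
phi = sp.Function('phi')(t, x)

# Quadratic free-scalar density (a sketch, not the exact board notation)
def L(f):
    return (sp.diff(f, t)**2 - sp.diff(f, x)**2 - m**2 * f**2) / 2

# Z2: phi -> -phi. Every term is quadratic, so the density is unchanged.
delta = sp.simplify(L(-phi) - L(phi))
print(delta)  # 0

# Doing the transformation twice brings you back to the identity:
assert -(-phi) == phi
```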
And now there's a very simple but very deep connection between symmetries and conservation laws. You may have already learned it-- yeah, actually I forgot when I learned it myself. So some of you may have learned it in high school. Some of you may have learned it in 8.01, et cetera. Anyway, there's a connection between symmetries and conservation laws. And any conservation law can be understood as a consequence of a symmetry. So in classical mechanics, you may remember that time translation leads to energy conservation. And spatial translation symmetry leads to momentum conservation. And rotational symmetry leads to angular momentum conservation. OK? So this should be something you already know, say, from your classical mechanics. But in classical field theory, this can be generalized. OK? So here, there's the Noether theorem-- first discovered by Emmy Noether. It says any continuous global symmetry leads to a conservation law. OK? And so no matter what kind of symmetry-- in addition to those we are familiar with-- any time you have a continuous global symmetry in your system, then you have a conservation law. OK. So now let me give a proof of this Noether theorem. OK? So before I prove it, do you have any questions regarding its statement? Yes? AUDIENCE: So what's the definition of-- you said a continuous symmetry is one that depends continuously on a parameter. But if you wanted to write out, like, equation star-star-star in a way that specializes to continuous symmetry, is there a way to do that-- to show just how to check if something is continuous-- HONG LIU: Oh, yeah. You just check whether there's a continuous parameter. You just check whether there is a parameter or not. Does this answer your question? Yeah.
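The classical-mechanics statement above-- time translation leads to energy conservation-- can be checked directly: dE/dt vanishes once the equation of motion is used. A symbolic sketch for an assumed harmonic potential (any potential would do):

```python
import sympy as sp

t, m, k = sp.symbols('t m k', positive=True)
x = sp.Function('x')

# Harmonic potential V = 1/2 k x^2 as an illustrative choice.
# The energy is the Noether charge of time translations.
E = m * sp.diff(x(t), t)**2 / 2 + k * x(t)**2 / 2

# dE/dt, with the equation of motion m x'' = -k x imposed ("on shell"):
dEdt = sp.diff(E, t).subs(sp.diff(x(t), t, 2), -k * x(t) / m)
result = sp.simplify(dEdt)
print(result)  # 0
```

Off shell, dE/dt is m x' x'' + k x x'; it is only after substituting the equation of motion that the two terms cancel, exactly mirroring the on-shell step in the field-theory proof that follows.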
Normally, you can just see whether there's a parameter. [LIGHT CHUCKLE] If you write down the transformation explicitly, you will be able to just see. For the Lorentz transformation and the translation, you can just see it explicitly. OK. So an important aspect of a continuous symmetry is that, because you have a continuous parameter, you can continuously relate it to the identity. So the trivial transformation-- you just don't transform at all, right? So in this case, a mu is equal to 0. And in this case, the trivial transformation is just lambda mu nu equal to the identity matrix. And then you can imagine a tiny, an infinitesimal rotation or an infinitesimal boost. And that corresponds to a so-called infinitesimal transformation. OK? So here, you can also just translate by a mu a little bit-- so very close to the identity. So any global continuous symmetry has an infinitesimal form. OK? It's just when your transformation parameter is very close to the identity, very close to 0. And so for example, for a general transformation like this, if the transformation is close to the identity, then we can always write it in the following form-- phi a goes to phi a prime, which is equal to phi a, plus some small parameter epsilon times some function f a of, say, phi b and the derivative of phi b, et cetera. OK? So this epsilon is the infinitesimal transformation parameter. OK? So if you have a continuous symmetry, there's always an infinitesimal transformation which is close to the identity. So this epsilon corresponds, in this case, to a mu being very small. Or for the Lorentz transformation case, it corresponds to the rotation angle being very small-- or the boost being very small. OK? So the epsilon can be any of those parameters.
And then this f can be some arbitrary function. OK? So this f can be some arbitrary function of your fields. OK. Good? So in this case-- yeah, for the symmetry transformation, we'll use the notation delta-hat phi a. So this transformation is then given by delta-hat phi a equals epsilon f a. OK? So delta-hat here denotes an infinitesimal symmetry transformation. OK. So now let's consider the variation of S under this delta-hat. By definition of a symmetry-- because this is a symmetry-- delta-hat S should be equal to 0. OK? Because the action should be invariant under the symmetry. And so now let's look at the consequence of it. So now we can just vary L. This means that the variation of L, the Lagrangian density, must be a total derivative, OK, since the transformation is proportional to epsilon. So epsilon is a small parameter. So when we do the transformation, we only need to keep track of things to first order in epsilon. And so the variation of L must also be proportional to epsilon. OK? So since the action must be invariant, that means the variation of L must be a total derivative-- epsilon times partial mu k mu, for some k mu. OK? So there must exist some k mu. And this k mu can also be 0. In that case, the Lagrangian density is itself invariant. Yes? AUDIENCE: Is delta-hat the variation under your transformation? HONG LIU: Sorry? AUDIENCE: Is delta-hat the variation under the transformation? HONG LIU: Delta-hat just denotes such a transformation. This is not a general variation. This is very specific. This is a symmetry transformation. And so under the symmetry transformation, your action should be invariant by definition because this is a symmetry. And in turn-- because S is the integration of L-- that means the variation of L, the Lagrangian density, must be a total derivative.
And here, we keep track of everything to first order in epsilon. And so this must be proportional to epsilon times a total derivative. OK? And so we must have this structure, yeah, for some k mu, which can be 0. OK? Good? So now let's look at the implications of this equation. So this equation is the constraint imposed by the symmetry. OK? Now let's look at the implications of this constraint. So now we can just do the variation of L. Yeah. Maybe I will keep that equation. And then I will do this board first. AUDIENCE: So I have a question. Does this problem-- HONG LIU: Yeah? AUDIENCE: --show if something is a symmetry of your action for your system, would you directly vary the thing-- like, the parameter-- and then show that your action doesn't change? Or would you do the infinitesimal form and then show that-- HONG LIU: So we want to-- AUDIENCE: Yeah, I don't know. I was just-- HONG LIU: Right. So we want to use the fact that such a transformation is a symmetry to derive the constraint on the Lagrangian, OK, the constraint on the theory. Say, we assume that somehow the theory has a symmetry. And then we want to see what constraint this puts on your theory. Yeah, that's right. We are proving the Noether theorem. Yeah. Good? Other questions? OK. So that equation is the constraint imposed by the symmetry. And we want to see what this equation tells us. OK? And for this purpose, we just do the variation. So delta-hat L-- because L is just a function of phi-- is partial L, partial phi a, times delta-hat phi a; plus partial L, partial-partial mu phi a, times partial mu delta-hat phi a. OK? So we just did the variation. And now we are going to use the equation of motion. OK?
So from the equation of motion, partial L, partial phi a is just equal to partial mu of partial L, partial-partial mu phi a. OK? So we just plug this in here. And when you plug this in, you notice this becomes a total derivative: partial mu of partial L, partial-partial mu phi a, times delta-hat phi a. OK? So after using the equation of motion, this variation of L has the following form, which is a total derivative. And this should be equal to the right-hand side-- equal to epsilon partial mu k mu. OK? So we combine both sides together, plug in that delta-hat phi a is equal to epsilon times f a. And then we conclude that partial mu of the combination-- partial L, partial-partial mu phi a, times f a, minus k mu-- is equal to 0. OK? So if we call this J mu, then we have a conservation equation-- partial mu J mu equal to 0. OK? So we have a conservation law. So remember, a conservation law is just to have a vector which satisfies this equation. So more explicitly, J mu is just this combination. OK? So any questions on this? Yes? AUDIENCE: Is this a stronger statement than the usual one, because it's the derivative with respect to spacetime here, but normally we have a conservation law where the derivative with respect to time is equal to 0, but not space? HONG LIU: Yeah. This is a stronger version than what you normally have. So this is the spacetime version of the conservation law from your normal classical mechanics. So this is the field theory version of that. I will elaborate on that equation a little bit. Yeah. Yes? AUDIENCE: So if you only transform the field, not the x and t for the-- HONG LIU: Yeah. AUDIENCE: --the effect is to eliminate that derivative of your operator for everything? Is that right? HONG LIU: Say it again?
AUDIENCE: I mean, the reason you include the derivative of the field is because you are including the effect of transforming the spacetime coordinates. HONG LIU: So here-- yeah. So here is a general formulation. So this transformation can be general. It doesn't have to be, say, a spacetime transformation, as we said earlier. This is just some abstract symmetry, just some arbitrary transformation. AUDIENCE: Yeah. I mean, if you only go through the symmetries that change the field but not the symmetries that change spacetime? HONG LIU: Yeah, because the spacetime variable is a dummy variable. In the action, you can always get rid of that. The x change-- x is just a dummy variable in your action because you integrate over x. Yeah. AUDIENCE: So is it possible to include higher order derivatives in the transformation? HONG LIU: Yeah, yeah. In principle, you can have as many derivatives as you want. But normally, if we have an action which only contains first derivatives, then your symmetry transformation will only involve first derivatives. AUDIENCE: So we're just not discussing that case of transforming-- I don't know. We're-- HONG LIU: Yeah. In principle, you can have it. I'm just writing this for simplicity. Maybe you can ask after the class if it's not clear. Other questions? OK. Good. So let me elaborate a little bit on this equation. So this equation you may have seen in E&M, where you often call it the continuity equation. If you write this equation out separately, it has the following form-- partial 0 J0 plus partial i Ji, the divergence of the spatial components. So if we write J mu out explicitly, then you have this form. So this is the so-called continuity equation-- the time variation of the density. OK?
The 0 component you can consider as some kind of density-- its time derivative is balanced by the divergence of the current. OK? And in particular, you can define a charge, which is the spatial integration of J0 over the total volume. OK? So if you integrate both sides over the total volume-- then this just becomes partial 0 Q. And this term becomes a total derivative. And then you can convert it into a boundary term using Gauss's law. And normally, the current vanishes at infinity. And then you just have charge conservation. OK. So normally in classical mechanics, you just have this equation. OK? And this is the field theory version of your conservation law. Yes? AUDIENCE: So is there a way to calculate what J is and how we got it? HONG LIU: Sorry? AUDIENCE: Is there a way to calculate what J represents? HONG LIU: J represents? AUDIENCE: Yeah, what it represents? HONG LIU: Yeah, yeah. If I give you a specific-- yeah, you will do it in your Pset. So if I give you a specific Lagrangian and a specific transformation, then you will be able to find all those quantities-- you will be able to find the k explicitly. And you will be able to find all those quantities explicitly. Then you can find the J. Other questions? Yes? AUDIENCE: Why does delta S equal to 0 imply that delta L is a total derivative? HONG LIU: Right. It's because if we have delta S equal to 0-- then you can just put the delta in. And this quantity that is integrated over-- if this is 0, then this has to be a total derivative. Other questions? Yes? AUDIENCE: So what happens if it's not a global symmetry? Where does this argument break? HONG LIU: Right, yeah. Good. This is a very good question. I will not have time to go into it here. Let me just very quickly mention it. So this is called the first Noether theorem-- when epsilon is spacetime independent.
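The conservation equation partial mu J mu = 0 can be verified symbolically for a concrete case: the current generated by time translations of the free scalar, i.e. the energy density and energy flux. The 1+1-dimensional form and sign conventions below are assumptions, not the lecture's notation; as in the proof above, the divergence vanishes only once the equation of motion is imposed:

```python
import sympy as sp

t, x, m = sp.symbols('t x m', real=True)
phi = sp.Function('phi')(t, x)

# Time-translation Noether current of the free scalar in 1+1D:
# J0 is the energy density, J1 the energy flux (a common convention).
J0 = (sp.diff(phi, t)**2 + sp.diff(phi, x)**2 + m**2 * phi**2) / 2
J1 = -sp.diff(phi, t) * sp.diff(phi, x)

div = sp.diff(J0, t) + sp.diff(J1, x)

# Impose the equation of motion phi_tt = phi_xx - m^2 phi ("on shell"):
on_shell = sp.simplify(div.subs(sp.diff(phi, t, 2),
                                sp.diff(phi, x, 2) - m**2 * phi))
print(on_shell)  # 0
```

Off shell, the divergence reduces to phi_t times (phi_tt - phi_xx + m^2 phi), which is the equation of motion times phi_t-- the same structure as the general proof.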
And when epsilon is spacetime dependent, then the story is a little bit more complicated. So the epsilon will be inside this total derivative. You cannot take it out, et cetera. And then the story changes a little bit. And then there's something called the second Noether theorem. And actually, in that case, when you have local symmetries-- when epsilon depends on spacetime-- instead of finding conservation laws, you find that some parts of the equation of motion are redundant. Yeah. And it's not difficult. In principle, I can put it in a Pset problem, if people really want to see it. Yes? AUDIENCE: So this is something where we think about the transformation in the vicinity of the identity. What happens to the information about the transformation away from the identity? HONG LIU: Good, good. So this is the beauty of physics, which you may already have seen in quantum mechanics. So whenever we see a symmetry, no matter how complicated that symmetry is-- say, some continuous symmetry-- it's enough just to understand that symmetry near the identity, near the infinitesimal transformation. It's because any finite transformation you can build up just by adding up the infinitesimal transformations. So once you know the infinitesimal transformation, then essentially, up to some global or topological structure, it determines the full finite transformation. Yeah? AUDIENCE: So the finite transformation built up that way is obviously still a symmetry? HONG LIU: Yes, it's always a symmetry, right? Because each step is a symmetry. And so if you add them up, it's still a symmetry. AUDIENCE: OK. But a sequence of different symmetries, for example? HONG LIU: Oh, if you do different ones-- yeah, it's still a symmetry. Because by definition, if something is invariant under one transformation and then under another transformation, it's still invariant. Good? Other questions? Yes?
AUDIENCE: For discrete symmetry, where we don't have this epsilon, can we do something similar? HONG LIU: Sorry? AUDIENCE: For discrete symmetries-- HONG LIU: For discrete symmetry? AUDIENCE: --can we do this analysis? Or can we do something similar? HONG LIU: Yeah. In general, for discrete symmetries, you don't have such a conservation law. But you can still sometimes define some discrete quantum number which is conserved, like parity in the parity case. But you don't have a current. OK? So here, J mu is called the conserved current. But for a discrete symmetry, you won't have a current. Good? Good. So this concludes our very quick discussion of the important features of classical field theory. And with this preparation, now we can start thinking about going to quantum mechanics, talking about quantum fields. OK? And so here is a good place to pause a little bit from what we already said about classical field theory to anticipate a little bit about quantum field theory. OK? So here, I will restate our goal for quantum field theory, which I stated a little bit in the last lecture. OK? So now we have seen classical fields. And our goal is to understand the quantum version of this story. OK? So classical field theory is classical mechanics with an infinite number of degrees of freedom. And now we want to quantize this infinite number of degrees of freedom. And this becomes quantum field theory. OK? And now we want to understand the quantum dynamics of such phi fields. OK. An example is E&M. We have electric and magnetic fields. OK? And you know the Maxwell equations. You can solve the Maxwell equations. And this defines a classical field theory. And then we will tell you how to actually quantize such a system, to understand how the electric and magnetic fields are quantum mechanical. OK. Good. And that will be our goal. OK?
So we will start with the simplest field theory-- just the scalar field theory. And then we will go to the Dirac theory, which describes the electron. And then, eventually, we will go to the theory which describes the electromagnetic field. So this is called QED, OK-- quantum electrodynamics. So that will be the endpoint of this course. OK. So now let's say a little bit about how we go from this classical field theory to quantum field theory, OK-- just some general remarks. So before doing that, let's think a little bit about how we go from classical mechanics, with a small number of degrees of freedom, to quantum mechanics. OK? So let's recall. So in classical mechanics-- let's just consider the simplest case, where you just have a single degree of freedom-- say x(t), OK-- just a one-dimensional particle whose motion is described by x(t). And then, of course, the goal of classical mechanics is just to solve the equation of motion for x(t). OK? So if I write down the equation for x(t) and I solve it, then I'm done. OK? I solved the classical mechanics. So when we go to quantum mechanics, what do you do? So this x(t) then becomes an operator. OK? So the classical dynamical variable now becomes a quantum operator. And the equation of motion for x(t)-- do you remember what it becomes? Yeah, exactly. It becomes the Heisenberg equation for this operator x(t). OK? So in quantum mechanics, there are normally two ways to describe it. So first is the Schrodinger picture. In this case, you have a wave function, which is a function of x and t. And then you have an operator, x. OK? And then, of course, you also have the conjugate momentum, et cetera, which is related to x dot. OK? And the wave function is a function of the eigenvalues-- so x here should be understood as an eigenvalue of this operator x-hat. OK?
So in the Schrodinger picture, you solve the evolution of psi. You solve the equation for psi. And then once you find psi, you can calculate any quantities you want. OK? You can calculate any expectation values, any amplitudes, et cetera. But the second way of approaching it is the Heisenberg picture. In this case, the dynamical quantity is your operator. Now the operator depends on time. So in the Schrodinger picture, your x is just some constant operator that does not depend on time. But in the Heisenberg picture, your operator now becomes time-dependent. OK? And for the equation of motion-- you solve the Heisenberg equation. OK? And then the state is invariant. OK? The state does not evolve with time. OK. So you solve the Heisenberg equation for those operators. And once you solve the Heisenberg equation, then you can evaluate, again, your expectation value in any state you are interested in, et cetera. OK. So any questions on this? So this is a very quick review of how we go from classical mechanics to quantum mechanics. OK. So now in field theory, OK, we do a similar thing. So similarly, in field theory, we have classical fields. OK? And then when you go to quantum, these become quantum operators. OK? Remember, in field theory, this x is always just a label of the space point. OK? It's not a dynamical variable. The notation is a little bit-- yeah. Here, in classical mechanics, x is a dynamical variable. But in field theory, x is just a label. OK? It's not dynamical. The dynamical variable is phi itself. So phi becomes an operator. OK? And then the classical equation of motion for phi, which we derived, becomes the Heisenberg equation, OK, for phi-hat. So again, here, you can do two pictures. You can do the Schrodinger picture, or you can do the Heisenberg picture. So for QFT, the Schrodinger picture-- OK.
And you look at the wave function of your dynamical variables-- of the eigenvalues of the dynamical variables. So what is the generalization of this psi of x and t? So remember, here, psi is a function of the eigenvalues of your dynamical variables. OK? So now, if we push this analogy further, when you go to quantum field theory in the Schrodinger picture, the wave function should be a function of phi(x) and t. OK? So these phi(x) are eigenvalues of phi-hat(x). OK? So in the Schrodinger picture, phi-hat does not evolve with time. And so you have the wave function-- which now becomes a functional-- because phi itself is a function of space. OK? And then you solve the Schrodinger equation for this. And then you have the operator phi-hat(x). OK? So that's what you do in the Schrodinger picture. And in the Heisenberg picture, you forget about the wave function. OK? You look at the evolution of operators. So in the Heisenberg picture for the field theory, you look at the evolution of phi-hat(x, t). So now this just obeys the Heisenberg equation, which is just the quantum version of the classical equation for phi. So you look at the evolution of this. And then the state does not change with time. OK? Good. Any questions on this? Yes? AUDIENCE: When it comes to-- so we have the quantities-- HONG LIU: Yeah. AUDIENCE: --what happens to time in the Schrodinger picture? HONG LIU: So in the Schrodinger picture-- remember, the x is a label for phi. But time is the evolution. And in the Schrodinger picture, the operators don't evolve. And so there's no time here. We don't have time here. We just have the analog of the x here in the Schrodinger picture for quantum mechanics. And then the time dependence is in your wave function.
So the wave function is a function of the possible values, the eigenvalues, of this operator phi. OK? And in the Heisenberg picture-- again, you just focus on the operator equations. And once you solve the operator equations, then you can calculate the expectation values in any state you want. OK. So now you can already see a bit of a difference. So if you do the Schrodinger picture, you have to deal with this beast, OK, which is the wave functional of all possible values of some function in space. OK? And if you have multiple fields, then this is a hugely complicated thing. And you need to write down the Schrodinger equation for it, et cetera. OK? But in the Heisenberg picture, we just solve the analog of the classical equation of motion, which we have already written down. And you just now interpret it as a quantum operator equation. So which one do you think is simpler? So that's why in quantum field theory we almost always use the Heisenberg picture. OK? And we rarely even think about the wave function, even though sometimes it can be useful in some problems. But most of the time, the Heisenberg picture is much easier. OK? So that's what we are going to do. In quantum field theory, we will just use the Heisenberg picture almost all the time. From now on, I will not talk about the Schrodinger picture. OK. Good. So it's very important that you remind yourself about quantum mechanics in the Heisenberg picture, because most of your quantum mechanics classes before were maybe all in the Schrodinger picture, solving the Schrodinger equation, et cetera. But now you have to change perspective to think of everything in terms of the Heisenberg picture. And that will make quantum field theory much easier. OK? Good. Any questions on this? Yes?
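The agreement between the two pictures-- evolve the state with the operator fixed, or evolve the operator with the state fixed-- can be illustrated numerically with a two-level system standing in for the full theory. All matrices, the Hamiltonian, and the initial state below are illustrative choices:

```python
import numpy as np

# Two-level system: H = sigma_z, so exp(-i H t) is diagonal.
sx = np.array([[0, 1], [1, 0]], dtype=complex)   # observable
psi0 = np.array([1, 1], dtype=complex) / np.sqrt(2)

t = 0.7
U = np.diag([np.exp(-1j * t), np.exp(1j * t)])   # exp(-i H t)

# Schrodinger picture: evolve the state, keep the operator fixed.
psi_t = U @ psi0
exp_s = (psi_t.conj() @ sx @ psi_t).real

# Heisenberg picture: evolve the operator, keep the state fixed.
sx_t = U.conj().T @ sx @ U
exp_h = (psi0.conj() @ sx_t @ psi0).real

print(np.isclose(exp_s, exp_h))  # True
```

Both expectation values are the same number, since inserting U U-dagger = 1 turns one expression into the other; the two pictures only differ in where the time dependence is bookkept.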
AUDIENCE: So in the Heisenberg picture, if we're not concerned about the wave function at all, then how do we determine, for example, the probability or the expectation value of where x is at some point, or the momentum? HONG LIU: Good, good. That's a very good question. So this is actually also related to the kind of questions we want to solve in quantum field theory. So in quantum field theory, we often work with the vacuum state. So for example, when you consider QED-- most of the time, the quantum electric and magnetic fields are in the vacuum state. And so we just consider the psi in the vacuum state. And so we don't have to consider-- yeah. Right. So normally in quantum field theory, there are preferred states we are interested in. And that's a lot of the reason that the Heisenberg picture is convenient. You don't have to consider the general state, in general. Yes? AUDIENCE: So what is the physical meaning of the wave function here? HONG LIU: Hmm? Sorry? AUDIENCE: What is the physical meaning of psi here? HONG LIU: Oh, this is just some state you are interested in. You still have a Hilbert space. But now, when you study the evolution, you only evolve the operator. You don't evolve states. Yeah, so that's the difference between the Schrodinger and the Heisenberg picture. AUDIENCE: But for example, in quantum mechanics, we get the expectation values of some operators with respect to the state. And we get measurements-- the results and stuff. So what about here? HONG LIU: Yeah, it's the same thing. Same thing. At this level, there's no difference between quantum field theory and quantum mechanics. Just think about quantum mechanics in terms of the Heisenberg picture. Translate everything you learned about the Schrodinger evolution, et cetera, in terms of the Heisenberg picture. Yes? AUDIENCE: And now the open state is conventional. HONG LIU: Hmm?
AUDIENCE: The open state is-- HONG LIU: Yeah, that's right. That's right. Yes? AUDIENCE: Am I right that the state we're interested in is generally the vacuum state? And what sorts of measurements-- HONG LIU: Yeah. So again-- yeah, it's a very good question. What I should have said is that the states that we are interested in are states which are close to the vacuum state. So you excite the vacuum a little bit. Of course, in the vacuum, you don't have anything. So we consider states close to the vacuum state. And later, when we discuss things, you will see. Yeah. Good. OK. So before going into quantum field theory, let's also make some remarks-- have a short discussion on relativistic quantum mechanics. OK? So naively, if you have special relativity plus quantum mechanics-- so most of the quantum mechanics you learned is non-relativistic quantum mechanics. OK? But now if we want to combine special relativity with quantum mechanics, then what should you get? What do you think you should get? AUDIENCE: [INAUDIBLE] HONG LIU: Hmm? AUDIENCE: [INAUDIBLE] HONG LIU: I expect-- yeah, just whatever comes to your mind. AUDIENCE: [INAUDIBLE] HONG LIU: Hmm? AUDIENCE: [INAUDIBLE] AUDIENCE: [INAUDIBLE] [LAUGHTER] HONG LIU: That's the right answer. That's the correct answer. But I was hoping some of you would say it's relativistic quantum mechanics. [LAUGHTER] OK? So naively, when you combine these two-- in particular, if you read some old quantum mechanics books, they do discuss relativistic quantum mechanics. OK? So naively, that's what you get when you combine these two. And you say, oh, we just get relativistic quantum mechanics. OK? But that's actually not a correct statement.
Actually, strictly speaking, the reason you don't learn much about relativistic quantum mechanics now is because, strictly speaking, relativistic quantum mechanics does not exist. OK? If you want to combine quantum mechanics with special relativity, you actually get quantum field theory. So quantum field theory is actually forced on us if we want to unify special relativity and quantum mechanics, OK-- even if you don't want to talk about fields. OK? Even if you don't want to talk about fields-- if you just want to talk about particles-- still, if you want to unify these two, it automatically leads to quantum field theory. So now let me just explain why this is the case, OK, so that you have some better appreciation of quantum field theory. Good? So let's try to apply what you did for non-relativistic quantum mechanics and try to generalize it to derive some relativistic quantum mechanics. OK? Suppose you are the people in 1926. OK? Quantum mechanics was just discovered. And you say, oh, people have understood non-relativistic quantum mechanics. Now let's generalize it to special relativity. OK. So in non-relativistic quantum mechanics, how do we derive the Schrodinger equation? The way we do it is we start with the dispersion relation, OK, for a non-relativistic particle-- E equals p squared over 2m. And then we take E to i h-bar partial t-- don't let me forget about h-bar-- and p becomes minus i h-bar times the spatial derivative. And then this equation just becomes the Schrodinger equation for free particles. OK. So this becomes the Schrodinger equation for free non-relativistic particles. And then you can add potentials, et cetera. OK. So that's how you derive the Schrodinger equation in non-relativistic quantum mechanics. But if you are someone in the early days of quantum mechanics-- say, now let's try to generalize it to special relativity.
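The substitution just described-- E to i h-bar partial t and p to minus i h-bar partial x, starting from E = p^2/2m-- can be checked symbolically on a plane wave. A sketch in one spatial dimension:

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
p, m, hbar = sp.symbols('p m hbar', positive=True)

# Non-relativistic dispersion E = p^2 / (2m); plane wave built from it.
E = p**2 / (2 * m)
psi = sp.exp(sp.I * (p * x - E * t) / hbar)

# E -> i hbar d/dt on the left; p^2/(2m) -> -(hbar^2 / 2m) d^2/dx^2
# on the right: the free Schrodinger equation.
lhs = sp.I * hbar * sp.diff(psi, t)
rhs = -hbar**2 / (2 * m) * sp.diff(psi, x, 2)
check = sp.simplify(lhs - rhs)
print(check)  # 0
```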
And then it's easy to do in the relativistic case. We just start with the relativistic dispersion relation, E squared equal to p squared plus m squared. OK? You say, let's do the same thing. OK? Let's do the same thing. So this E becomes i partial t. And this p becomes spatial derivatives. And then we can just write this equation. OK? So now let's put all these derivatives together-- put all this on the same side. And then what you find is partial mu partial mu psi minus m squared psi equal to 0. OK? That's what you get. So does this equation look familiar? OK? So this is the simplest scalar field theory equation of motion we have written down. But here, the interpretation is very different. OK? So earlier, we wrote down this equation. We said this is the Klein-Gordon equation. So this is the Klein-Gordon equation. OK? So this is the equation of motion for the simplest free scalar field theory we wrote down before. But here, the interpretation is very different. So remember, the psi here is not a field; in quantum mechanics, it's a wave function for a single particle-- the psi is the amplitude for a particle at spatial location x at time t. OK? And its square gives the probability. OK? And this does not describe a field. OK? This is the wave function of a single particle, OK, even though it satisfies the same equation as the field theory we wrote down earlier. And so that's what Klein and Gordon did. OK? They tried to generalize non-relativistic quantum mechanics to relativistic quantum mechanics. And they wrote down this equation. They said, ah, now we are immortal. OK? [LAUGHTER] Because we wrote down the first equation for relativistic quantum mechanics. And then they soon realized that this equation-- if you want to interpret it as the right equation for the wave function-- actually has lots of problems. There are various problems. Yeah, by saying "lots of"-- maybe it's a little bit of an exaggeration there. There are various problems. So yeah.
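The two substitutions described above can be written side by side (a sketch, setting h-bar to 1 in the relativistic case and using the mostly-plus signature the lecture adopts later, where p squared equals minus m squared):

```latex
% Non-relativistic: E = p^2/2m, with E -> i\hbar\partial_t, \vec p -> -i\hbar\nabla
i\hbar\,\partial_t \psi = -\frac{\hbar^2}{2m}\,\nabla^2 \psi
\qquad \text{(free Schr\"odinger equation)}

% Relativistic: E^2 = p^2 + m^2, with the same substitutions (\hbar = 1)
-\partial_t^2 \psi = \left(-\nabla^2 + m^2\right)\psi
\;\;\Longrightarrow\;\;
\left(\partial_\mu \partial^\mu - m^2\right)\psi = 0
\qquad \text{(Klein--Gordon equation)}
```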
So-- yeah, let me call this equation-- I think I've used up my stars. I think to have a four-star is a little bit too much. Let's just call this equation "1" for this section. So some immediate difficulties of interpreting 1 as a wave equation, OK, for a relativistic particle-- yeah, let me just save time. OK? So this is the wave equation for a non-relativistic particle. If you want to generalize to relativistic quantum mechanics, then you would interpret this as a wave equation for a single relativistic particle. OK. So first-- if you go back to your early days of non-relativistic quantum mechanics-- you remember, from the Schrodinger equation, you can derive an equation for a conserved probability. OK? And that equation tells you that psi-squared should have the interpretation of the probability. OK? But for equation 1, you can show there's no quantity that can be used as a probability density. OK? A probability density by definition should satisfy two conditions. The first condition is that it's non-negative. And the second condition is that it should be conserved. OK? The probability should be conserved. Otherwise, you violate the-- yeah. So the Schrodinger equation allows such a quantity, but this equation does not. OK? And I will not go through that myself. I think that will be in your Pset 2. OK? That will be in your Pset 2. You will show it yourself. And the second difficulty is that if you take the square root to find the energy, then the energy in principle can be plus or minus. You have two solutions. OK? E equal to plus-minus this quantity. In contrast, for that non-relativistic equation, there's only positive energy. OK? In the non-relativistic case, you just have positive energy. So classically, you can just say, let's throw away the second branch. OK?
We can just throw away the second branch classically. OK? Classically, just ignore the second branch, the negative branch. OK? But quantum mechanically, this is not possible, because once you have the equation there, you automatically have the negative-energy solutions. OK? So quantum mechanically-- you have a dispersion relation like this. So that dispersion relation has the following form: E as a function of p. You have a positive branch, and then you have a negative branch. And you remember, in quantum mechanics, you have energy levels. And the particles here have a higher energy than the particles here. OK? So here, the energy level is higher than the energy level here. And there's always a non-zero probability quantum mechanically for a particle to go from some higher energy level to a lower energy level, OK, just like in the hydrogen atom. If you excite it, it always goes to the lower energy level. So quantum mechanically, such a thing cannot be avoided. OK? You cannot just throw this branch away. OK. And so this will lead to an instability, because the energy can be infinitely negative. OK? It can be infinitely negative. And then all your particles will go to infinitely negative energies, and your system will be in big trouble. OK. So these are the two most prominent problems. And people tried many different ways to avoid them, et cetera, including some very ingenious solutions, et cetera. Though, we will not go into them. But let me just mention that even if you can address those problems, still, relativistic quantum mechanics will not make sense, for a very fundamental reason. So these are more like superficial reasons why that equation does not quite work. OK? But there's actually a more fundamental reason why relativistic quantum mechanics even as a concept does not make sense. OK?
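As a reference for the first difficulty (the details are left to Pset 2, so this is only a sketch, and the normalization and signs depend on conventions): the Klein-Gordon equation does admit a conserved current, but its time component is not sign-definite, so it fails the non-negativity requirement for a probability density:

```latex
j^\mu = i\left(\psi^*\,\partial^\mu \psi \;-\; \psi\,\partial^\mu \psi^*\right),
\qquad
\partial_\mu j^\mu = 0 \;\;\text{on solutions of}\;\;
\left(\partial_\mu\partial^\mu - m^2\right)\psi = 0 .
```

The candidate density proportional to i(psi* partial_t psi minus psi partial_t psi*) is conserved but becomes negative on negative-frequency modes psi proportional to e^{+i omega t}, unlike the Schrodinger density |psi| squared, which is manifestly non-negative.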
So by definition, if you want to interpret this kind of thing as a wave function-- OK? So what's the interpretation of the wave function, which we already said? This describes the amplitude for a single particle at some point-- at some point x at time t. But now, if you have two particles, what do you do? You introduce the location for particle 1, and particle 2, and the t. OK? So this describes two particles. And if you want to describe three particles, then you have to introduce more x's. OK? Remember, that's what you did in non-relativistic quantum mechanics. So in non-relativistic quantum mechanics, this makes sense, OK, just because there's no mixing between the different branches-- say, single particle, two particles. Because a particle cannot be created or destroyed in a non-relativistic system. But in a relativistic system, you can always create particles. OK? If you have enough energy, you can create new particles. Pair-create electrons. OK? It happens in the accelerator all the time. Or electrons can annihilate into photons. OK? So the particle number is not conserved, OK, it is not conserved. So this kind of wave function concept doesn't even make sense. So that means, whenever you have special relativity and quantum mechanics together, you must have a framework which can describe an arbitrary number of particles at the same time. OK? Because particle numbers can change all the time, OK, because of annihilation and creation effects. And that cannot be achieved by this kind of concept. It cannot be achieved by a wave function. It turns out miraculously that it can be achieved by field theory. It turns out that field theory, once you quantize it, automatically gives you a framework to describe an arbitrary number of particles, OK, in a unified manner-- single particle, two particles, an arbitrary number of particles. And so this is one of the magics of field theory.
So that's why field theory plays such an important role in particle physics: because the excitations of fields automatically provide the mechanism to describe an arbitrary number of particles. OK. So we will stop here today.
MIT 8.323 Relativistic Quantum Field Theory I (Spring 2023), Lecture 4: Canonical Quantization of a Free Scalar Field Theory

[SQUEAKING] [RUSTLING] [CLICKING] PROFESSOR: OK, good. So last time, we quantized this theory. So let me just write down the theory again. So we quantized this theory. And let me also just write down its canonical momentum density, and then the Hamiltonian density would be 1/2 pi squared plus 1/2 (gradient of phi) squared plus 1/2 m squared phi squared. And the classical equation of motion-- and we will not repeat the quantization procedure. And we wrote down the most general solution, which at the operator level can be written as the following. This factor is the convention. And then we have a k u k plus a k dagger u k star. So from now on, I will suppress the hat. You should always now view it as a quantum field-- quantum operators. And the u k is the complete set of solutions, which is given by the exponential of minus i omega t plus i k x. So this is just a basis of solutions. And so this is the complete solution to the operator equation for this operator phi. And then you can also find its conjugate momentum density. So the pi-- you can just take the derivative straightforwardly. I will not write it explicitly. And then we discussed that you can impose the canonical commutation relations: phi t x with phi t x prime and pi t x with pi t x prime are equal to 0, and phi t x with pi t x prime is given by the delta function. So that's what we did at the end of last lecture. And then you can just plug those expressions in here. Then you can find the commutation relations between those integration constants. So a k and a k dagger are integration constants of your operator equations. They are constant operators. And so when you plug them in, then you find the commutation relations between a k and a k dagger.
And then you find a k with a k prime equal to 0, a dagger k with a dagger k prime equal to 0, and a k with a k prime dagger equal to a delta function in the wave-vector space. So now, if you look at those expressions-- we essentially find an infinite number of harmonic oscillators. Each harmonic oscillator is labeled by a continuous number k. And the commutation relation is just like a continuum generalization of the standard a a dagger equal to 1. And they are independent because of this delta function: if k is not equal to k prime, then you get 0. They commute. So you can see the story a little bit sharper: indeed, we just get an infinite number of harmonic oscillators. So these are just commutation relations. This still does not tell you we have harmonic oscillators. To see the harmonic oscillators, we actually need to calculate the Hamiltonian. So now let's try to calculate the Hamiltonian. And this you can do straightforwardly-- just plug in the expression we wrote above. You just plug in the expressions for the phi and the pi into that equation, and then it's straightforward to calculate. I will not go through the details. Yeah, it's a straightforward calculation. So then you find, maybe after some minutes, the following answer. When you plug them in, you find that you can actually do the spatial integral, because you just have plane waves. And when you do the spatial integral, that gives you a delta function in momentum space, et cetera. And then you can reduce everything to a single k integral. And then what you find is that you get omega k times 1/2 of a k dagger a k plus a k a k dagger.
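Before moving on: the single-oscillator algebra that these continuum relations generalize can be checked directly with truncated matrices (a sketch; the cutoff `Ncut` is an artifact of the finite matrix, so the commutator only equals the identity away from the highest state):

```python
import numpy as np

Ncut = 12  # truncation of the oscillator Hilbert space (hypothetical choice)

# Lowering operator a in the number basis: a|n> = sqrt(n)|n-1>
a = np.diag(np.sqrt(np.arange(1, Ncut)), k=1)
adag = a.conj().T

# [a, a_dagger] should be the identity, up to the truncation edge
comm = a @ adag - adag @ a
print(np.allclose(comm[:-1, :-1], np.eye(Ncut - 1)))  # True away from the edge

# H = omega (a_dagger a + 1/2) has eigenvalues omega (n + 1/2)
omega = 2.0
H = omega * (adag @ a + 0.5 * np.eye(Ncut))
print(np.allclose(np.sort(np.linalg.eigvalsh(H)),
                  omega * (np.arange(Ncut) + 0.5)))  # True
```

The field-theory Hamiltonian below is a continuum of independent copies of exactly this algebra, one for each k.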
So this is the standard expression for a harmonic oscillator. Actually, it's exactly identical to a sum-- the integral is like a sum-- over a continuum of harmonic oscillators, each with frequency omega k, and each harmonic oscillator is labeled by this number k. And so we can use the standard trick to write it with a commutator. We combine these two terms together and introduce a commutator: you just have a dagger a, plus half the commutator between them. In the standard story, this commutator is equal to 1, and then you get the standard answer, omega times a dagger a plus 1/2. But here, let me just write one more step. So this is the integral over k of omega k a k dagger a k, plus E0. And E0 is the sum of all the zero-point energies of all the harmonic oscillators-- 1/2 omega k-- except that the tricky thing is the following. Now, we have this commutator of a k with a k dagger. In the standard case, this is equal to 1. But in our case, when you set k prime equal to k, you get delta of 0. So we get delta 0-- you get 2 pi cubed delta 3 of 0. And remember, this 0 is in momentum space. We have two kinds of delta functions-- a delta function in k space and a delta function in coordinate space. So here, I put the subscript k to mean that this is the delta function at 0 in k space. And so this is the zero-point energy. So this zero-point energy is divergent, but we will comment on that a little bit later. Do you have any questions on this? Good. So let me just make a couple of remarks. First-- something interesting happened. When you look at the expressions for phi and pi, both phi and pi depend on time explicitly. So there's a time dependence here. And here, we only integrate over the spatial directions. So in terms of integration, we don't do anything with time.
But what do you notice here? What do you notice there? STUDENT: So you drop the time dependence? PROFESSOR: It's not that we drop the time dependence-- the time dependence disappeared. Do you know why? STUDENT: I mean, the Hamiltonian doesn't explicitly depend on time. PROFESSOR: No-- phi is time dependent, pi is time dependent. If you plug those expressions in here, certainly what's inside there depends on time explicitly. This guy depends on time explicitly. But in the end, after you do the integration over the spatial directions, you find the time dependence actually cancels during the calculation. So do you have a guess why this should cancel? Yes? STUDENT: Energy conservation or something? PROFESSOR: Yeah, exactly. So you have in your pset that this system is time-translation symmetric. It means that H is a conserved quantity. And indeed, you see that this is independent of time. This is a conserved quantity. We see it explicitly. So there's no time dependence, since H is a conserved quantity. So this integration is absolutely crucial, because the Hamiltonian density is not conserved-- only the total Hamiltonian is conserved. So this is one comment. And for the second comment-- since I only have two comments, let me just call this one rather than zero. Yeah, just to repeat what we already said: this QFT of phi reduces essentially to a system of a continuum of harmonic oscillators labeled by k, with frequency omega k. So omega k is defined to be the square root of k squared plus m squared.
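This dispersion omega k = sqrt(k squared + m squared) is the continuum limit of the normal modes of a chain of coupled oscillators, the picture discussed next. A small numerical check of that picture (a sketch; `N`, `a`, and `m` are hypothetical parameters, and the periodic chain obeys phi_n'' = (phi_{n+1} - 2 phi_n + phi_{n-1})/a^2 - m^2 phi_n):

```python
import numpy as np

N, a, m = 64, 0.5, 1.0  # lattice size, spacing, mass (hypothetical)

# Dynamical matrix D with phi'' = -D phi; its eigenvalues are omega^2
D = np.zeros((N, N))
for n in range(N):
    D[n, n] = 2.0 / a**2 + m**2
    D[n, (n + 1) % N] = -1.0 / a**2   # periodic nearest-neighbor coupling
    D[n, (n - 1) % N] = -1.0 / a**2

omega_numeric = np.sort(np.sqrt(np.linalg.eigvalsh(D)))

# Analytic lattice dispersion: omega_k^2 = m^2 + (4/a^2) sin^2(k a/2),
# with allowed momenta k = 2 pi j / (N a)
k = 2 * np.pi * np.arange(N) / (N * a)
omega_analytic = np.sort(np.sqrt(m**2 + (4 / a**2) * np.sin(k * a / 2) ** 2))

print(np.allclose(omega_numeric, omega_analytic))  # True
# For k*a << 1, sin(ka/2) ~ ka/2, so omega_k -> sqrt(k^2 + m^2): the continuum limit
```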
So as we mentioned last time, the fact that we see an infinite number of harmonic oscillators is actually not surprising from the perspective that this field theory can be written as the continuum limit of a chain-- say, of atoms connected by springs. So when you have a chain-- we wrote down a one-dimensional system; you can easily generalize to three dimensions. You have those chains of atoms connected by springs, and they are just coupled oscillators. And then you can just diagonalize them and find the normal-mode spectrum, and each normal mode is just a harmonic oscillator. And so when we find those solutions by doing the Fourier transform, essentially we are just diagonalizing those interactions between those springs. And so that's why we get infinitely many harmonic oscillators. So from that point of view, it's totally unsurprising that we get an infinite number of harmonic oscillators. But it is surprising if you think about it from the point of view of a field theory. When you quantize a field theory, you find in the end you get a bunch of harmonic oscillators. And before you actually carry out the quantization, it's actually hard to anticipate that if you don't have this intuition from the discrete system. Do you have any questions on this? Yes? STUDENT: So when you say that there's an infinite amount of harmonic oscillators, are you saying that there's an infinite amount at any given point in space, or that at each given point in a continuum of locations there is one harmonic oscillator? PROFESSOR: Yeah, that's a very good question. So at each point, you have a harmonic oscillator-- essentially, at each point you have a harmonic oscillator, and then they are connected in space. Yes? STUDENT: Is there any reason why you didn't compute the integral in E0? You have a delta function in there? PROFESSOR: Sorry, say it again?
STUDENT: So the integral of E0-- why didn't you just-- PROFESSOR: Oh, this-- we cannot do the integral. STUDENT: No, up there-- the zero-point energy. PROFESSOR: Oh, right. Yeah, we will talk about that a little bit later, because we are going to elaborate on this-- we are going to try to give an interpretation of it. Yes? STUDENT: In homework 1, we found that a hat was time dependent. Is a hat not time dependent here, and is that why the Hamiltonian is not time dependent? PROFESSOR: Yeah, a hat is a constant operator, because a hat is an integration constant of your operator equation. Yes? STUDENT: Is there an analog of coherent states for each harmonic oscillator in the QFT? PROFESSOR: Yeah, there's an analog of coherent states. Yeah, indeed. STUDENT: Does it tell you anything useful if you are in the Heisenberg interpretation? PROFESSOR: Yeah-- this question will become clearer when we talk a little bit about the Hilbert space. Other questions? OK, good. Now, before we talk about the Hilbert space, we need to talk about two things. The Hamiltonian is an important quantity; it is one of the conserved quantities. But there are other conserved quantities, and one of them is the conserved quantity corresponding to spatial translations. So spatial translation gives you conserved momentum-- momentum conservation. From spatial translation, you get this conserved charge P i. Again, you should have done this in your homework; it has the following form: an integral d3 x of pi times the spatial derivative of phi. So this is the Noether charge for the conserved quantity associated with spatial translations. And we interpret it as spacetime momentum. So here, we have two momenta-- don't confuse them.
So this pi is the canonical momentum conjugate to this field variable phi. And this P i is the genuine physical spacetime momentum-- the momentum of your full physical system. And again, we can just plug in the explicit expressions for pi and phi. We already know their time evolution, and then we can plug them in. And then you find the answer, again after a few minutes of calculation. In this case, there is no zero-point energy; you just have this expression. And furthermore, you also have Lorentz symmetry, and that will lead to the conserved charges associated with Lorentz transformations. Again, you can find them explicitly. So in your pset 2, you will find the expression of this M mu nu in terms of phi and pi. And then you can again express this guy in terms of a and a dagger. But I didn't have the guts to assign that as part of the pset, so I used it as a bonus problem, because it involves a slightly more tedious calculation. So I used that as a bonus problem for those people who like to have some more fun. Yeah, anyway, this you will see in your pset. Good. And then again, for P i, you see explicitly that it is time independent. Good. And then the next thing is-- let's talk about this zero-point energy. So this zero-point energy, we can write as the following. This is a constant, so you can take it outside of this k integral. This does not depend on k-- so this is just 2 pi cubed delta 3 of 0. Remember, this is a k-space delta function. And then you have the d3 k integral left over. So that's the expression we get.
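Collecting the mode-space results quoted in the last few paragraphs (a sketch; the overall sign in the Noether charge for P^i depends on metric conventions):

```latex
H = \int \frac{d^3k}{(2\pi)^3}\,\omega_k\,\tfrac{1}{2}
    \left(a_k^\dagger a_k + a_k a_k^\dagger\right)
  = \int \frac{d^3k}{(2\pi)^3}\,\omega_k\, a_k^\dagger a_k \;+\; E_0,
\qquad
E_0 = (2\pi)^3 \delta^3_k(0) \int \frac{d^3k}{(2\pi)^3}\,\tfrac{1}{2}\,\omega_k,

P^i = \int \frac{d^3k}{(2\pi)^3}\; k^i\, a_k^\dagger a_k,
\qquad \omega_k = \sqrt{k^2 + m^2}.
```

P^i has no zero-point piece because the would-be term, k^i times half the commutator, is odd under k going to minus k and integrates to zero.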
So now, let's try to understand the meaning of this term. To understand it, we need to do a little bit of a mathematical trick. So remember the definition of the delta function: 2 pi cubed delta 3 of k is equal to the integral over x of the exponential of i k x. So this is essentially the definition of the delta function. And now, here, we want to set k equal to 0. So let's set k equal to 0 here. And then we find this 2 pi cubed delta of 0-- remember, again, this is in k space. We just set k equal to 0 here. And then what is this? If you set k equal to 0, what do you get? Yes? STUDENT: The volume-- PROFESSOR: Yeah, that's right. STUDENT: --of the full space. PROFESSOR: You just get the volume of the full space. In order to make sense of this quantity, you can put the whole universe in a big box. Imagine the whole universe is in a big box, and then this is the total volume. And now, since this is the volume, we have a very good interpretation of this quantity. We can write E0 as simply the volume times epsilon 0. And then this quantity epsilon 0 has the interpretation of an energy density. So now this is an energy density. And we can write this energy density in one more step-- plug in the expression for omega k. So this is just 1/2 the integral of d3 k over 2 pi cubed of the square root of k squared plus m squared. So that's what you get. So how do you like this integral? STUDENT: So just a question, to go back here. Wouldn't it be quicker if you just left the delta function inside of your integral and just said that it's one for all k? So why are those two pictures equivalent? Because in that picture, you don't get the volume. PROFESSOR: Sorry, say it again? What two pictures?
STUDENT: If you didn't pull out your delta function from your integral-- you just say that the delta function is one for all k. PROFESSOR: No, because this is just a constant. When you have an integral of a constant, you can always pull it out, because delta 0 does not depend on k anymore. Good. So how do you like this integral? Can you do it? STUDENT: No, it diverges. PROFESSOR: Good. So it's fruitless to do this integral, because it is divergent. And there's a very good reason for this divergence physically. This is the energy density-- the energy per unit volume. So let's imagine you have a unit volume here. Remember, before this, this corresponds to taking a discrete system with some lattice spacing a-- at each lattice point, you have a harmonic oscillator-- and then you take the lattice spacing a to 0. So that means, for any unit volume, when you take a to 0, you have an infinite number of oscillators inside. You have a lattice of oscillators at each point. Anyway, I will not try to draw it. And when you take a to 0, in any volume, the number of oscillators goes to infinity, and that's where this divergence comes from. It just comes from the fact that in field theory, you have a continuum of degrees of freedom. At each point, you have a degree of freedom, and within any volume, you definitely have an infinite number of degrees of freedom. So this is just from the continuum of degrees of freedom. So this is the first time you see a divergent quantity in quantum field theory, but you will soon find that this is normal. It will be a fact of life. You will see divergences very commonly, because you have a continuum of degrees of freedom. So the whole thing about quantum field theory is to find a way to deal with those divergences.
And one of the key differences between quantum field theory and quantum mechanics with a finite number of degrees of freedom is because of those divergences. A big part of quantum field theory is to understand how to treat those divergences. They actually don't affect your physics, but you do have to develop sometimes sophisticated tricks to treat them. OK, good. And this infinite answer is also closely connected to a very famous problem you may have heard of. It's called the cosmological constant problem. Because this tells you that any quantum field theory has an infinite, say, zero-point energy. And so you may say, OK-- whenever we see something infinite, there's one thing we always do to treat it. Can you guess what is the thing you always do when you see infinities? Yes? STUDENT: Just like subtract infinity. PROFESSOR: Yeah, that's one idea. But to subtract infinity is very hard. In your calculus class, when you subtract infinity from infinity, you can get infinity, so you have to be very careful. Yes? STUDENT: Just ignore the term. PROFESSOR: That's also a very good idea. [LAUGHTER] Indeed, that's what we often do. Yes? STUDENT: Divide by it. PROFESSOR: That's also-- indeed. But to do all those things, you have to do one thing first. Yes? STUDENT: You might approximate it, it's like 1 over the Planck constant. PROFESSOR: Yeah, it's also very close. You need to do-- STUDENT: [INAUDIBLE] PROFESSOR: Hmm? STUDENT: [INAUDIBLE] PROFESSOR: Yeah, exactly. You need to find a way to make it finite first, and then you can subtract it. Yeah, just like when you sum 1 plus 1/2 plus 1/3, et cetera-- you get an infinite series. If you want to estimate the outcome, you get a divergence, but you can always cut off the series and then approximate a definite answer. And here it is similar. So here we always put some momentum cutoff. So imagine the momentum is smaller than some value lambda.
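The cutoff just introduced makes the zero-point density finite, and one can see numerically how fast it grows with lambda (a sketch; the values of `m` and `Lam` are hypothetical, and the angular integral has been done, leaving eps0 = (1/(4 pi^2)) times the integral from 0 to Lambda of k^2 sqrt(k^2 + m^2) dk):

```python
import numpy as np

def eps0(Lambda, m, n=200_000):
    """Regulated zero-point energy density:
    eps0 = (1/2) * integral_{|k| < Lambda} d^3k/(2 pi)^3 of sqrt(k^2 + m^2)."""
    k = np.linspace(0.0, Lambda, n)
    f = k**2 * np.sqrt(k**2 + m**2)
    # manual trapezoid rule, to stay version-agnostic
    return float(np.sum((f[1:] + f[:-1]) * 0.5 * np.diff(k))) / (4 * np.pi**2)

m = 1.0
for Lam in (10.0, 100.0):
    # For Lambda >> m, the density grows like Lambda^4 / (16 pi^2)
    print(Lam, eps0(Lam, m), Lam**4 / (16 * np.pi**2))
```

Doubling the cutoff multiplies the density by roughly sixteen: the answer is completely dominated by the shortest distances kept, which is why it is sensitive to physics at the scale where the field theory stops applying.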
And lambda corresponds to maybe the scale at which this quantum field theory no longer applies, because nobody told you that this quantum field theory should apply at all length scales. For example, in this lattice model, this scale would be 1 over the lattice spacing. Anyway, once you cut it off, you still get a pretty big number. And you cannot ignore it, because this is a physical zero-point energy. In principle, you can measure it. But in real life, we don't see it. We have quantum fields flying around all the time, but we don't see this big vacuum energy. So this is called the cosmological constant problem. Actually, there was just a colloquium last week related to this cosmological constant problem. Good. Any questions? Now, we can talk about the Hilbert space. So as with a harmonic oscillator, we can first define the vacuum state. Now, in quantum field theory itself, we can just ignore this E0. From now on, we will just ignore this E0, because it just gives an overall constant, which does not do anything. It's like the zero point of the potential energy. Anyway, from now on, we ignore this term. So often, when I write the Hamiltonian, I will just write this term. But later, we will actually see examples where this E0 can have physical implications; just for our current purpose, we ignore it. So now, the lowest energy state-- the ground state, which we often call the vacuum state, meaning there's nothing there-- is defined by a k acting on 0 equal to 0 for any k. So the ground state, which I denote by 0, is annihilated by this a k. And then the general states have the following form: say, n k 1-- the k 1 oscillator excited n k 1 times.
n k 2-- the k 2 oscillator excited n k 2 times, et cetera-- which is proportional to a k 1 dagger to the power n k 1, a k 2 dagger to the power n k 2, et cetera, acting on 0-- just like having a large number of harmonic oscillators. Questions on this? Yes? STUDENT: How do you know there exists some state that's annihilated by every single annihilation operator? PROFESSOR: You postulate it, but you can actually write down its wave function. Just as in the standard harmonic oscillator case, you can start from the equation that the ground state is annihilated by a and write down its wave function. And here, you can write down the wave function for the vacuum state in terms of phi. Yes? STUDENT: In the notation for the ket there, it looks like we have a countable number of frequencies. Why is that? PROFESSOR: You mean here? STUDENT: Yeah. PROFESSOR: Yeah, here, I'm just saying it depends on which k are excited. Yeah, that's a very good question. I will comment on related issues very soon. But here, I just write down some state-- excite k1, k2 as I want. I'm not saying that this is the most general-- yeah, you can have as many as you want. STUDENT: Yeah, but if you use that notation, it's still-- PROFESSOR: That's right. It's true. When I write it this way, it implies I only excite a countable number of them. But in principle-- yeah, we will soon touch a point which is related to your question. Other questions? OK, good. So for example, the simplest excited state would be to just excite one of them. Let me denote this by k. And then the next simplest: you just excite two of them. This k1 and k2 can be the same; if they are the same, then it's just like the square. And here, I will also not be very careful about the normalization. And now, we ask: what are the physical interpretations of those states?
So for this purpose, we can just look at their quantum numbers under, say, the Hamiltonian and under the spacetime momentum. So for example-- if you act H on 0, then of course you get 0, assuming we throw E0 away. Say we ignore E0 from now on. And for the ground state, P i acting on 0 is of course 0. You can see it just because this has an a k in it. And now, let's look at the excited states. So H acting on k-- this answer is obvious, because all the different oscillators are independent of each other, and this just gives you omega k. We can just use our result for the harmonic oscillator. And now, you can look at the momentum-- the spacetime momentum operator acting on here. And it has a k i. If you look at this expression, when you act this on that, you can just use the standard trick with the commutator. And then the particular k for this state is picked out, and the eigenvalue will be just that particular k i. So this is one step of calculation, and you should do it yourself-- a 1-minute calculation. A delta function will be generated, and that will get rid of the integral, and then you pick up this k i. So now, this equation has an obvious physical interpretation. It means that this state k has spacetime momentum-- so now I write down a four-momentum (omega k, k). So this is exactly the momentum of a relativistic particle on shell. This is the momentum of a relativistic particle of mass m, because omega k is equal to the square root of k squared plus m squared. So this means that p squared-- the four-vector squared-- is equal to minus m squared.
So it's very natural to interpret this as a particle of mass m. Good? So now, let's look at this one. So again, the calculation is very simple. So for k1 k2, you find that this is an energy eigenstate of H with energy eigenvalue omega k1 plus omega k2. And it's a momentum eigenstate of P i with an eigenvalue given by k1 plus k2. So E, k are eigenvalues of H and P. STUDENT: Question. PROFESSOR: Yeah? STUDENT: Yeah, so could that mass be 0? PROFESSOR: So here-- so this mass is not 0, because omega k is defined to be this-- it's defined by my theory-- defined by my Lagrangian. So this is the parameter of your-- yeah, of your action. Yes? STUDENT: So with the energy spectrum, does the spacing-- does it happen the same way where the spacing gets smaller as you go to higher and higher energies? PROFESSOR: Here, for each k, it's uniform. But of course, when you add them together, you get something very complicated. But for each k, it's just uniform. So the most natural way to interpret this is as two particles with four-momenta omega k1, k1 and omega k2, k2. And similarly, you can do this for any state like this. They are all eigenvectors of H and P. So n k 1, n k 2, et cetera. So this corresponds to n k 1 particles of momentum k1-- they're all on-shell particles-- and n k 2 particles of momentum k2. So this tells you one thing. So now let me just make some remarks. Any questions on this before I make my remarks? Good. So the first point is that now you can see this can describe any number of particles. So mathematically, in our description, these are harmonic oscillators.
But each excitation of the harmonic oscillator is from the spacetime point of view corresponding to a particle. So this is the beautiful thing of this theory. And then due to-- because of the commutation relation, a k 1 dagger, a k 2 dagger equal to 0 for any k1 k2. So you have full symmetry. So when you construct the state, you can just commute them as you want. So that means they're-- so full symmetry in permuting all these different particles in the general state. So this tells us these are bosons. And two is that all particles have positive energy. So even though that E squared equal to k squared plus m squared-- this equation have two solutions-- plus minus omega k. But when you look at physical state-- when you look at your state and look at the eigenvalue of state-- so all particles have physical energies, so you don't have this negative energy problem associated with taking the square root. Also you have total energy of a state is equal to sum of energies of all the particles. So this tells you there's no interactions between them. Because if you have potential energies between particles, and then that will change the energy. When you put the two particles together, will no longer be the same over the sum of the individual energy for each particle. And so that tells you there's no interactions. So this is a theory of free particles. Now, this is a good starting point. At least now, we have particles. Questions on this? Yes? STUDENT: So [INAUDIBLE] a k 1 let's say, and you have a particle with momentum k1 now, is there a way to change this particle's momentum, or if you apply again, you-- like in this picture, it's like you have another particle momentum k1. PROFESSOR: No, there's no way to change the momentum of a particle. So once you created the particle, it just goes straight. Yeah, it just does not change anymore. Momentum for that particle is conserved. 
STUDENT: Is there-- if you wanted to create a theory where you can change the momentum, is there a way, or is it just-- PROFESSOR: Yeah, there is a way. After dealing with this free theory, we will consider the next simplest theory, and that will introduce interactions. STUDENT: I see. So-- PROFESSOR: And when you have interactions, then the particle momentum can change. Yes? STUDENT: So the things we're calling particles are localized in momentum space, but not at all in position space. PROFESSOR: Good. Yeah, these are momentum eigenstates-- it's like the plane wave in the non-relativistic case. STUDENT: And if you were to try to localize it in position space, like a Gaussian wave packet or something like that, would you still be able to commute things and stuff? PROFESSOR: Yeah, the commutators don't change, because everything is built from the a k's, and they always commute, so that won't change. But indeed, we will talk about wave packets localized in space. Other questions? OK, good. Finally, we will talk a little bit about a technical point. It's that so far, we haven't talked about the normalization of such a state. So now, let's look at the normalization of the state. Let's just look at the single particle state and its normalization. So let's look at k with k prime. If you take the overlap of this with k prime, you can reduce this to the commutator of a k and a k prime dagger. So you will get just this. Again, this is a five-second calculation. You find this if you do the overlap. But as you already saw in your pset, this thing is actually not Lorentz invariant. It is not a good object under Lorentz transformations. But when we construct states, we would like our states to have good properties under Lorentz transformation. So we will choose a slightly different normalization.
So instead of this state, we will define the following states. We will define k, now without the vector, to be square root of 2 omega k times this vector k. And so this is square root of 2 omega k times a k dagger acting on 0. And now, if you compute the overlap of this k and k prime-- so now, you have a square root of omega k for one of them, and a square root of omega k prime for the other, and then you have 2 omega k. And then you have 2 pi cubed delta 3 of k minus k prime. So now you recognize this object from your pset. The omega k multiplied by this delta function actually has good Lorentz transformation properties. So this guy actually transforms nicely under Lorentz transformations. So this will be the normalization we will use from now on. Indeed, you can show-- so that will be, I think, in your pset 3; you should look forward to it-- that if you act a Lorentz transformation-- so lambda is a Lorentz transformation, and U lambda is the operator that generates that Lorentz transformation-- on such a state k, then you just get lambda k. And lambda k is the Lorentz transformation acting on that k. It's the Lorentz transform of k. But yeah, you should see it in maybe pset 3, if I remember it. Good. Any questions on this? Yes? STUDENT: So I'm not sure in this case here why we assume that omega k is the same for both k and k prime? PROFESSOR: Oh, because you have a delta function here. Yeah, because omega k only depends on k-- it only depends on the spatial part. Other questions? Yes? STUDENT: Yeah, so the inner product of the k k prime is not Lorentz invariant. Is that just if you're listing one of the states? PROFESSOR: No, just this guy-- it's not about whether it's Lorentz invariant. It's not Lorentz covariant. When you transform it, just this object-- when you transform it, it's very awkward. It does not have very-- you can transform it. You can write down a transformation for this object.
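To summarize the convention being introduced (assuming the original plane-wave states satisfy ⟨k⃗|k⃗′⟩ = (2π)³ δ³(k⃗ − k⃗′)):

```latex
|k\rangle \;\equiv\; \sqrt{2\omega_k}\;|\vec{k}\rangle
         \;=\; \sqrt{2\omega_k}\;a_{\vec{k}}^{\dagger}\,|0\rangle,
\qquad
\langle k \,|\, k'\rangle \;=\; 2\omega_k\,(2\pi)^3\,\delta^{(3)}(\vec{k}-\vec{k}'),
```

and the combination $\omega_k\,\delta^{(3)}(\vec{k}-\vec{k}')$ is Lorentz invariant, which is why $U(\Lambda)\,|k\rangle = |\Lambda k\rangle$ takes such a simple form.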
It just does not transform nicely. Then that means if this does not transform nicely, it means each of them don't transform nicely, and then just not convenient. So this one have the property that when I act u on this thing, and then actually, you can just directly corresponding to the state with Lorentz transform with the momentum. Yeah, and then it's very easy. And this property does not apply to this one. It does not apply to this k. OK, good. So you can also talk about a wave packet. So this is the plane wave. So this is like a plane wave. This is a momentum eigenstate. So we can also consider general-- say, single particle state will have the following form. So we have the following form. We can write it as psi just as a superposition of this k. Again, you only need to integrate over spatial momentum, because only the spatial momentum are independent. And then some arbitrary function of k, and then on this k. So any single particle state, you should be able to write it this way. So in particular, by choosing appropriate f k, you can construct a local wave packet. So you can-- so choose k-- you can choose f k to construct a localized wave packet in space. So again, we will have some exercises like this in your pset 3. And also earlier, you could have objected when I called this to be a two-particle state. Because each is a plane wave. So essentially, they are not localized. They cover all space. And in one sense, they corresponding to two particles. So now, you can solve this problem, say, by constructing using d k1 2 pi cube d k2. So by choosing appropriate function k1 k2, this k1 k2, you can construct two widely separated wave packets. Again, it's in space, so you can really talk about-- so these are the genuine two particle states. And so that confirms actually this k should be a interpreted as the plane wave version of the two particle states. Good. Any questions? Yes? 
STUDENT: So now that we don't have a position operator like we did before, what's the conjugate, I guess, operator to our momentum operator, or is there-- PROFESSOR: Good. That's a very good question. Indeed, there will be a problem in your pset 3 which asks you to show there's no position eigenstate-- there is no perfectly localized state. Yeah, so in non-relativistic quantum mechanics, you have this state, you have this wave function, so you can localize at a point, but there's no analog of such a state in the relativistic case, precisely because there's no position operator anymore. So you will explore this a little bit more later yourself. Just with this formalism, you can explore all these questions yourself. That's the key. Actually, you already have the power to do that. So now, let's say a little bit more on the structure of the Hilbert space. Here, the structure of the Hilbert space is a little bit special because there's no interaction. The full Hilbert space, which I write as script H, can be separated into the vacuum state, then the one particle Hilbert space, then the two particle Hilbert space, and so on, and they don't have anything to do with each other because there's no interaction. So in principle, you can have an infinite number of them. If you restrict to a finite number of particles-- so this is one particle, this is two particle, et cetera-- the set of states with a finite number of particles is called the Fock space. In the Fock space, we only have states with a finite number of particles. One can have an arbitrary number of them-- as large as you want, 10 billion, 100 billion-- but it will be finite. Finally, those k-- either this k or this k-- are just plane wave normalized, because they're normalized by delta functions. So strictly speaking, they are not normalizable states.
And so strictly speaking, they are not in the physical Hilbert space. So strictly, speaking k or k or any of those states of k-- or k1 k2 are not normalizable. They are only plane wave normalizable. So just like-- just as psi is equal to exponential ik x in non-relativistic quantum mechanics is not normalizable. So they're convenient for various mathematical operations, but they don't correspond to genuine physical states. So not genuine physical states. So physical state, we have to consider for example this kind of state. And then you choose f k so that is normalizable. So if we take this, and then-- so with psi, given by that, then the normalization of psi, you can calculate it, and then this is given by-- so you can easily guess the answer. So when you do the overlap exactly, you find that this given by the modulus of f k squared. So if you do the calculation to calculate the norm of that state, you find that given by that. And then we can choose this to be finite, and then this will be then will be a normalized state. So for this reason, it's sometimes-- we don't do it very often-- can choose just a basis of f x, say, of f i. So i runs some number. And then this alpha-- then you can define alpha i and then this will provide-- the alpha i will provide a basis of the single particle normalizable states-- a single particle state. So here is a fun fact related to the earlier question was asked. So naively, here, you have-- if you look at the k-- so k form a basis. Let's just look at the single particle state. So naively, this k forms a continuum-- uncountable continuum of basis of states. But once we impose this normalizable condition, means you need to choose the normalizable f. And then you can show the basis of this normalizable f actually is countable. So in the end, the Hilbert space-- the one-dimensional the one particle Hilbert space is actually generated by countable basis just like in your ordinary quantum mechanics. 
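With the plane-wave normalization ⟨k⃗|k⃗′⟩ = (2π)³ δ³(k⃗ − k⃗′) and the measure used so far, the norm of the wave-packet state described here works out to:

```latex
|\psi\rangle = \int \frac{d^3k}{(2\pi)^3}\, f(\vec{k})\,|\vec{k}\rangle,
\qquad
\langle \psi \,|\, \psi \rangle
  = \int \frac{d^3k}{(2\pi)^3}\, \bigl|f(\vec{k})\bigr|^2 ,
```

which is finite whenever f is square integrable, so such wave packets are genuine normalizable physical states.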
It's actually rather than uncountable basis like in k once you restrict normalizability. And so this is a fun mathematical fact. Good. Questions on this? So the last few minutes, we talk about conserved charges. Quantum-- the role of conserved charges. So classically, we say if an action is invariant under some symmetry-- some transformation-- some continuous transformation, say, of the infinitesimal form, so the alpha are the infinitesimal parameters. So the alpha label different transformations, and a label different fields. Alpha label different transformations. And then you will get the conserved current J mu labeled by this alpha, so alpha label differently for each. So alpha label different symmetries for each symmetry, and then you have a conserved current, which is satisfy the J mu alpha equal to 0. The alpha equal to 1, 2, et cetera-- label different symmetries. And then you can write down your Noether current. So we derive a general formula for the Noether current is given by the form, say, partial L-- partial partial mu phi a. And then f a alpha. So this is essentially the same as before. Essentially, I just add the index alpha now to label. You can have more than one symmetries. And then minus k mu alpha. So k mu alpha is the derivative-- total derivative -- suppose -- delta L is given by epsilon alpha partial mu k mu alpha. So the k mu alpha is the total derivative corresponding to that respect -- corresponding to that transformation. And then the q alpha, that will be the conserved charge for that current, so zeroth component. So this is the classical story. Now, I'm using a little bit more general notation for each alpha. So now, the key thing-- so now the key thing is that at the quantum level-- so at the quantum level-- first, as we already seen in the case of the Hamiltonian and the momentum operator, so q alpha is time independent-- is a constant operator. So this is the fact that this is conserved. 
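The Noether construction stated in words here is, in formulas: for an infinitesimal symmetry δφ_a = ε_α f_a^α under which the Lagrangian shifts by a total derivative, δL = ε_α ∂_μ k^μ_α, the conserved current and charge are

```latex
J^{\mu}_{\alpha}
  = \frac{\partial \mathcal{L}}{\partial(\partial_\mu \phi_a)}\, f_a^{\alpha}
    \;-\; k^{\mu}_{\alpha},
\qquad
\partial_\mu J^{\mu}_{\alpha} = 0,
\qquad
Q_{\alpha} = \int d^3x \; J^{0}_{\alpha},
```

with a sum over the field index a, and one current for each symmetry label α = 1, 2, et cetera.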
And another important property of Q is that when you act Q alpha on your field-- if you look at the commutator between this Q alpha and your field-- it actually generates the infinitesimal transformation. So it turns out-- yeah, if we put a parameter here, epsilon alpha-- yeah, it doesn't matter for Q, say, whether it's upstairs or downstairs-- then you just generate the transformation for phi. This can be shown explicitly, just using this expression. We're running out of time now, so let me just quickly outline the way to do it. Let's look at the zeroth component of this J. The zeroth component of J is just equal to partial L over partial partial-0 phi a, times f a alpha. So let's consider the-- yeah, I think today we will not have time to finish anyway. Let's just leave this to next time. Let me just say that at the quantum level, this will generate the infinitesimal transformation. And then you can also exponentiate this Q. You can introduce, say, exponential of i lambda Q. So if Q is the conserved charge, you can exponentiate Q like this. And this operator, when it acts on phi, then generates finite transformations. So this takes you to phi a prime-- the finite transformations. Anyway, in your pset 2, you actually check a couple of simple examples yourself, and you will see this works. And in the next lecture, we will give a simple derivation to show this actually works in general. |
Stanford_EE104_Introduction_to_Machine_Learning_Full_Course | Stanford_EE104_Introduction_to_Machine_Learning_2020_Lecture_12_classifiers.txt | Welcome to the lecture on classifiers. So this is the point in the class at which we are going to move to our next major topic. So far we've talked about, in the main, regression problems, where the target variable y is a real number or real vector. And now we want to switch to a different problem, and this problem is called classification. In classification, the target variable v is categorical. It can only take a finite number of possible values, and we will call that set script V. Now these problems of classification are treated in a very similar way to regression, in that we will have a performance metric and a loss function, and do empirical risk minimization with regularizers. But then there are also specific attributes of classification problems which distinguish them from regression problems. Uh, in particular, we'll see that the type of loss functions we use is different, that there are particular types of error which can occur in a classification problem that cannot occur, you know, in a regression problem, and that there are ways of specifying the nature of the predictor for classification problems which are different from how we would do those in regression. We will also have a chance to talk about probabilistic regression, yeah, in a later section of the class. Now the set script V, its elements denoted V1 to V capital K, is called the label set. It's the set of possible values of v. The Vi are called the classes or the labels or the categories, and when K is 2, this is called Boolean classification, in which case we usually think about script V as being true or false, positive or negative. It's called multiclass classification when capital K is greater than 2, when we have more than two classes.
Uh, then our script V might be maybe yes, maybe no, it might be, uh, places or countries, it might be languages, it might be the set of English words in some dictionary. It might be the set of possible orderings of m horses in a race, so that's the m factorial permutations of the numbers 1 to m. Very often we just number our categories, and so instead of thinking of them as V1 through V capital K, we think of them as just the numbers 1 through K. So when we're predicting a categorical raw output v, given a raw input u, that's called classification, whether we're talking about Boolean classification or multiclass classification, and the predictor is a map from script U, our set of possible independent variables, to script V, the set of target variables. We would denote that by a capital G. Remember that in the regression section, we talked about little g as being the predictor, and it was parametrized by Theta. But also remember that little g was the map from x to y, where x was an embedding of U and y was an embedding of V, and so here we have named the map from U to V capital G. And so v hat, which is capital G of u, is our prediction of v given u, and then G is called a classifier. One way to think about classification is to think about which values of u give a result of G of u equal to 1, which values of u give a result of G of u equal to 2, and so on. And if you think about that, then what we've done is we've classified the inputs u into K different categories, or K different classes, and those classes are mutually exclusive and collectively exhaustive. So here's an example. Here we have our set capital U. So capital U will be, uh, the square, the set of points u_1, u_2, where I think the absolute value of the components in this example is between minus 4 and 4, and our set V is equal to minus 1 or 1.
We have two categories, one of which is minus 1, and the other one, which is 1, and in this plot you can see all of our data points, so here there's no split into training and test set. This is just the training set, and there are, I think 100 points here. And so we've got 100 data points, and for each data point, we have a record consisting of a u, which will be a point in the square, and a v, which will either be a minus 1 or a 1, and so we have 4i between 1 and n, where n is 100, U i is script u the square, and little v i is either minus 1 or 1, and we've denoted the points here. We've shown them in the plot, so that minus 1 here, yeah, uh, points with-points with for which V i is minus 1 are shown as red points, and points for which VI equals 1, are shown as blue points, and so these are 100 data points. Um, now what a classifier has to do is a classifier has to map every point in script U to either minus 1 or 1. So that if somebody comes along later and says, here's a new point in U, Tell me what value for v you predict for it. Give me v hat, well that's what the classifier does. And we denoted that in this picture by shading the square, some of it is shaded pink, those points are points for which we predict an outcome which is minus 1, and some points are shaded blue where we predict an outcome of 1. And so if somebody comes along with some new point, let's make it some new point right here, and then our prediction for that point would be that v hat is minus 1, if it's here, our prediction is that v hat is 1. And we are trying to learn that predictor, capital G, we tried to construct it. We're going to learn it from this dataset, and this particular predictor, we haven't talked about how to construct it yet. But this predictor, particular predictor has learned from this particular dataset. And you can see it, that's okay. 
Um, in particular, if we look at how well it is doing on the training set, well, there are points such as these, where the true value of v i is minus 1 and the prediction is also minus 1, and there are points like these, where the true value is 1 and the prediction is also 1. But then there are other points, like this red point right there. For that red point, the true value is minus 1, because it's a red point, but the predicted value is 1, because it's in the blue region. And so these are points for which our predictor is giving the wrong answer on the training data. And there are some points which make the other type of error: there are blue points for which we predict the outcome should be red, and red points for which we predict the outcome should be blue. One way to think about this is that it's akin to regression, a kind of function fitting. But instead of a function assigning a real number to every point, every possible value of u, a function is assigning a category to every possible value of u, that category being either minus 1 or 1, red or blue. This is a good picture to have in your mind when you're thinking about what a classification problem is: you've got a bunch of data points, they have colors associated with them, and you want to shade in the plane so that the predictions corresponding to your shaded color match up with the true colors of the data points. And of course it's convenient to think like that. And in two dimensions, of course, the problem is kinda easy. We can simply shade in the plane to correspond to the colors of the points that we see, and there's no need for us to ever get it wrong. But if I'm working in d dimensions and d is large, well then I can't hope to do this by hand, and I have to, uh, come up with an algorithm that's gonna do this for me. Let's talk about some applications of classification. Um, here's an example.
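The lecture doesn't say how this particular predictor was learned, but as a concrete sketch, here is one simple way to build a classifier G: U → {−1, 1} from labeled 2D points: a 1-nearest-neighbor rule, which effectively "shades the plane" by copying the label of the closest training point. The data points below are made up for illustration; they are not the lecture's 100-point dataset.

```python
# A minimal sketch of a Boolean classifier on 2D points, using a
# 1-nearest-neighbor rule (hypothetical data, not the lecture's dataset).

def nearest_neighbor_classifier(train_u, train_v):
    """Return a predictor G(u) that copies the label of the closest training point."""
    def G(u):
        # squared Euclidean distance from u to every training point
        dists = [(u[0] - p[0]) ** 2 + (u[1] - p[1]) ** 2 for p in train_u]
        i = min(range(len(train_u)), key=lambda j: dists[j])
        return train_v[i]
    return G

train_u = [(-2.0, -2.0), (-1.0, -3.0), (2.0, 2.0), (3.0, 1.0)]  # points in the square
train_v = [-1, -1, 1, 1]                                        # their labels
G = nearest_neighbor_classifier(train_u, train_v)

print(G((-1.5, -2.5)))  # near the "red" cluster -> -1
print(G((2.5, 1.5)))    # near the "blue" cluster -> 1
```

Every point of the square gets mapped to −1 or 1 by G, so the classes it induces are mutually exclusive and collectively exhaustive, exactly as described above.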
Medical diagnosis, u is a bunch of patient attributes. So the fields in, uh, patient record, uh, test results, and age, gender, height, body mass, a whole bunch of other attributes of the person. Um, and then the Boolean V, the target variable, encodes some disease status, has a disease or not or maybe it's multi-class. Um, does the person have COVID-19, flu or just a cold? Um, uh, if we're doing advertising, then u contains the attributes of a person, the demographic data about an individual. And it also contains information about an ad that's being shown to them. And then V encodes whether they will buy the item, whether they're going to click on the ad, etc. Uh, fraud detection, so here u might contain the attributes of a transaction. Was it in-person, was it over the phone, was it online? What kind of credit card it was? Was it international? And then V is an attempt to, um, V, the categories that V can be are either that it's a fraudulent transaction or that's it's a valid transaction. And so one might have a whole bunch of data, um, where historic- where we have historical data of transactions and they are labeled as to either being fraudulent transactions or valid transactions. And we would learn from that data a predictor that when somebody makes a new transaction, it can be- the attributes of that transaction can be fed into the predictor and then the predictor would predict whether or not that transaction is fraudulent or valid. Image classification. Uh, here U is an image that would be an array of pixels and V would be categories of possible objects within the image. So we might be- be looking to, uh, categorize natural objects, lions and tigers and bears, uh, vegetation, trees, vehicles, buses and we might build a ca- classifier that you give it an image and it tells you what's in the image. And, uh, one can do simple things where one has an image which we know contains just one object. 
And the classifier has to return simply which object it is, in which case script V is just a list of possible objects. Or we might do more complicated things where the classifier has to return a list of all objects in the image, and in that case the script V would have to contain not just individual objects, but pairs of objects or lists of possible objects. Here's another one, spam filtering. This was one of the very first successes of the classification methodologies which we are discussing in this class, where, uh, u contains the attributes of a mail message, uh, the from address, the to address, and also a description of the text in the email message. So which words are in the text, in particular. And then V, the target variable-- the categories would be either spam or ham. Ham, of course, meaning that it's, uh, a good email message. And back in the, uh, mid 1990s, uh, when email was becoming very popular, the technique called Bayesian classification, which we may see later in this class-- we will certainly see related techniques-- was one of the first successful methods of distinguishing spam from ham. Another example application of classification is sports forecasting. Here u contains the attributes of a game or a match, team A versus team B, so which teams are playing, and attributes of the teams themselves. And V encodes which of the, uh, teams wins the game, or possibly a tie. Uh, topic detection. So u is an article or a news item, and V encodes the topic, so politics, sports or business, and so on. Uh, another example is sentence parsing, so u is a sentence and V encodes the grammatical parsing of the sentence. So that is a tree structure that encodes the relationships between the nouns, the verbs, the adjectives, the adverbs, and all the other possible parts of speech.
So when we are measuring the performance of a classifier, ah, there's a very natural performance metric that we have, that is probably the most commonly used. And that is the idea of the error rate. So, ah, if we have a data set with u 1 through u n and v 1 through v n. So we have n records, for each one we have a u i and v i pair where u i lies in script U. And vi is the target variable, tells us which class u i belongs to. And then the predictions are given by v hat i, which is g of u i. And a prediction is correct if v hat is v, and it's wrong or it's an error if v hat is not v. The error rate E is the fraction of data points on which the predictor gave an incorrect prediction. So in other words, we look at all i from 1 up to n. We compare vi with v hat i and if they're not the same, well, that's an error and we count the number of errors and divide by n. Um, and this is the simplest possible performance metric for our classifier. And once we've computed it, we can use it to compare different classifiers. And in particular, we would do this not only on the training set but on the test set. Now when we're working on with Boolean classification, where the set of target variables, it has only two elements, we'll call them minus 1 and 1. Then we refer to the class v is minus 1 as the negative class and the class v is 1 as the positive class. And that allows us to use this very nice terminology about the possible outcomes. So we're going to be choosing v hat. That's what our predictor does for us. And we've got a true v, which is part of the training set. And if v hat is 1 and v is 1, well, then that's a correct prediction. And we would call that kind of prediction a true positive. If v hat is minus 1 and v is minus 1, well that's also a correct prediction and we'd call that a true negative. And then there are two different types of errors we can make. It could be that v is 1, but we predict that v hat is minus 1. And that's called a false negative or type two error. 
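The error rate defined here, the fraction of data points where v hat disagrees with v, is straightforward to compute. A minimal sketch, with made-up labels for illustration:

```python
# Error rate: the fraction of data points on which the prediction is wrong.

def error_rate(v, v_hat):
    """Fraction of indices i where v_hat[i] != v[i]."""
    assert len(v) == len(v_hat)
    n_errors = sum(1 for vi, vhi in zip(v, v_hat) if vi != vhi)
    return n_errors / len(v)

v     = [1, 1, -1, -1, 1]    # hypothetical true labels
v_hat = [1, -1, -1, 1, 1]    # hypothetical predictions: 2 of the 5 are wrong
print(error_rate(v, v_hat))  # 2/5 = 0.4
```

As noted above, one would compute this on both the training set and the test set to compare classifiers.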
Or it could be that v is minus 1 and v hat is 1. And that's called a false positive or a type one error. And once we've constructed a predictor, we can construct a statistic which describes the performance of the predictor. And that's this thing called a confusion matrix C. It's a two-by-two matrix, and its entries are the number of true negatives, the number of false negatives, the number of false positives, and the number of true positives. And when we look at this, we should realize that the first column here corresponds to v equals minus 1, the second one corresponds to v equals 1, the first row corresponds to v hat equals minus 1, and the second row corresponds to v hat equals 1. So in the first row, we pick v hat equals minus 1. If we're in the first column also, then we've got a true negative; if we're in the second column, then we've got a false negative. We also refer to these entries individually as the number of true negatives, number of false negatives, number of false positives, and number of true positives: Ctn, Cfn, Cfp, and Ctp. And these are numbers, not rates, so that if we add up those four numbers, they add up to n, which is the total number of data points, the total number of examples. We also have the number of negative examples, which we'll denote by N with little n as a subscript. It's the number of true negatives plus the number of false positives. And the number of positive examples, which is the number of false negatives plus the number of true positives. And these are sums over the columns. So this sum right here, Ctn plus Cfp, that's equal to N n, the number of negative examples. And this sum right here is equal to N p, the number of positive examples. Now, how do we get this matrix? Well, we have the predictor G, and we can evaluate it on the training set, and that will give us a matrix C. And we can evaluate it on the test set, and that gives us another matrix C.
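A small sketch of computing the four confusion-matrix counts from labels in {−1, 1}. The data is made up, and storing the counts in a dict (with my own key names) stands in for the lecture's 2×2 matrix, whose columns index the true v and rows index v hat:

```python
# Confusion-matrix counts Ctn, Cfn, Cfp, Ctp for Boolean labels in {-1, +1}.

def confusion_counts(v, v_hat):
    C = {"tn": 0, "fn": 0, "fp": 0, "tp": 0}
    for vi, vhi in zip(v, v_hat):
        if vi == -1 and vhi == -1:
            C["tn"] += 1    # true negative
        elif vi == 1 and vhi == -1:
            C["fn"] += 1    # false negative (type II error)
        elif vi == -1 and vhi == 1:
            C["fp"] += 1    # false positive (type I error)
        else:
            C["tp"] += 1    # true positive
    return C

v     = [-1, -1, -1, 1, 1, 1]    # made-up true labels
v_hat = [-1,  1, -1, 1, -1, 1]   # made-up predictions
C = confusion_counts(v, v_hat)
print(C)  # {'tn': 2, 'fn': 1, 'fp': 1, 'tp': 2}
```

Note the counts sum to n = 6, and the column sums give N_n = tn + fp = 3 negative examples and N_p = fn + tp = 3 positive examples, as described above.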
And these two confusion matrices are measurements of performance. What we'd like to see is large numbers on the diagonal, because the diagonal entries count the times we made correct predictions, and small numbers off the diagonal, because the off-diagonal entries count incorrect predictions; they count separately the two different types of incorrect predictions, just as the diagonal counts separately the two different types of correct predictions. Now, there is some very standard terminology surrounding the confusion matrix that's worth going over. In particular, the false positive rate is simply C_fp divided by n, the false negative rate is C_fn divided by n, and the error rate is the sum of those two, C_fp plus C_fn, divided by n. There's also a whole bunch of other terminology that people like to use, often in very specific fields. People talk about the true positive rate, or sensitivity, or recall, which is C_tp over N_p; it's the fraction of the positive examples that we guess correctly. The false alarm rate is C_fp over N_n, the fraction of true negatives we incorrectly guess as positive. There's the specificity, or true negative rate, C_tn over N_n, the fraction of true negatives that we correctly guess. And there's the precision, C_tp over C_tp plus C_fp, the fraction of our positive guesses that really are positive. These are used in different fields: quite a lot of them in medicine, some in specific branches of statistics. We will, I think, never use terms like sensitivity or specificity in this class.
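All of those derived quantities come straight out of the four confusion-matrix counts; here is a sketch (the function name and dictionary keys are mine):

```python
def rates(C, n):
    """Derived statistics from a 2x2 confusion matrix C (rows: v_hat, cols: v)."""
    C_tn, C_fn = C[0]
    C_fp, C_tp = C[1]
    N_n = C_tn + C_fp              # number of negative examples
    N_p = C_fn + C_tp              # number of positive examples
    return {
        "false_positive_rate": C_fp / n,
        "false_negative_rate": C_fn / n,
        "error_rate": (C_fp + C_fn) / n,
        "true_positive_rate": C_tp / N_p,   # sensitivity / recall
        "false_alarm_rate": C_fp / N_n,
        "specificity": C_tn / N_n,          # true negative rate
        "precision": C_tp / (C_tp + C_fp),
    }

print(rates([[1, 1], [1, 2]], 5)["error_rate"])   # 0.4
```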
In machine learning, people almost always just look at the confusion matrix counts, which are readily interpretable and readily converted to one of these other measures if you're working in the appropriate field. Now, one thing that looking at the confusion matrix highlights is that we really have two metrics for a Boolean classifier. We'd like to keep both the false positive rate small and the false negative rate small, and the sum of those two numbers is the error rate. Keeping the error rate small is fine, but sometimes it's not what you want out of a classifier. For example, suppose you have a manufacturing line, with a camera looking at the objects coming off the line and trying to determine whether or not they are correctly manufactured. If you've got very few errors in your manufacturing process, then a huge percentage of those manufactured objects will actually be correctly manufactured, and only a very, very small percentage will be incorrect. Classification is often difficult for such manufacturing problems, and very often the predictor that minimizes the error rate is simply to predict that all of the objects coming off your production line are correctly manufactured. However, that's completely useless, of course, because it gives you no information. What you are much more interested in, in such a situation, is to detect those bad objects, the incorrectly produced objects. So one would use a classifier that is willing to incorrectly classify a few good objects as bad, as long as it catches most of the bad objects. There, one is quite happy to accept a few false positives, positive here corresponding to bad objects, in exchange for a very small number of false negatives.
A false negative there would be incorrectly assuming that a bad object is good. This is a very common situation: the thing you're trying to detect is rare, so choosing a classifier that minimizes the error gives an uninformative classifier, and it's much better to choose a classifier that focuses on either false positives or false negatives. Now, we still need to be able to compare classifiers, and it's very often convenient to have a single number as a metric. The way we do that is to combine the two rates with a weight. That's called the Neyman-Pearson metric, E_NP: it's Kappa times the false negative rate plus the false positive rate, where Kappa is some positive number that sets how much we care about false negatives compared to false positives. If Kappa is very large, our Neyman-Pearson metric is affected very much by false negatives and not so much by false positives. If Kappa is very small, the Neyman-Pearson metric is large when we've got a large number of false positives, and not so much when we've got a large number of false negatives. And if Kappa is 1, the Neyman-Pearson metric is just the sum of the false negative rate and the false positive rate, which is the error rate. Let's look at this in terms of a trade-off graph. Pick any classifier, designed using whatever your favorite method is (we haven't told you how to do that yet). You can compute its false negative and false positive rates on the test set; those give you two numbers, and you can plot them on this plot. On the horizontal axis I've got the false negative rate, and on the vertical axis the false positive rate. So any classifier you give me corresponds to some particular point: there's one, there's one, here's another.
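The Neyman-Pearson metric is a one-liner; this sketch (my own) just makes the weighting explicit:

```python
def neyman_pearson(fnr, fpr, kappa):
    """E_NP = kappa * FNR + FPR.
    kappa sets how much we care about false negatives relative to
    false positives; kappa = 1 recovers the ordinary error rate."""
    return kappa * fnr + fpr

# Large kappa punishes false negatives heavily; small kappa, false positives.
print(neyman_pearson(0.01, 0.16, 1.0))    # error rate of a (FNR, FPR) = (0.01, 0.16) classifier
print(neyman_pearson(0.01, 0.16, 10.0))   # same classifier, weighted toward false negatives
```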
And so different classifiers correspond to different points on this plot. One thing worth observing is that there are some classifiers you would just never use. One is G_3, and the reason you would never use G_3 is that if you compare it with G_2, G_2 does better in terms of both the false positive rate and the false negative rate. So G_2 wins on both counts, and even if you're more concerned about one of those measures than the other, you'd still use G_2 over G_3. If, on the other hand, I compare G_1 with G_3, well, G_1 does better on the false negative rate but worse on the false positive rate, so I might well use G_3 instead of G_1; but if I've got G_2, I would use G_2 over G_3. Now, if I compare G_1 and G_2, that's not so easy, because G_1 has a smaller false negative rate but a larger false positive rate than G_2. Those two classifiers are incomparable. Choosing between them is up to you, the designer of the system. If you're more interested in keeping the false negative rate small, you should choose G_1; if you're more interested in keeping the false positive rate small, you should choose G_2. But if you have G_2 available, you should never choose G_3. So when you look at classifiers on this plot, the important ones are those for which there is nothing better in both false positive and false negative rate. If I look at a particular classifier, say G_2, and I draw the region of the plane consisting of values of the false positive and false negative rates that are better than those achieved by G_2, then if there's no classifier in that region, G_2 is a good classifier.
Conversely, if I look at G_3 and at the region of better performance, there are classifiers in that region, so G_3 is a bad classifier. The good classifiers have a name: they're called Pareto optimal. They're good because no other classifier is better in both false positive and false negative rates. On the plot, the Pareto optimal classifiers are these red points. The set of all Pareto optimal points is called the operating characteristic, or the ROC. ROC stands for receiver operating characteristic, a term that goes back to the design of radar systems in World War II; almost nobody calls it the receiver operating characteristic anymore, it's almost always just the ROC. So what we commonly do is develop many different classifiers, plot them on this plot, rule out the ones that are not Pareto optimal, and then make our choice among the ones that are. Now, when we've got the Neyman-Pearson measure, Kappa times the false negative rate plus the false positive rate, what it's doing is measuring performance in a particular direction in this plane. I've got the false negative rate and the false positive rate, and for a particular Kappa I can draw a vector in the direction (Kappa, 1). So I can think of the Neyman-Pearson metric as (Kappa, 1) transpose times the vector (C_fn over n, C_fp over n). Because we're taking the inner product of the vector (Kappa, 1) with the coordinates of a point in this plane, this measures the distance from the origin in the direction (Kappa, 1). So if we look at the set of all points for which (Kappa, 1) transpose times the coordinates equals, say, 1, that's a line here.
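The Pareto-optimality test described here can be sketched directly: keep a (FNR, FPR) point only if no other point dominates it. This is my own illustration with made-up rate pairs, not the lecture's G_1, G_2, G_3:

```python
def pareto_optimal(points):
    """Return the (fnr, fpr) points not dominated by any other point.
    A point p is dominated if some distinct q is <= p in both coordinates
    (and, being distinct, strictly smaller in at least one)."""
    keep = []
    for p in points:
        dominated = any(q != p and q[0] <= p[0] and q[1] <= p[1] for q in points)
        if not dominated:
            keep.append(p)
    return keep

classifiers = [(0.1, 0.3), (0.2, 0.1), (0.25, 0.15)]
print(pareto_optimal(classifiers))   # [(0.1, 0.3), (0.2, 0.1)] -- the third is dominated
```

The surviving points trace out the ROC; the choice among them is then the designer's.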
And if we look at the set of all points for which that linear combination, (Kappa, 1) transpose times the coordinates, equals 2, it's a different line, and so on. So by minimizing Kappa times the false negative rate plus the false positive rate, we are trying to pick the point lying on the one of these lines that is closest to the origin. In this particular case that point is G_2, for the particular value of Kappa shown here. These lines have a slope of minus Kappa, so by varying Kappa between 0 and infinity we sweep through the possible slopes: when Kappa is 0, the vector points in this direction, and when Kappa is very large, it points in this direction. So we can pick Kappa according to our preference. If we make Kappa very large, the slope lines look like this, and the best thing we can do to make the performance measure small is to choose a predictor with a low false negative rate. If instead we use a Kappa that is very small, then (Kappa, 1) looks like a vector in this direction, with corresponding lines in this direction, and the best thing we can do to make the performance metric small is to pick a predictor with a small false positive rate. Here's an example. We have the same dataset we showed before, with blue points and red points: the red points have target value minus 1, and the blue points have target value 1. We can see there are some false negatives. A false negative is a blue point for which the classifier would predict red. So where is the false negative? There's one right in there; underneath there was a blue dot, which I will now make visible. There it is. That blue dot is in the red region, the region in which our classifier predicts red.
That's a false negative. And then there are many false positives: all of these red dots are points we would predict as blue, a positive prediction, but falsely. If you count the red dots in the blue region for classifier number 1, it turns out to be 16, and we've already seen there's one blue dot in the red region. So in the confusion matrix, this entry is our one false negative and this is our 16 false positives; this is our correct predictions of minus 1, red dots in the red region, and there are 24 of those; and these are blue dots in the blue region, and there are 59 of those. On this particular example there are 100 points, so the false negative rate is 0.01 and the false positive rate is 0.16, and we can plot those: there's 0.01, there's 0.16, and that's predictor 1, right there. Now look at predictor 2; this is a different predictor, with different regions. When you look at this dataset, you can see it largely consists of blue points going across the middle here and red points going this way. Predictor 1 was focused on getting the blue points covered by the blue region, and unfortunately some red points get covered as well; that's why it has a large number of false positives, the red points that have been classified as blue. With predictor 2 we've said: we don't want that anymore, we'd like to balance the red and blue errors. So we've got a predictor that classifies more of the red points as actually red; more of the region in which the red points lie gets the corresponding prediction of red.
As a result, we can look here and see that now there are eight false negatives, here's one, and eight false positives, here's one. That gives corresponding false negative and false positive rates of 0.08, which is right there. Plot number 3 is a classifier at the other extreme. Here we've tried to focus on getting the red points correct, and in exchange we're getting some of the blue points wrong. What does that mean in terms of false positives and false negatives? It means we've got very few false positives and a large number of false negatives. Our false positives are red points classified as blue, and there are two of them; here is one. And we've got a large number of blue points classified as red; there's one of them. The resulting counts are 23 false negatives and 2 false positives, which gives us this classifier right there. So there are three classifiers, and which one you pick out of those three is up to you; I think there is no way of choosing between them in general. Number 2 beats number 3 on false negative rate but does worse on false positive rate, and number 1 beats number 2 in the same way. Neither one dominates the others, so this is a design question: it depends on the particular problem whether you're interested in the classifier with the smallest overall error rate. Let's look at what the overall error rates are. This one has an error count of 17, this one 16, and this one 25; those of course are error counts, and the error rates are 0.17, 0.16, and 0.25. So the one with the smallest error rate is number 2, the one with the fewest false negatives is number 1, and the one with the fewest false positives is number 3.
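Using the counts read off the lecture's three example classifiers (n = 100 points), the rate arithmetic can be checked in a few lines; the dictionary layout is my own:

```python
# False negative / false positive counts for the three example predictors.
predictors = {
    1: {"fn": 1,  "fp": 16},   # hugs the blue points: few FN, many FP
    2: {"fn": 8,  "fp": 8},    # balanced errors
    3: {"fn": 23, "fp": 2},    # hugs the red points: few FP, many FN
}
n = 100
for k, c in predictors.items():
    fnr, fpr = c["fn"] / n, c["fp"] / n
    # error rate = FNR + FPR; gives 0.17, 0.16, 0.25 as in the lecture
    print(k, fnr, fpr, fnr + fpr)
```

Predictor 2 has the smallest error rate, yet predictors 1 and 3 are still Pareto optimal: each wins on one of the two rates.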
And when you look at multiclass classification, there are no longer just four possible values for the pair (v-hat, v); there are K squared possible outcome pairs, because there are K classes, v_1 through v_K, and each of v-hat and v can be any one of those classes. When we say v-hat is v_i and v is v_j, it means the true value is v_j and we're predicting v_i. The only correct outcomes are those where v_i equals v_j; it's incorrect when v_i is not v_j. In other words, if v-hat equals v, that's correct, and if v-hat is not v, that's incorrect. So there are K different ways to make a correct prediction: if we predict v_1 and the truth is v_1, that's correct; if we predict v_2 and the truth is v_2, that's also correct. But there's also a bunch of different errors. If we predict v_1 but the truth is v_2, that's an error; and there's a different error where the truth is v_2 and we predicted v_3. So there are K times (K minus 1) types of errors, one for each pair with i not equal to j. So we have a K by K confusion matrix, where the ij-th entry of C is the number of data records on which we predicted v-hat was v_i but the true v was v_j. You should be aware that sometimes people transpose this, swapping i and j; of course, that's just a convention. Of course, the entries of C add up to n, the number of data points, and the column sums give the number of records in each class. C_ii is the number of times we predicted v_i correctly, and if i is not j, C_ij is the number of times we mistook v_j for v_i. Now, if we normalize these quantities C_ij with i not equal to j by the number of records n, we get what's called the error rate for that particular pair of classes.
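The K-by-K construction is a direct generalization of the Boolean case; here is a sketch (my own) using the lecture's convention that C_ij counts records predicted as class i with true class j:

```python
def multiclass_confusion(v, v_hat, K):
    """K x K confusion matrix for classes labeled 1..K:
    C[i-1][j-1] counts records with prediction v_i and true class v_j.
    (Some authors transpose this; it's only a convention.)"""
    C = [[0] * K for _ in range(K)]
    for true, pred in zip(v, v_hat):
        C[pred - 1][true - 1] += 1
    return C

# 4 records, 3 classes: one record with true class 2 is mistaken for class 3.
print(multiclass_confusion([1, 2, 2, 3], [1, 2, 3, 3], 3))
```

As in the Boolean case, the entries sum to n and the column sums give the class counts.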
E_ij is the fraction of the data points on which we mistake v_j for v_i, and the overall error rate is the sum of all of the different error rates. So here's an example. Here we've shown data points whose target values are 1 for red, 2 for green, and 3 for blue, and we've got a classifier that classifies points in the plane as either red, green, or blue. You can see that it's doing okay: the green region mostly contains green points, the blue region mostly contains blue points, and the red region mostly contains red points. There are a few cases where we have green points that would be predicted as red; here's 1, 2, 3, 4, 5. So there are green points we would predict as red, and there's a blue point we would predict as red as well, this one right here, so there's one of that type of error too. What other types of error are there? There's a red point we're predicting as green; there, that's this error. And there's a green point we're predicting as blue; that's this point right there, and that's this error. All the other points are correctly classified: we've got 39 reds classified as red, 34 greens classified as green, and 17 blues classified as blue. Now to get the error rates, well, there are 100 points, so I simply take all the off-diagonal entries of the confusion matrix, divide them by 100, and that gives me the error rate matrix there. And if I sum up all of these different errors, I find a 10% error rate. Now, just as in the Boolean case, when we want to pick a classifier it's very often convenient to have one number as our performance metric, and the way we do that is with a weighted sum of the different types of errors. Now, you could attach a weight to all of the different categories of errors.
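Turning a multiclass confusion matrix into the error-rate matrix and the overall error rate is mechanical; this sketch uses a small made-up matrix (the lecture's full 100-point matrix isn't fully listed, so these numbers are hypothetical):

```python
def error_rate_matrix(C, n):
    """Off-diagonal entries of C divided by n give the pairwise error
    rates E_ij; the overall error rate is their sum."""
    K = len(C)
    E = [[C[i][j] / n if i != j else 0.0 for j in range(K)] for i in range(K)]
    total = sum(E[i][j] for i in range(K) for j in range(K))
    return E, total

# Hypothetical 3-class confusion matrix with n = 25 records.
C = [[8, 2, 0],
     [1, 7, 1],
     [0, 0, 6]]
E, total = error_rate_matrix(C, 25)
print(total)   # 4 off-diagonal errors out of 25 records
```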
So in our three-class example, there are six possible types of errors, so you could have six different Kappa values and use them to weight the different errors, and that would be a perfectly fine performance measure. In fact, we don't do that. Instead, what we tend to do is use a Kappa value that is defined per column, per true class. So we'll have one weight associated with these two errors, one weight associated with these two errors, and one weight associated with the two errors in the last column. Associated with each column there is the error count, which we've denoted E_j: the number of times we mistook v_j for some other class; we can divide by n to get the error rate as well as the error count. Then we take a weighted sum of these three error rates, and that's the Neyman-Pearson error. Kappa_j is then how much we care about mistaking v_j for something else, and if Kappa_j is 1 for all j, the Neyman-Pearson error is the error rate: we're just summing up all the different errors. So to summarize: in this section, we've talked about how you measure the performance of classifiers, both Boolean and multiclass. We haven't yet talked about how you find classifiers; that's coming in the next section.
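The per-column weighting described above can be sketched as follows (my own illustration, reusing a hypothetical confusion matrix):

```python
def np_error_multiclass(C, kappa, n):
    """Multiclass Neyman-Pearson error: a weighted sum of per-true-class
    error rates.  E_j = (# records with true class j predicted as some
    other class) / n, and E_NP = sum_j kappa[j] * E_j.
    With every kappa[j] = 1 this reduces to the overall error rate."""
    K = len(C)
    E = [sum(C[i][j] for i in range(K) if i != j) / n for j in range(K)]
    return sum(k * e for k, e in zip(kappa, E))

C = [[8, 2, 0],
     [1, 7, 1],
     [0, 0, 6]]
print(np_error_multiclass(C, [1, 1, 1], 25))   # equals the plain error rate
print(np_error_multiclass(C, [5, 1, 1], 25))   # cares 5x more about mistaking class 1
```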
Stanford EE104 Introduction to Machine Learning (Full Course), 2020, Lecture 7: Constant Predictors

Hello and welcome to the section on constant predictors. The idea of this section is to explore the simplest possible predictor: the constant predictor. What constant means here is that instead of g_Theta of x depending on x, g_Theta of x totally ignores x; it always returns the same y-hat, the same prediction. We'll just call that Theta: that's the parameter that determines the predictor, and the parameter therefore determines the prediction. It will be a vector in R^m. The purpose of looking at such a simple predictor is that it gives us an understanding of what the loss means. We will look at a number of different losses and get some understanding of what it means to have a prediction that minimizes each particular loss. Another way to think about what we're doing in this section: we're looking at a linear regression model where the features are the simplest possible features. Every data element has just one feature, Phi of u equal to 1. Which is nice: it doesn't depend on u, and of course we don't even need u. How are we going to do this? Well, we're going to use ERM, empirical risk minimization, to fit Theta to the data. And because the predictor is completely insensitive to the input, we have no need for regularization. As we'll see, different losses lead to different predictors. So we're going to have data y_1 through y_n. Each of those y's will be either a scalar or a vector in R^m, and we'll have a loss function that takes two arguments, a y-hat and a y, and returns a real number. I think in every one of the predictors we analyze today, we're going to have m equal to 1, so all of the y's will in fact be scalars. Let me write that on the slide: m equals 1.
So our loss function L of (y-hat, y) quantifies how badly y-hat approximates y. We've seen a few different losses so far in this course. For scalar y, we've seen the quadratic loss, (y-hat minus y) squared; the absolute loss, the absolute value of y-hat minus y; and the fractional loss, which is the max of (y-hat over y minus 1) and (y over y-hat minus 1), and that's the percentage error once we scale it by 100. Of course, the fractional loss only applies when both y and y-hat are positive numbers. And if we were looking at m greater than 1, with a vector y, we might use the quadratic loss, the norm squared of y-hat minus y. Now we're going to choose a predictor. Our predictor just returns Theta, and I'm going to choose Theta to minimize the empirical risk, which is the average of the loss of (Theta, y_i), averaged over all the data elements y_i for i from 1 up to n. We'll solve this in these particular cases and in some others as well, and we'll see that the result you get is actually very interpretable. Now, one of the features of losses that makes them tractable, that makes it possible for us to determine either analytically or numerically exactly what the optimal Theta is, is convexity. So suppose we have a function f mapping from R^k to the real numbers; this is a real-valued function on a k-dimensional Euclidean space. We call such a function convex if it satisfies the following inequality: for all points w and z in R^k, and for all Alpha in the interval [0, 1], if we evaluate f at Alpha w plus (1 minus Alpha) z, that's less than or equal to Alpha times f of w plus (1 minus Alpha) times f of z. Now, this inequality has a nice meaning. Let's look at it.
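The three scalar losses can be written down directly; this is a small sketch in Python (the function names are mine):

```python
def quadratic_loss(y_hat, y):
    """Square loss: (y_hat - y)^2."""
    return (y_hat - y) ** 2

def absolute_loss(y_hat, y):
    """Absolute loss: |y_hat - y|."""
    return abs(y_hat - y)

def fractional_loss(y_hat, y):
    """max(y_hat/y - 1, y/y_hat - 1); times 100 it is the percentage error.
    Only meaningful when both y and y_hat are positive."""
    return max(y_hat / y - 1, y / y_hat - 1)

print(quadratic_loss(3.0, 1.0), absolute_loss(3.0, 1.0))   # 4.0 2.0
```

Note the fractional loss is symmetric in over- and under-prediction by the same factor: predicting 2 when the truth is 1 costs the same as predicting 1 when the truth is 2.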
Here we have two points, w and z, living in a k-dimensional space, and the expression Alpha w plus (1 minus Alpha) z is a parametrization of the line segment connecting z to w. When Alpha is 0, it's just z; when Alpha is 1, it's just w; when Alpha is a quarter, it's exactly one quarter of the way along the line from z to w. So this expression, Alpha w plus (1 minus Alpha) z, is called a convex combination of w and z, and it's simply a way of parameterizing the line segment between two points. On the right-hand side of the inequality we have a very similar convex combination, but of two real numbers, f of w and f of z, so we're parameterizing the line segment in one dimension from f of w to f of z. Now, what this means is the following. If my point w is, say, at 0.1 and my point z is, say, at 0.8, then f of w is somewhere over here and f of z is somewhere over here, and the inequality means that this line segment lies above the function. If, for any two points z and w that I pick, the line segment joining the corresponding points on the curve lies above the curve itself, then such a function is called convex. And over here on the right we see an example where that fails: here are two points, here is the line segment, and we can see that it falls underneath the graph of the function, so this is not a convex function. So this means the function has to curve upwards; another way to say that is that it has nonnegative curvature everywhere. And if we have a twice-differentiable function, convexity can be expressed in terms of the curvature directly: the curvature is the second derivative, and the inequality is exactly equivalent to the requirement that the second derivative of f at every point w is greater than or equal to 0.
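The defining inequality can be spot-checked numerically on random convex combinations; this is my own sketch (a sampling check, not a proof of convexity):

```python
import random

def looks_convex(f, trials=1000, lo=-10.0, hi=10.0):
    """Spot-check f(a*w + (1-a)*z) <= a*f(w) + (1-a)*f(z)
    on random w, z, and alpha.  Passing is only evidence, not proof;
    failing exhibits a witness that f is not convex."""
    random.seed(0)   # deterministic for reproducibility
    for _ in range(trials):
        w, z = random.uniform(lo, hi), random.uniform(lo, hi)
        a = random.random()
        if f(a * w + (1 - a) * z) > a * f(w) + (1 - a) * f(z) + 1e-9:
            return False
    return True

print(looks_convex(lambda x: x * x))      # True: chords lie above the parabola
print(looks_convex(lambda x: -abs(x)))    # False: a concave kink is caught
```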
And there's another way of characterizing it as well, and that is to look at the first derivative of the function f: a function is convex if and only if its first derivative is non-decreasing. In other words, as you increase w, the derivative of f can't decrease. Those two derivative conditions are stated for functions from the reals to the reals. In particular, the second condition, that the second derivative has to be non-negative, can be generalized in a straightforward way to functions on R^k, but we won't need that here. It's also worth pointing out that the notion of convexity defined by the inequality doesn't require the function to be differentiable, and it's quite reasonable to look at functions with kinks in them. So here is a function which is linear on one region, linear on another region, and has a curve for the third part, and that's still a convex function; it's simply not differentiable at this point or at this point. Now, convexity is a very important property when one is trying to solve minimization problems, or more general optimization problems, because of the following fact: if I've got a differentiable convex function, then a point w is the minimum of that function if and only if the gradient of f at w is equal to 0. When you read this expression, it's tempting to confuse it with similar expressions you've seen in your calculus class, where one tries to find the minimum or maximum of a function by looking for stationary points. Now, the trouble with stationary points is that they might be a minimum, a local minimum, a local maximum, a saddle point, or something else in higher dimensions. Here, this is not just a local minimum, this is a global minimum. This is the true meaning of the word minimize, not the colloquial meaning.
This w actually achieves the global minimum of the function f, and so one can find the global minimum simply by looking for points where the derivative is 0. Now, for convex functions on the reals, that is, one-dimensional convex functions, we can characterize explicitly the conditions under which we have a minimum point. Let's look at that, and plot such a function. Here I have a function with a point at which it's not differentiable, and yet that point is clearly the minimum. Now, I can't simply take the derivative, because the function is not differentiable there. However, for a convex function there is always a left-hand derivative and a right-hand derivative. What that means is that if I look at this point w, there's a slope to the right and a slope to the left: the slope to the left is labeled f'_minus of w, and the slope to the right is labeled f'_plus of w. They're defined in a way very similar to the usual derivative. The left-hand slope is the limit as t tends to 0 of (f of (w plus t) minus f of w) divided by t, where we're allowed to take the limit only over negative t; similarly, the right-hand slope is the limit taken only over positive t. Now, in terms of these two derivatives, w is a minimum if and only if the left-hand derivative is less than or equal to 0 and the right-hand derivative is greater than or equal to 0. Even if f is not differentiable, we still have both one-sided derivatives, so this is a condition we can use for any convex function f. If the function is differentiable, then its left-hand and right-hand derivatives are the same; the slope is the same whether we approach the point from the left or from the right, and both equal the usual derivative.
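The one-sided derivative test can be approximated numerically with small finite differences; a sketch (my own, with tolerances to absorb floating-point noise):

```python
def one_sided_slopes(f, w, t=1e-6):
    """Numerical left- and right-hand slopes of f at w
    (finite-difference approximations of f'_-(w) and f'_+(w))."""
    left  = (f(w) - f(w - t)) / t
    right = (f(w + t) - f(w)) / t
    return left, right

def is_minimum(f, w, tol=1e-9):
    """For a convex f: w is a (global) minimum iff f'_-(w) <= 0 <= f'_+(w)."""
    left, right = one_sided_slopes(f, w)
    return left <= tol and right >= -tol

# The absolute value function: slopes at 0 are -1 and +1, so 0 is the minimum.
print(one_sided_slopes(abs, 0.0))
print(is_minimum(abs, 0.0), is_minimum(abs, 1.0))
```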
For a simple example, we might look at the absolute value function. It looks like that, and we can see that if I look at the point w equal to 0, then f'_minus of 0 is minus 1 and f'_plus of 0 is 1. Clearly the left-hand derivative is negative and the right-hand derivative is positive, and therefore w is a minimum. Now, for many loss functions of interest in machine learning, the loss function itself is a convex function of the prediction y-hat; certainly all the loss functions we saw on the previous two slides are convex in the prediction. It's also true that if you take one convex function and another convex function and add them up, the sum is convex, and if you scale a convex function by a positive number, you get another convex function. As a result, the average of all the convex losses in the empirical risk, L of Theta, is another convex function: the empirical risk is a convex function of Theta. And so, by using the optimality conditions on the previous slide, we can characterize exactly when Theta minimizes the empirical risk. All we have to do is look at the left-hand derivative and the right-hand derivative and check whether they have the appropriate signs, and that tells us when Theta is a minimum. So first let's look at the simplest case, the case of square loss. This function is not only convex but also differentiable. The loss L of (y-hat, y) is the norm of y-hat minus y, squared; in the case of scalar y and y-hat, that's just the square of y-hat minus y. And the empirical risk is just the mean-square error, 1 over n times the sum over i of (Theta minus y_i) squared, because here our constant predictor g_Theta of x is just equal to Theta. Now, this is a simple least squares problem.
We can simply differentiate the objective with respect to Theta, and we'll find that the optimal Theta is 1 on n times the sum over i of y_i, which is the average of the y_i's. This is the best constant predictor when we're using the square loss. It's the average or the mean of the data. The resulting mean-square error is the variance of the data. As an example, here I have [NOISE] a bunch of data points. One here, one here, one here. And the mean of those data points is given by this red line, which is about 1.12, something like that. If I plot the loss function as a function of Theta, this is this curve here, and the minimum of that curve is exactly at the mean. This loss function is an average of the square loss applied to each of those different points. So we've actually constructed this loss function by taking 1 on n times the sum of 1, 2, 3, 4, 5, 6, 7 square functions, each of the form Theta minus y_i squared. Now let's look at the absolute loss case. Here we have the loss function l of y-hat, y is the absolute value of y-hat minus y. And so the empirical risk is the mean-absolute error. Now for this, we've got an empirical risk which is the average of a bunch of absolute value functions. It's certainly a convex function, because the loss, the absolute value of y-hat minus y, is convex in y-hat. It's piecewise linear, but it's not smooth. It's not differentiable at every point. It has kink points at the data values. And we will actually see that the Theta that minimizes the empirical risk is the median of the data. And this is a very reasonable way of making an approximation of the data. So here, it's the same set of data points. And what we can see is that this function is actually piecewise linear. If we look at these points right here, these are the kink points. Let me make those a little bigger. These are kink points in the graph.
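The claim that the mean minimizes the mean-square error for a constant predictor can be verified directly. A small sketch with made-up data values (the seven points and their mean are my own choices, not read off the lecture's plot):

```python
ys = [0.3, 0.8, 1.0, 1.1, 1.3, 1.6, 1.7]  # made-up data points

def mse(theta):
    # empirical risk under square loss for the constant predictor y_hat = theta
    return sum((theta - y) ** 2 for y in ys) / len(ys)

mean = sum(ys) / len(ys)

# The mean beats any other candidate theta on a fine grid.
candidates = [i / 1000 for i in range(-1000, 3001)]
best = min(candidates, key=mse)
print(mean)              # the optimal constant predictor
print(abs(best - mean))  # grid minimizer agrees up to the grid spacing
```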
Between these points, the graph is a straight line, and at those points, there's an abrupt change in the derivative of the function. Here we have 1, 2, 3, 4, 5, 6, 7 data points, and so if we sort those data points, the median is the fourth data point. It has three points to the left and three points to the right, and that gives us this median value right there, which is precisely the value which minimizes the empirical risk. Let's take a look at this a little bit more closely. We want to actually, first of all, define the median. And that's not quite as straightforward as it at first appears. If we have an odd number of data points, well then the median is the middle point. If we have an even number of data points, well then we have to allow ourselves the possibility that any point which has half of the data points to the left and half of the data points to the right might be reasonably considered a median. Let's first of all write that down mathematically and then do the analysis. Let's look at an example first. Suppose we have data y_1 through y_n. If n is odd, then the median is simply the middle point, which is y_(n plus 1, over 2). And that's completely well-defined. If n is even, well then we say Theta is a median if Theta is anywhere between y_(n over 2) and y_(n over 2, plus 1). The median is not unique. So if I have these three data points, minus 3.3, minus 1.7, and 0.4, well, there are three data points and the median is just the middle one, which is minus 1.7. If I have four data points, so I add a new data point at 4.9, well then the median is any number between minus 1.7 and 0.4. Now to characterize the median precisely, we will define two quantities. The first of which is going to be called n_1, and it's a function of Theta, and it is the number of data points strictly less than Theta. The second one is n_2, also a function of Theta; it's the number of data points strictly greater than Theta.
And Theta is a median of the data if n_1 divided by n, the fraction of data points strictly less than Theta, is less than or equal to half, and n_2 divided by n, the fraction of data points strictly greater than Theta, is also less than or equal to one-half. Now, if Theta is not equal to a data point, then the number of data points strictly less than Theta and the number of data points strictly greater than Theta are related, of course. They add up: n_1 plus n_2 is going to add up to n, and so both of these conditions collapse down to one condition, which is just that n_1 divided by n is a half. The fraction of data points less than Theta being a half implies that the fraction of data points greater than Theta is a half. If Theta is equal to a data point, well then we need two conditions, not just one condition, to characterize the median, because there may be a certain number of data points equal to the value of Theta. Now we can use this characterization in order to show that the median, or a median, because the median is in general not unique, minimizes the empirical risk when we're using the absolute loss. And it is the case that there may be more than one minimizer of the risk, and every one of those minimizers is a median. And conversely, you pick any median of the data and that will be a minimizer of the risk. How do we see this? Well, let's first of all just assume that the data is sorted. So we'll have y_1 less than or equal to y_2, all the way up to y_n. Of course that doesn't make any difference to the problem; it doesn't change the empirical risk, because the empirical risk is just the average of the loss evaluated at those points. So we can order them any way we want to. Let's evaluate the empirical risk. The empirical risk is the sum over the data of the absolute value of Theta minus y_i, all multiplied by 1 on n. For those data points for which Theta is less than y_i.
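This counting characterization of the median translates directly into code. A minimal sketch, using the three- and four-point examples from the lecture (the name `is_median` is my own):

```python
def is_median(theta, ys):
    # theta is a median iff at most half the data lies strictly on each side
    n = len(ys)
    n1 = sum(y < theta for y in ys)  # number of points strictly less than theta
    n2 = sum(y > theta for y in ys)  # number of points strictly greater than theta
    return n1 / n <= 0.5 and n2 / n <= 0.5

data3 = [-3.3, -1.7, 0.4]           # odd n: the unique median is -1.7
print(is_median(-1.7, data3))       # True
print(is_median(0.4, data3))        # False

data4 = [-3.3, -1.7, 0.4, 4.9]      # even n: any point in [-1.7, 0.4] works
print(is_median(0.0, data4))        # True
```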
That is, the absolute value of Theta minus y_i is minus the quantity Theta minus y_i. For those data points where Theta is greater than y_i, the absolute value of Theta minus y_i is just Theta minus y_i. So we split up that sum into those two categories. So first of all, we sum over the points for which y_i is less than Theta. Those are the first n_1 data points. And then we sum over the points for which y_i is greater than Theta. And those are the last n_2 data points. There may be some other data points at which Theta is equal to y_i. Those contribute 0, because the loss at those points is 0. And so they don't enter into this sum. If there aren't any such data points, Theta is not equal to a data value. Well, we can differentiate this sum. Differentiating this with respect to Theta is easy. We get n_1 on n minus n_2 on n. However, we actually need to find the minimum point. And it may be that the minimum point of this function is at a kink point, at a point where the function is not differentiable. And that may or may not be the case. We may be in the case where the function looks like this, in which case the minimum is any point here. Or we may be in the case where the function has a kink in it, in which case the minimum is there. Now if we're in the case where there's a kink in that function and the minimum is at the kink, well, now we can't simply differentiate L of Theta. And so what we need to do is look at the left-hand and right-hand derivatives. To do this, we will assume, first of all, that Theta is just to the left of a data point. Now, when Theta is just to the left of a data point, well, certainly there's no possibility that there is a data point at Theta. And so n_1 and n_2 are related, because n_1 plus n_2 equals n. And so I can evaluate this expression knowing that n_2 is equal to n minus n_1. And then I can differentiate it. And when I do that, I end up with this expression right here for L dash minus of Theta.
And I can do exactly the same thing when I'm looking at the right-hand derivative. And when we look at the right-hand derivative, I can substitute again: n_1 plus n_2 is equal to n, because I know Theta's not at a data point. And here we've chosen to eliminate n_1 rather than eliminating n_2, and so we get a slightly different expression for L dash plus. Now, Theta being optimal means that L dash minus of Theta is less than or equal to 0 and L dash plus of Theta is greater than or equal to 0. And those two conditions come immediately from here. 2n_1 over n minus 1 is less than or equal to 0 means n_1 over n is less than or equal to a half. Similarly, the right-hand condition becomes n_2 over n is less than or equal to a half. And those are the precise conditions under which Theta is the median. And so we've shown that Theta is the median is equivalent to Theta minimizing the empirical risk. [NOISE] Now I want to turn to a different loss function, and we're going to construct it using this function, the tilted absolute value function. What it is, is a parameterized family of functions. It has a parameter Tau in it, which must be between 0 and 1, and then it gives us a penalty function. We will use it to penalize the difference between the prediction y-hat and the actual y, so u here corresponds to y-hat minus y, and we're going to have p of u be either minus Tau times u, when u is less than 0, or 1 minus Tau times u, when u is greater than or equal to 0. And it is exactly what the name suggests. When Tau is a half, it's like an absolute value function, just scaled so it's equal to one half the absolute value function.
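The conclusion that the median minimizes the empirical risk under absolute loss can be checked by brute force. A small sketch with made-up data (seven points, so the median is the fourth sorted value):

```python
ys = [0.3, 0.8, 1.0, 1.1, 1.3, 1.6, 1.7]  # made-up data, n = 7

def mae(theta):
    # empirical risk under absolute loss for the constant predictor theta
    return sum(abs(theta - y) for y in ys) / len(ys)

median = sorted(ys)[len(ys) // 2]  # middle value, since n is odd

# No grid point does better than the median.
candidates = [i / 1000 for i in range(0, 2001)]
print(median)                                          # 1.1
print(all(mae(median) <= mae(c) for c in candidates))  # True
```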
If we increase Tau, then we see that it tilts, and it is larger for negative u than for positive u. And if we decrease Tau, then it tilts the other way and it becomes larger for positive u than for negative u. And there's a nice expression for it, which is explicit and avoids the cases: one half minus Tau, times u, plus one half multiplied by the absolute value of u. Now, if we use it as a loss function, well then we've got the tilted absolute loss: the loss of y-hat, y is p Tau of y-hat minus y, and so the risk will be the average tilted absolute loss. Now, this function L of Theta, the risk, is convex and it's piecewise linear, because the tilted absolute loss is a convex piecewise linear function of y-hat, and when we take the average of such functions, we get another one. And it turns out that it has kink points at the data y_1 to y_n. Now, if Tau is less than a half, well, that means that when y-hat is less than y, we are going to have a small value of the penalty function compared to when y-hat is greater than y, and so it's going to be worse to overestimate than to underestimate; we're going to prefer to make underestimates. For Tau greater than a half, we prefer to overestimate, and it's worse to underestimate. And so there are situations where one would prefer to make an overestimate than an underestimate. If we're measuring the quantity of something dangerous, we would prefer to overestimate that quantity rather than underestimate that quantity. And we'll see that Theta is optimal if it's a Tau quantile of the data, and that means that roughly the fraction of the y's less than Theta is around Tau. So here's the empirical risk.
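Both forms of the tilted absolute value function are simple to implement, and one can check numerically that they agree. A minimal sketch (the function names are my own):

```python
def p_tau(u, tau):
    # tilted absolute value: -tau*u for u < 0, (1 - tau)*u for u >= 0
    return -tau * u if u < 0 else (1 - tau) * u

def p_tau_closed(u, tau):
    # equivalent case-free form from the lecture: (1/2 - tau)*u + |u|/2
    return (0.5 - tau) * u + 0.5 * abs(u)

# The two expressions agree, and tau = 1/2 gives half the absolute value.
us = [-2.0, -0.5, 0.0, 0.5, 2.0]
print(all(abs(p_tau(u, 0.25) - p_tau_closed(u, 0.25)) < 1e-12 for u in us))  # True
print(p_tau(-2.0, 0.5), p_tau(2.0, 0.5))  # 1.0 1.0, i.e. |u|/2
```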
We have here a collection of data points, and the risk function has kinks in it directly above the data points, and in between those data points it is piecewise linear, so these segments joining the data points are straight lines. And the optimal Theta is right here at a data point, it's at 0.5, and this is where Tau is 0.25. And we can see that this loss function is going to increase like this, it will get steeper, and this loss function actually turns out, it doesn't get any steeper at all. And because these are the only data points that we have, we know that this loss function continues straight, and this loss function continues straight. And we can see that there's a penalty for underestimating, a penalty for overestimating, and the penalty for overestimating is greater than the penalty for underestimating. So just like we did for the median, we need to define carefully what a Tau quantile is, and then we'll see that the Tau quantiles are the things that minimize the empirical risk. So, for Tau between 0 and 1, Theta is a Tau quantile if n_1 on n is less than or equal to Tau, which is less than or equal to 1 minus n_2 on n. So, remember n_1 is the number of data points strictly less than Theta, so n_1 on n is the fraction of the data less than Theta, and n_2 on n is the fraction of the data greater than Theta, and so 1 minus n_2 on n is the fraction of the data less than or equal to Theta.
Now, if Theta doesn't equal any of the data points, well then n_1 and n_2 are related, because n_1 plus n_2 is then going to equal n, and so these two inequalities reduce to one, and actually to an equation: we have n_1 on n is going to be less than or equal to Tau, and the other inequality is going to reduce to n_1 on n is greater than or equal to Tau. So we're going to have two inequalities that go in opposite directions, and as a result, those two inequalities boil down to Tau is n_1 on n. The fraction of data points less than Theta is equal to Tau. If Theta is at a data point, then you have to be careful to account for the number of data points equal to Theta. Quantiles have names. One of them we've seen already: when Tau is a half, it's the median. Others are the quartiles, Tau is a quarter, Tau is a half, Tau is 0.75; the deciles, Tau is 0.1 through 0.9; and the percentiles. Let's look at some examples of quantiles. Here we have a plot. On the left, we see the Tau quantiles, and on the horizontal axis we see Tau. We've got five data points: 4, 7, 7, 8 and 9. Let's just mark those. 4, 7, 7, 8 and 9. And if we pick a Tau of 0.1, then that's a unique quantile, Theta is equal to 4. If we pick a Tau of 0.2, then there's a range of quantiles between 4 and 7. Any number between 4 and 7 is a 0.2 quantile. Here at 0.5, the corresponding 0.5 quantile is 7. Now, the Tau quantile minimizes the empirical risk when you have the tilted absolute loss. And this is exactly like the argument we used in the case of the median. Let's say this precisely: we'll say that Theta minimizes the empirical risk if and only if it's one of the Tau quantiles. And here the empirical risk is defined with the tilted absolute loss, and the tilted absolute loss has the parameter Tau in it. And the argument goes exactly the same way. We assume the data is sorted, and then the loss has n terms in it.
Each one has the form p Tau of Theta minus y_i, but those expressions for p Tau depend on whether Theta is less than y_i or Theta is greater than y_i. And so we split that sum up into terms for which Theta is less than y_i and terms for which Theta is greater than y_i. Now, if Theta is not equal to a data value, we can just immediately differentiate this with respect to Theta. And we find this nice expression. We can evaluate this to find out whether or not we're at a point where L dash of Theta is 0. But if we're looking to check whether or not we have a minimum, we know that this function L is not differentiable, and so that's not a sufficient test for Theta being optimal, and instead we need to look at the left and the right derivatives. And we do the same trick as before. We consider a point. And in order to evaluate the left-hand derivative, we'll evaluate it at a point slightly to the left of the point. If we move slightly left of the point, then n_1 doesn't change, but n_2 does, because n_2 increases by the number of points actually at that point. And so we eliminate n_2 from the equation, since we know that n_1 plus n_2 is equal to n, and that gives us this expression here for L dash minus of Theta. To evaluate the right-hand derivative, we move slightly to the right. When we move slightly to the right, we again know n_2, but we don't know n_1, because n_1 depends on the number of points at that point. And so we eliminate n_1, because we know n_1 plus n_2 is n, and we get a nice expression for L dash plus of Theta. Then to check for optimality, we have to check that L dash minus of Theta is less than or equal to 0 and L dash plus of Theta is greater than or equal to 0, which gives us the inequalities that define the Tau quantile. Let's look at one more case. This is the fractional loss case.
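One can confirm by brute force that minimizing the average tilted absolute loss recovers a Tau quantile; a grid search stands in for the left and right derivative argument. A small sketch using the same five data points:

```python
def pinball(u, tau):
    # tilted absolute value penalty
    return -tau * u if u < 0 else (1 - tau) * u

def risk(theta, ys, tau):
    # empirical risk under the tilted absolute loss
    return sum(pinball(theta - y, tau) for y in ys) / len(ys)

ys = [4, 7, 7, 8, 9]
tau = 0.25
# The unique 0.25 quantile of this data is theta = 7:
# n1 = 1 (only 4 is below), n2 = 2 (8 and 9), so 0.2 <= 0.25 <= 0.6.
candidates = [i / 100 for i in range(300, 1100)]
best = min(candidates, key=lambda t: risk(t, ys, tau))
print(best)  # 7.0, the 0.25 quantile
```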
Now, the fractional loss is this loss right here: the loss of y-hat, y is the maximum of y-hat divided by y, minus 1, and y divided by y-hat, minus 1, which has this nice expression as the exponential of the absolute value of the difference between the logarithms, minus 1. And it's this function. It's curved. Here we are in the case where y is 2, and we're looking at y-hat. If y is 2 and y-hat is 2.5, well, then y-hat is 25% more than y. If y is 2 and we're looking at a y-hat of 1, well, then we have that y is 100% more than y-hat. And obviously as y-hat tends to 0, the percentage by which y is more than y-hat is going to tend to infinity. The empirical risk is therefore the average of the fractional loss. This is a convex function, because the fractional loss is convex, as we can see from the plot. And we're going to call the Theta that minimizes L of Theta the fractional middle of y_1 through y_n, and that's actually not a standard term, but it's convenient. This is a plot of the empirical risk as a function of Theta. There are, in fact, kinks in this plot. They can be quite hard to see, but they lie exactly above the data, right here, and here, and here. But the function is not piecewise linear between any two kinks. It is curved. The data is right here. And we can see that there is a minimum. This is the fractional middle that's marked in red, which doesn't occur at a data point. And this segment right here is actually a curved segment that has a minimum right there. And we go through exactly the same kind of analysis we did before: we split the data into data points less than Theta and data points greater than Theta, and that gives us two sums. For one of the sums, we end up with one of the expressions in the fractional loss. And for the other sum, we end up with the other expression of the fractional loss. We can collect all the constant terms at the beginning, and so then we'll end up with an expression like this.
Now, if we're at a Theta between two particular data points, y_k and y_k plus 1, then L dash of Theta is easy to evaluate. We simply look at this fine expression we have, differentiate it with respect to Theta, and we have this expression right here. Now, because the empirical risk is convex, the gradient is going to be an increasing function of Theta, and all we need to do to find the minimum is to find out where the gradient crosses 0. So we go through all the data points one at a time, looking at k, till we find one of the k's such that when we evaluate this derivative here at the beginning of the interval, at y_k, we have a derivative which is less than or equal to 0, and when we evaluate that derivative at the other end of the interval, at y_k plus 1, we have a derivative which is greater than or equal to 0. Then we've found the interval in which the optimal Theta lies, and then we can take that derivative, set it to 0, and solve to find the corresponding Theta. And that gives us this nice expression right here for the optimal Theta. So the procedure to find that Theta is: first, we have to find k. We have to find k such that this expression here is less than or equal to 0 when we evaluate it at Theta is y_k, and it's greater than or equal to 0 when we evaluate it at Theta is y_k plus 1. And then we just use this formula to give us the optimal Theta. Let's summarize. The simplest predictor is a constant: y-hat is Theta. Different losses give you different Thetas when you apply empirical risk minimization. And for some common losses, you actually get well-known predictors: for the square loss, the predictor is the mean; the absolute loss gives you the median; and the tilted absolute loss gives you the quantile. It's worth also me pointing out at this point that even though this section had quite a lot of algebra and technicalities in it, the technicalities and the algebra don't really matter.
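The closed-form expression for the optimal Theta is not written out in the transcript, so the sketch below reconstructs it from the described procedure: for Theta in (y_k, y_k plus 1), the derivative of the risk is proportional to the sum over i up to k of 1 over y_i, minus 1 over Theta squared times the sum over i greater than k of y_i, which crosses zero at Theta equal to the square root of that tail sum divided by that reciprocal sum. This is my own derivation, assuming positive data, not a formula quoted from the lecture:

```python
import math

def fractional_loss(y_hat, y):
    # max(y_hat/y - 1, y/y_hat - 1), for positive y_hat and y
    return max(y_hat / y - 1, y / y_hat - 1)

def fractional_middle(ys):
    # Scan the intervals (y_k, y_{k+1}) of the sorted data for the
    # zero crossing of the derivative of the empirical risk.
    ys = sorted(ys)
    n = len(ys)
    for k in range(1, n):
        inv = sum(1 / y for y in ys[:k])   # sum of 1/y_i for the points below
        tail = sum(ys[k:])                 # sum of y_i for the points above
        theta = math.sqrt(tail / inv)      # interior zero of the derivative
        if theta <= ys[k]:
            # crossing lies in this interval, or at its left kink point
            return max(theta, ys[k - 1])
    return ys[-1]

data = [1.0, 2.0, 4.0, 8.0]  # made-up positive data
theta = fractional_middle(data)

# Sanity check against brute force on a fine grid.
def risk(t):
    return sum(fractional_loss(t, y) for y in data) / len(data)

grid = [i / 1000 for i in range(500, 10001)]
best = min(grid, key=risk)
print(theta, abs(theta - best) < 1e-2)
```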
What matters here is the interpretation of the losses and the interpretation of the results that they give you. You should know that when you're going to do machine learning with a square loss, you're going to get an answer which corresponds to the mean. And if you use the absolute loss, you're gonna get something corresponding to the median, and have some intuition about how those things behave. In particular, we know that the median is kinda insensitive to the position of outliers, whereas the mean is very sensitive to the position of outliers. The tilted absolute loss is very useful, because very often we really do want an estimate which is preferentially underestimating, or preferentially overestimating, the true y.
Stanford EE104, Lecture 6: Empirical Risk Minimization

Hello and welcome to the Empirical Risk Minimization section. This section is, in many ways, the heart of the class. Empirical risk minimization is the process by which predictors are learned from data. So many of the predictors that we've seen, in fact, many predictors overall, have a parameterized form. We think about y hat is g of x and Theta, where g is a function that determines the structural form of the predictor. It might be a neural network, or it might be a linear predictor, or it might be a tree. And Theta is a set of parameters. It might be a vector, or a matrix, or some other data structure, and that set of parameters is going to be one of the determinants that produces the output y hat. And when we learn, we're going to learn by choosing the parameters Theta, and we're going to leave g fixed. So for example, we might consider linear regression, where y is scalar, and we'd have y hat is g Theta of x. And here, g Theta of x would take the form Theta 1 times x_1 plus Theta 2 times x_2, all the way up to Theta d times x_d. And so Theta here is a parameter; it's a d-dimensional vector. And we might write y hat is Theta transpose multiplied by x. We also might have a predictor for a vector y in R_m. If that's a linear regression model, then we would have y hat is g Theta of x. It's also Theta 1 times x_1 plus Theta 2 times x_2, all the way up to Theta d times x_d. But here, each of the Theta i's is an m-dimensional vector, and the x's, x_1 through x_d, are the coefficients which determine a linear combination of the vectors Theta 1 through Theta d. Often, we would write that in terms of a matrix. So we'd write a matrix Theta, which is a d by m matrix, and the ith row of that matrix is Theta i transpose.
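The matrix form of the linear predictor is easy to state in code. A minimal sketch with made-up dimensions and numbers:

```python
import numpy as np

# Linear predictor for vector y: y_hat = Theta^T x, with Theta a d x m matrix.
d, m = 3, 2                       # made-up dimensions
Theta = np.array([[1.0, 0.0],
                  [0.0, 2.0],
                  [1.0, 1.0]])    # d x m parameter matrix
x = np.array([1.0, 2.0, 3.0])     # feature vector in R^d

y_hat = Theta.T @ x               # prediction in R^m
print(y_hat)  # [4. 7.]
```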
And then we can express the relationship y hat is g Theta of x as y hat is Theta transpose times x, just as before. We might have other types of predictors as well. We might have a tree prediction model, in which case Theta would encode the tree. It would tell us the thresholds at each of the vertices of the tree and the leaf values. Now, we're going to choose which particular Theta to use based on some training data. We're going to have n data pairs, x_i, y_i, i is 1 up to n. And that's going to be the training data, and we will use it to fit the model. And this is called the training process. And there are many different training processes that vary based on what kind of predictor we're choosing to fit and what our performance metric is. So for example, if we're training a linear regression model and if y is scalar, then we might use something like least squares. We will choose Theta to minimize the sum over all the data points, i is 1 to n, of y_i minus the predicted value y hat i squared. And this notation, y hat i: what it means is g Theta evaluated at x_i. So y hat i is the prediction when the predictor is fed with the ith value of x. And as a result, we say, well, we're going to choose Theta to minimize the sum of the squares of the prediction errors, which is the sum from i is 1 to n of g Theta of x_i minus y_i squared. Now, that's a very reasonable way of learning a predictor. In this lecture, we'll actually cover a more general method, which is very widely used and it's very effective. And it's called empirical risk minimization. And what it is at its heart is a generalization of the least squares idea. And the way this works is we have a loss function. A loss function takes two vectors as its input, a y hat and a y, and it gives you back a real number. And it quantifies how close y hat is to y. Really, what it does is it quantifies how badly y hat approximates y.
Because normally, the loss function of y hat and y is small when y hat is close to y and large when y hat is different from y. So if the loss function is small, we will say that y hat is really a good approximation of y. And if the loss function is large, we'll say it's a bad approximation of y. And it's very common that we actually arrange things so the loss function evaluated when y hat is equal to y is zero, and the loss function when y hat is not equal to y is non-negative. So here are some very common examples. The first is the quadratic loss function. I've got a scalar y and therefore a scalar y hat, and the quadratic loss is just y hat minus y squared. If I've got vectors for y and y hat, then the quadratic loss is the 2-norm or the Euclidean norm of y hat minus y, squared, which is just the sum of the squares of the differences between y hat i and y_i. Another common loss is the absolute loss: if I've got a scalar y and a scalar y hat again, then the absolute loss is just the absolute value of y hat minus y. All right. Here's another loss. This is the maximum of y hat divided by y, minus 1, and y divided by y hat, minus 1. So this is the fractional loss or the relative loss. So if y hat is 20% more than y, then y hat over y minus 1 will be 0.2. And if y is 20% more than y hat, then y over y hat minus 1 will be 0.2. Another convenient way of expressing this is as the exponential of the absolute value of the log of y hat minus the log of y, minus 1. And often, we might scale it by 100, and then it really is percentage error. We often use fractional loss for quantities which range over a very wide range of magnitudes. We saw last time the example of website visits, which can range over many orders of magnitude.
Another case where we might use fractional loss is where we're dealing with prices, where very often we're more concerned with the percentage difference in two prices than the absolute difference in two prices. And we'll see some particular interpretations of these losses, and cases where some of them are better, more naturally suited, than others. And we'll also see many other possible loss functions in this class. Now, we start off with a loss function, and from it we can construct something called the empirical risk. And the empirical risk is just the average loss over the data points. So in order to compute the empirical risk, we need to have a bunch of data, we need to have a predictor, and we need to have a loss function. And then for each data point, we simply compute the loss between g Theta of x_i and y_i. And then, in order to compute the empirical risk, we average that loss over all the data points. And then, if the empirical risk is small, the predictor does a good job on average over all the data, at least according to the particular loss function that we've chosen. We usually write the empirical risk as a function of the Theta that parameterizes the predictor. Of course, it's also a function of the dataset, but we suppress that in the notation. And you might say, well, this is quite similar to what we talked about before, when we talked about performance metrics. And that's absolutely true. In both cases, we're measuring how well a predictor does. The difference is that a performance metric is something that we use to judge how well our predictor is doing, whereas empirical risk is something that we're going to use to train the model, to train the predictor. We're going to choose which predictor we're going to use according to a procedure which is trying to make the empirical risk small. And very often, empirical risk and performance metric are the same.
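The definition of the empirical risk as an average loss can be written down generically. A minimal sketch (the function and argument names are my own), shown with a constant predictor and two of the losses from the lecture:

```python
def empirical_risk(theta, data, loss, predictor):
    # average loss of the predictor g_theta over the training pairs (x_i, y_i)
    return sum(loss(predictor(theta, x), y) for x, y in data) / len(data)

# Example: a constant predictor ignores x entirely.
data = [(None, 1.0), (None, 2.0), (None, 4.0)]  # made-up pairs; x is unused
constant = lambda theta, x: theta
square_loss = lambda y_hat, y: (y_hat - y) ** 2
absolute_loss = lambda y_hat, y: abs(y_hat - y)

print(empirical_risk(2.0, data, square_loss, constant))    # mean-square error, 5/3
print(empirical_risk(2.0, data, absolute_loss, constant))  # mean-absolute error, 1.0
```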
And you might say, well, why not just choose the empirical risk equal to the performance metric? And we're going to have quite some details to say about that in the rest of this class. Usually, what we try to do is pick the empirical risk to correspond to a performance metric. Sometimes training works better when you don't do that. Here are some more examples of empirical risk. For example, if you've got quadratic loss and scalar y, well then the empirical risk is the mean square error. If you've got scalar y and you're using an absolute loss function, then the empirical risk is the mean absolute error. So let's talk about empirical risk minimization. It's the method according to which we choose Theta. And the idea is very simple: choose Theta to minimize the empirical risk. And one way to say that is to say that what we're doing with Theta is we're trying to make the average loss small over the entire dataset. It's a way of getting our predictor to match up with the dataset well. Sometimes you can actually solve empirical risk minimization exactly, analytically. And so in particular, if g Theta is a linear predictor and we're using a square loss function, then the empirical risk minimization problem is the least squares problem. And that's something that we can solve analytically. There is an explicit formula for the optimal Theta. In most cases, it doesn't work like that. In most cases, there is no analytic solution to the minimization problem. There's no formula. And instead, we have to use numerical optimization to find a Theta that minimizes the empirical risk. And usually it's actually slightly worse than that, in that the numerical procedures that we use cannot guarantee to find the Theta that actually minimizes the empirical risk, but instead can only guarantee to approximately minimize the empirical risk. But there are also reasons why that's okay.
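For the linear predictor with square loss, the analytic solution is the familiar least squares formula, Theta equals (X transpose X) inverse times X transpose y, when X transpose X is invertible. A sketch with made-up data that lies exactly on a line:

```python
import numpy as np

# n x d feature matrix (first column is a constant feature) and targets;
# the data is made up so that y = 1 + 2*x exactly.
X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
y = np.array([1.0, 3.0, 5.0, 7.0])

# Normal equations: (X^T X) theta = X^T y.
theta = np.linalg.solve(X.T @ X, X.T @ y)
print(theta)  # [1. 2.]

# np.linalg.lstsq solves the same least squares problem more robustly.
theta2, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.allclose(theta, theta2))  # True
```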
We typically don't want or need a Theta that is a perfect minimizer; an approximate minimizer is fine. And we'll have more to say on that as well. Now, the particular value of Theta that you get, the particular predictor that you get, depends on the particular loss that you chose. And how are we going to determine which loss we should choose, given that we've seen mean square error, mean absolute error, mean fractional error, and many other potential loss functions? And the answer to these kinds of questions is always the same. We validate against some external test set. And when we do the validation, we don't always validate with the same error measure that we've chosen to train with. We don't validate with the risk. We validate with the performance metric. Now, there is one more wrinkle that we add to empirical risk minimization, which it turns out can make it work a lot better. And that is this thing called regularization. And regularization works as follows. We're very concerned when we're training a predictor that we don't overfit the predictor; we don't want a predictor that is tuned very well to features of the training data that aren't features that are generically in the data. In other words, if we take some other dataset from the same phenomenon, we want to make sure that our predictor is tuned well to those features that occur in that other dataset as well, and not to particular wiggles that only showed up in our training set. The way we do that is we look at the sensitivity of the predictor. We look at how much it responds to small changes in x. So we would call a predictor g Theta insensitive if, whenever x is near x tilde, g Theta of x is also near g Theta of x tilde. Another way to say it is that if the features are close, then the predictions will also be close. There are many ways that you can make this more mathematically precise. One is the notion of continuity.
And there are other, more quantitative notions than continuity as well. But the key point is not so much how you measure the sensitivity of the predictor; it's the benefit you get. By making a predictor which is insensitive to the features x, you make a predictor that generalizes well to new datasets. And that's particularly important when you don't have a lot of training data. So insensitivity is a good attribute for a predictor to have, despite sounding like a bad one. And a regularizer, that's a function of Theta that measures how sensitive the predictor g Theta is. So the regularizer is a function r: it takes as input Theta and it returns a real number. Here we've used R_p to indicate the space in which Theta lives; it is some p-dimensional space. If y is scalar and we're using a linear predictor, then p is equal to d. If y is m-dimensional and x is d-dimensional, then p is equal to d times m. And if we're using a neural network, then p might be a much larger number. So r of Theta is chosen such that it's small when g Theta is insensitive and it's large when g Theta is sensitive. There are some cases where we can, in a straightforward way, quantify how sensitive a predictor is. For example, in a linear regression model, where g Theta of x is Theta transpose x, small sensitivity corresponds to small Theta. And the way to see that is to look at what's called the Cauchy-Schwarz inequality. Remember the Cauchy-Schwarz inequality? Let me write it down. It says that if I've got two vectors, p and q, then the absolute value of p transpose q is less than or equal to the 2-norm of p multiplied by the 2-norm of q. We're going to need a slight generalization of that to state the following result. So suppose g Theta of x is Theta transpose x, and I look at the sensitivity by measuring the norm of the difference between g Theta of x and g Theta of x tilde.
Well, that's equal to the norm of Theta transpose multiplied by x minus x tilde. And the norm of Theta transpose times x minus x tilde is less than or equal to the norm of Theta multiplied by the norm of x minus x tilde. Now, the tricky thing here is the choice of norm that we use for that inequality. First of all, let me remind you of the result we're using: the absolute value of Theta transpose times a vector x is less than or equal to the 2-norm of Theta times the 2-norm of x. This is true when both Theta and x are vectors; here we've got Theta in R_d and x in R_d. Now I want to also consider the case where y is not a scalar but a vector. And then we must look at what happens when we multiply a matrix times a vector, and ask if there's a simple way of constructing a bound on that quantity. So here, let's consider a matrix A and look at the 2-norm squared of the matrix A times the vector x. We know what that is: that's the sum over i from 1 up to m of a_i transpose x, squared. Here what I've done is I've written A in terms of its rows, a_1 transpose up to a_m transpose. So each little a_i here is a vector in R_d, and A here is a matrix in R_m by d. Now this quantity here, a_i transpose x, well, we know how to bound that, because that comes from the straightforward Cauchy-Schwarz inequality. So this is less than or equal to the sum over i from 1 up to m of the norm of a_i squared times the norm of x squared, using the square of the Cauchy-Schwarz inequality. And of course, we can pull the norm of x squared out of the sum, because x doesn't depend on i. And the remaining quantity is the sum of the norm squared of each of the rows of the matrix A. And for each row of the matrix A, we can calculate its norm squared by taking the sum of the squares of its entries.
And then if we sum over all the rows, what we've effectively done is taken the sum of the squares of all of the entries of A. So that's equal to the sum over i from 1 up to m, and over j from 1 up to d, of a_ij squared, multiplied by the norm of x squared. And this quantity is known as the Frobenius norm squared of the matrix. It's the analog of the Euclidean norm of a vector: it just takes the sum of the squares of all of the entries of the matrix A and square roots them. So we can apply that result to our predictor, our vector predictor, and we find that the norm of Theta transpose times x minus x tilde is less than or equal to the Frobenius norm of Theta multiplied by the norm of x minus x tilde. And this suggests that maybe we should use as a regularizer the norm of Theta squared in the Frobenius norm. That is one possible choice, and it's a very common choice for a regularizer. If we keep the regularizer small, if we choose Theta that makes r of Theta small, then we will make our predictor less sensitive to x. Now, when y is scalar, the most common regularizer to use is simply the sum of the squares of the components of Theta, the 2-norm of Theta squared. That has a name: it's called ridge regularization. For vector y, we take the sum of the squares of all of the entries of the matrix Theta, the Frobenius norm of Theta squared, and that's also called ridge regularization. Another very popular regularizer is to take the 1-norm of Theta. When Theta is a vector, that's the sum of the absolute values of the entries of Theta. And when Theta is a matrix instead of a vector, we take the sum of the absolute values of all of the entries of Theta. Now, when there is a constant feature, for example when x_1 is 1, then we do something slightly different when we're doing regularization.
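The bound just derived, that the 2-norm of A times x is at most the Frobenius norm of A times the 2-norm of x, is easy to check numerically. This is a hedged sketch with randomly generated A and x purely for illustration.

```python
import numpy as np

# Numerical check of the bound derived above: ||A x||_2 <= ||A||_F ||x||_2.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))     # a matrix in R^(m x d), here m=3, d=4
x = rng.standard_normal(4)          # a vector in R^d

lhs = np.linalg.norm(A @ x)                           # ||A x||_2
rhs = np.linalg.norm(A, "fro") * np.linalg.norm(x)    # ||A||_F ||x||_2
assert lhs <= rhs
```

Equality holds only in special cases (for example, when A has rank one and is aligned with x); generically the inequality is strict.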
This often happens because we very often would like to have a predictor with a constant term in it. So for example, if g Theta is a linear predictor, then g Theta of x is Theta transpose x. And if x_1 is chosen to be 1, then g Theta of x will be Theta_1 plus Theta_{2:d} transpose times the remaining components of x. And we also do this when we have neural network predictors. In the general matrix case, g Theta of x will look like Theta transpose x, but now the part which corresponds to x_1 equal to 1 will be the first row of Theta. Here we've written that as Theta_{1,:}. And so the resulting predictor will be g Theta of x is Theta_{1,:} transpose plus Theta_{2:d,:} transpose times x_{2:d}. Let's be explicit about what the notation means. The notation A_{2:3, 4:7} means take a slice out of the matrix, consisting of the second and third rows and the columns 4, 5, 6, and 7. I can also write things like A_{2:3, :}, which means take the entire second and third rows, in every column. This is notation that's used by MATLAB and Julia, and it has now spread its way back from programming into mathematics. So g Theta of x is a constant plus a linear term in x. Now, the constant terms don't affect the sensitivity. We can see this if we evaluate g Theta of x minus g Theta of x tilde: the constant terms simply cancel out, and we're left with Theta_{2:d} transpose multiplied by x minus x tilde. As a result, there's no need to regularize the first row of Theta if x_1 is constant. And we use a regularizer which is simply a norm, or some other function, of the remaining entries of Theta. We might use the Frobenius norm squared of the last d minus 1 rows of Theta. And that brings us to regularized ERM.
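The slicing notation just explained has a direct counterpart in NumPy, with one caveat: NumPy indexing is 0-based and half-open, whereas the MATLAB/Julia notation in the lecture is 1-based and inclusive. A small illustrative sketch:

```python
import numpy as np

# The lecture's 1-based, inclusive slice A_{2:3, 4:7} corresponds in NumPy's
# 0-based, half-open indexing to A[1:3, 3:7]: rows 2-3 and columns 4-7.
A = np.arange(56).reshape(7, 8)   # an example 7x8 matrix

block = A[1:3, 3:7]   # A_{2:3, 4:7} in the lecture's notation
rows = A[1:3, :]      # A_{2:3, :}: the entire second and third rows
```

So `block` is a 2-by-4 submatrix and `rows` is 2-by-8, matching the verbal description of the slices.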
That's a method where, instead of choosing a Theta that simply minimizes the empirical risk L of Theta, we also take into account the sensitivity of the predictor. We try to find a Theta which makes both L of Theta small and r of Theta small. The way you do that is via the technique called regularized ERM, where we choose Theta to minimize the weighted sum L of Theta plus Lambda times r of Theta. Lambda here is a non-negative number; it's a parameter, called the regularization hyper-parameter. And hyper-parameter here means that instead of being a parameter that's learned from the training data directly, it's a parameter we're going to choose via a different process, which I will tell you about in a second. Now, when Lambda is 0, regularized ERM just reduces to ERM. When Lambda is very large, the Theta that minimizes L of Theta plus Lambda r of Theta is going to be very close to the Theta that just minimizes r of Theta, and so we're going to end up with a predictor which is very insensitive but may not do very well at fitting the training data. And when Lambda takes intermediate values between 0 and very large, then we're going to get some balance between minimizing L of Theta and minimizing r of Theta. In most cases, you cannot solve regularized ERM exactly, just as you often can't solve ERM exactly, and so we use numerical optimization. So with ERM, you're just minimizing the empirical risk L of Theta and choosing the Theta that does that. With RERM, because you are adding this term Lambda times r of Theta onto the objective function, the Theta that you get does not minimize the empirical risk. In other words, it's not producing the predictor that best fits the training data; it's producing a predictor that fits the training data worse than the ERM predictor. But it is less sensitive than the ERM predictor, because what you gain is a predictor that makes the regularizer small.
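The weighted-sum objective just described can be written out directly. This is a hedged sketch for the case of a linear predictor with quadratic loss and a ridge (squared-norm) regularizer; the function name is mine, not the lecture's.

```python
import numpy as np

# Regularized ERM objective: L(theta) + lam * r(theta), where L is the mean
# square error on the training data and r is the squared 2-norm of theta.
def rerm_objective(theta, X, y, lam):
    L = np.mean((X @ theta - y) ** 2)   # empirical risk (mean square error)
    r = np.sum(theta ** 2)              # ridge regularizer ||theta||^2
    return L + lam * r
```

Note that setting `lam` to 0 recovers the plain ERM objective, matching the remark that regularized ERM reduces to ERM when Lambda is 0.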
Now, the benefit here is that a predictor that is less sensitive often generalizes better: it makes better predictions on unseen data than the predictor that fits the training data as well as possible. And this is what you see in practice: even though there's a predictor, the ERM predictor, that does the best on the training data, when you take that predictor and try it on unseen data, it's not the best predictor. You get a better predictor on unseen data by finding a predictor which solves the RERM minimization problem, which backs off a little bit from fitting the data as well as possible, and instead compromises and chooses a Theta which is less sensitive. We still have to say: how do we choose these things? How do we choose the regularizer, and how do we choose the regularization parameter Lambda? Ultimately the answer is the same for all of these questions: use validation. You have a performance metric; look at the predictor that you got, try it out on some unseen data, see how well it does, and pick the one that does the best. For Lambda in particular, there's a very specific technique called regularization hyper-parameter search, and that's for choosing Lambda. The way this works is that you choose a set of values of Lambda, say 50 of them between, say, 10 to the minus 5 and 10 to the 5, usually logarithmically spaced. And for each one of those Lambda values, you solve the RERM problem: you minimize L of Theta plus Lambda times r of Theta. Now, for each Lambda value you're going to get a different Theta, a different predictor. So you get a bunch of different Theta values: 50 different Theta values, 50 different Theta vectors. That's called the regularization path. Now, with each of those 50 different Theta values, we have a corresponding predictor, and we can evaluate the performance of those predictors on the test set.
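The hyper-parameter search just described can be sketched as a simple loop. This is a hedged illustration, not the lecture's code: it sweeps logarithmically spaced Lambda values, solves the ridge-regression RERM problem for each one using the closed form the lecture derives later (Theta equal to the inverse of X transpose X plus n Lambda I, times X transpose y), and keeps the Lambda whose predictor has the best validation mean square error. All names are mine.

```python
import numpy as np

# Regularization hyper-parameter search for ridge regression (a sketch).
def best_lambda(X, y, X_val, y_val, lambdas):
    n, d = X.shape
    best_lam, best_err = None, np.inf
    for lam in lambdas:
        # Closed-form RERM solution for quadratic loss + ridge regularizer.
        theta = np.linalg.solve(X.T @ X + n * lam * np.eye(d), X.T @ y)
        err = np.mean((X_val @ theta - y_val) ** 2)   # validation MSE
        if err < best_err:
            best_lam, best_err = lam, err
    return best_lam

lambdas = np.logspace(-5, 5, 50)   # 50 values from 1e-5 to 1e5
```

The 50 solutions computed inside the loop are exactly the regularization path described above.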
And what we do is we choose the value of Lambda that gives the best test performance, and the corresponding predictor is the one we use. Now, there's a particular case for which we can solve ERM and RERM exactly, and those are called least squares and ridge regression. I'm going to review least squares and then tell you about ridge regression. So when we have a square loss and a linear predictor, we can solve the ERM problem explicitly, exactly, analytically. We have a predictor g Theta of x equal to Theta transpose x. We have data consisting of n pairs x_i, y_i. Now, the empirical risk is the average of the loss of the predictor evaluated at each of the data points: the loss evaluated at the prediction value Theta transpose x_i and the true y_i at that point. In our case the loss is just quadratic, so this is Theta transpose x_i minus y_i, squared. And if y is a vector, then this should really be a norm: L of Theta should be 1 on n times the sum from i equals 1 up to n of the norm of Theta transpose x_i minus y_i, squared. Now, we're going to express this in a convenient matrix form as the Frobenius norm of X Theta minus Y squared, divided by n. To do this, we will construct two matrices, X and Y. X is an n by d matrix whose ith row is the ith feature vector x_i transposed, and Y is an n by m matrix whose ith row is the ith target vector y_i transposed. Now, when we look at this expression, X Theta minus Y, that's a matrix. Its first row is simply x_1 transpose Theta minus y_1 transpose. Its second row is x_2 transpose Theta minus y_2 transpose, and so on. Now, the norm squared of the prediction error, the loss at the first data point, is the norm squared of that row, and that's just the sum of the squares of the entries in the row. I'd like to compute the empirical risk. In order to do that, I've got to sum up all of the different norm squareds, all of the different losses, and divide by n.
So that's just the sum of the squares of all of the entries in this matrix divided by n, and that's 1 on n times the Frobenius norm of that matrix squared. And now I've got to choose the Theta that minimizes that quantity. Now, let's do a quick review of least squares. Suppose I had this problem: minimize the norm of Xw minus v squared. That's a least squares problem, and here w and v are vectors. What's the solution to this? Well, we've seen this before in our linear algebra class. The optimal solution is X transpose X inverse X transpose v. We're minimizing over w, and we can get this by expanding the norm and differentiating, for example. Now, suppose I want to solve our more complicated problem, minimizing X Theta minus Y Frobenius norm squared, where Theta is now a matrix. I can do that by writing Theta in terms of its columns, w_1, w_2 up to w_m, and Y in terms of its columns, y_1, y_2 up to y_m. And then the ith column of X Theta minus Y is simply X w_i minus y_i. Now, the Frobenius norm squared of a matrix is the sum of the squares of the Euclidean norms of its columns. So this quantity is the sum over i from 1 up to m of the norm of X w_i minus y_i squared. Now I want to minimize this quantity, and I'm minimizing it by choosing the w_i's. This is a sum with m terms, and the ith term in the sum only depends on w_i. So in order to minimize the sum, I choose w_1 to minimize the first term, w_2 to minimize the second term, and so on. Each one of those is a least squares problem, and so the answer to each individual least squares problem has the form above. And that means that when I stack all these columns next to each other, I find that W is X transpose X inverse X transpose Y. So we can solve a matrix least squares problem in the Frobenius norm by solving m separate least squares problems, and it turns out that the matrix inside each of those least squares problems is the same.
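The claim just made, that the matrix least squares solution equals m separate column-wise least squares solutions, is easy to verify numerically. A hedged sketch with random illustrative data:

```python
import numpy as np

# Check: Theta = (X^T X)^{-1} X^T Y matches solving one least squares
# problem per column of Y.
rng = np.random.default_rng(1)
X = rng.standard_normal((20, 3))   # n = 20 data points, d = 3 features
Y = rng.standard_normal((20, 2))   # m = 2 targets

Theta = np.linalg.solve(X.T @ X, X.T @ Y)   # all columns at once
by_column = np.column_stack(
    [np.linalg.lstsq(X, Y[:, i], rcond=None)[0] for i in range(Y.shape[1])]
)
assert np.allclose(Theta, by_column)
```

As the lecture notes, the matrix X transpose X inverse X transpose is the same for every column, so in practice you factor it once and apply it to all of Y.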
It's simply X transpose X inverse X transpose. So I just need to compute that once, multiply it by Y, and that will give me all of the w's at once, which is the matrix W that I wanted. I guess I called that matrix Theta rather than W, so let's call it Theta. So that gives us the minimizing Theta, X dagger times Y, which is just X transpose X inverse X transpose Y. This is called least squares regression. Now, if we're doing regularized ERM and we've got a regularizer which is the norm squared of Theta, square losses in our loss function, and a linear predictor, then we can solve our RERM exactly as well, and this is called ridge regression. The RERM objective function is just like our ERM objective function with one extra term, the extra term being Lambda times the norm of Theta squared, with the norm being the Frobenius norm. And so the overall objective has two terms, both of which are matrix norms in Theta. Now, I can express this in a convenient way. I can stack up my two different norm expressions to make one norm expression. In order to make sense of that expression, we will notice the following things. First, if I've got two matrices A and B stacked on top of each other and multiplied by Theta, that just works out to be A Theta stacked on top of B Theta. Second, if I'm looking at the Frobenius norm squared of A Theta stacked on B Theta, well, that's just the sum of the squares of the entries of the matrix, so I can break it up very conveniently as the Frobenius norm of A Theta squared plus the Frobenius norm of B Theta squared. And if I use those two facts, I can see that this product is going to be X Theta minus Y in the top block, and the bottom block is going to be root n Lambda times Theta. Then I take the norm squared of both of those blocks separately and end up with my expression for the objective function of regularized ERM. Now, this problem is precisely of the form that we had for ERM.
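The stacking trick just described can be checked numerically. This hedged sketch verifies, for random illustrative data, that the single stacked Frobenius-norm objective equals the sum of the two original terms:

```python
import numpy as np

# Check: stacking X on top of sqrt(n*lam)*I turns the two-term ridge
# objective ||X Theta - Y||_F^2 + n*lam*||Theta||_F^2 into one least
# squares objective in the Frobenius norm.
rng = np.random.default_rng(2)
n, d, m = 20, 3, 2
lam = 0.1
X = rng.standard_normal((n, d))
Y = rng.standard_normal((n, m))
Theta = rng.standard_normal((d, m))   # any Theta; the identity holds for all

A = np.vstack([X, np.sqrt(n * lam) * np.eye(d)])   # stacked matrix
B = np.vstack([Y, np.zeros((d, m))])               # stacked right-hand side
stacked = np.linalg.norm(A @ Theta - B, "fro") ** 2
two_terms = (np.linalg.norm(X @ Theta - Y, "fro") ** 2
             + n * lam * np.linalg.norm(Theta, "fro") ** 2)
assert np.isclose(stacked, two_terms)
```

Because the identity holds for every Theta, minimizing the stacked objective is the same problem as minimizing the original two-term objective.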
It's just got a larger matrix here and a larger matrix here. And remember that in the ERM problem, when I was trying to solve the minimum over Theta of the norm of A Theta minus B Frobenius norm squared, that had an explicit answer, which was Theta equal to A transpose A inverse A transpose B. So here I simply need to compute this matrix, X stacked on top of root n Lambda times the identity, transposed, times itself, inverted, times the same matrix again, times the B matrix, which is Y stacked on 0. And that works out to be this expression here as the optimal Theta. And just to see that: if I work out X stacked on root n Lambda identity, transposed, times X stacked on root n Lambda identity, I get X transpose X plus n Lambda times the identity. Now, when Lambda is greater than zero, it so happens that this inverse always exists, so I don't need the usual assumption that the columns of X are linearly independent. And so we can explicitly solve ridge regression, which is RERM in the case where we've got a quadratic loss, a quadratic regularizer, and a linear predictor. Here's a Julia implementation. You give it a matrix X and a matrix Y, and it will return for you the corresponding Theta matrix. Here's the case where we don't regularize the first row of Theta. The only thing that changes is that we multiply Theta inside the regularization term by E, where E is a matrix which has zero for its first column and a d minus 1 by d minus 1 identity for its remaining columns. And therefore, E times Theta simply picks out the last d minus 1 rows of Theta. As a result, the norm of E Theta squared is precisely the norm that we wanted, the norm of the last d minus 1 rows of Theta. That translates, just as before, into this larger matrix expression. And when I take the product of this matrix transposed times itself, I get the extra term back there, E transpose E.
E transpose E is a square matrix which has a d minus 1 by d minus 1 identity in the lower right block, and in the top left block it just has a 0 instead of a 1. So it's an identity matrix which is just missing one of its 1's. And here's a Julia implementation of that case as well. Notice that when we write in Julia, the code looks very much like the mathematics, and that's a very convenient feature. The only slight difference here is this nice backslash function in Julia, which is a convenient shorthand for the least squares solution we've seen before: A backslash B means A transpose A inverse A transpose B, at least in the case where A is skinny and full rank. Now, let's look at an example of how this works in practice. This is a dataset for 442 individuals, for each of whom 10 different indicators for diabetes have been measured, and those are our 10 components of u. And x will have 11 components, because we will have a constant feature added on. Now, the target variable, the y, is simply a scalar: some measure of diabetes progression over one year in some biological indicator. That's on the vertical axis of these plots. And on the horizontal axis in each one of these plots is a different component of u. So the first one here is age; we can see individuals between age 20 and age 80, and some spread over the resulting diabetes indicators. The second one is sex, and those have been labeled here as one or two, which is a particular choice of embedding for the two possible values in this dataset. We have BMI, body mass index, BP, blood pressure, and then we have s1 to s6, which are particular blood serum levels. So we've got this data, and we're going to try and fit it using ERM and RERM.
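Before we fit the data, here is a rough Python/NumPy analogue of the two Julia implementations mentioned above (the lecture's own Julia code is not reproduced here, so these are sketches of the formulas, with illustrative names):

```python
import numpy as np

# Plain ridge regression: Theta = (X^T X + n*lam*I)^{-1} X^T Y.
def ridge(X, Y, lam):
    n, d = X.shape
    return np.linalg.solve(X.T @ X + n * lam * np.eye(d), X.T @ Y)

# Variant that leaves the first row of Theta (the constant feature's row)
# unregularized. E picks out the last d-1 rows: E @ Theta = Theta[1:, :],
# so E^T E is the identity with its top-left 1 replaced by 0.
def ridge_no_const_reg(X, Y, lam):
    n, d = X.shape
    E = np.hstack([np.zeros((d - 1, 1)), np.eye(d - 1)])   # (d-1) x d
    return np.linalg.solve(X.T @ X + n * lam * E.T @ E, X.T @ Y)
```

As noted above, for Lambda greater than zero the first inverse always exists; in the second variant, only the last d minus 1 rows of Theta are shrunk toward zero.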
We will take the data and split it 80/20: we'll use 80% of it for training, and the remaining 20% as validation data, which we will use to choose the hyper-parameter Lambda. We'll use Lambda values between 10 to the minus 5 and 10 to the 4. Here we look at two plots. The first plot is a plot of the empirical risk at the optimal Theta as we vary Lambda. So remember what the approach is here: we've got Lambda values varying between 10 to the minus 5 and 10 to the 4, and we've got, say, 50 different values. For each one of those Lambda values, we minimize L of Theta plus Lambda times r of Theta. We get a Theta for each one of those Lambda values, and for each one of those Thetas, we can compute the empirical risk on the training dataset. That's this plot right here. Something to notice is that when Lambda is very small, down here, we've got a Theta that is effectively minimizing the empirical risk. And as we increase Lambda, we're starting to trade off, and instead of minimizing just the empirical risk, we're starting to balance in sensitivity. We can also look at the corresponding r values, the regularizer values, and you can see that it goes the opposite way. When Lambda is very small, the minimizer makes no effort to make r of Theta small, and r of Theta is just whatever it happens to be. As we increase Lambda, the minimizer is compelled to minimize r of Theta instead of minimizing L of Theta. When we get over here, we've got r of Theta equal to 0. Remember, that's the sum of the squares of the entries of Theta, which means we've got a predictor which is completely 0. Our predictor is not doing anything; it's completely insensitive to the data. So at this end, when Lambda is very large, we end up with zero predictors, predictors which are constant.
They still have the constant term in there, but the last d minus 1 entries of Theta are 0. And at the other end of the Lambdas, we end up with predictors which minimize the empirical risk. As we increase Lambda, the empirical risk increases, so the fit gets worse, but the sensitivity decreases, so the regularizer gets smaller. Now, for each value of Lambda, we have a Theta. Theta here is predicting a scalar y from 11 components of x. Remember, we've got 10 measured variables and the constant feature. So Theta is just 11 numbers; it's an 11-dimensional vector. So we can plot those 11 numbers as we vary Lambda. And what you can see is that when Lambda is very small, they have some particular values. And then as we move to the right and increase Lambda, the components of Theta start to get smaller. Eventually we end up with all of Theta 0, as we know. The model parameters generally get smaller because the regularized empirical risk minimization is starting to focus on minimizing r of Theta, which is the same as minimizing the norm squared of Theta. People call regularization shrinkage because of this phenomenon. Now, this is the important plot. Here we have two performance plots: we're looking at mean square error on the training data, that's the blue plot, and on the test data, that's the red plot, as we vary Lambda. Remember, we've got 50 different values of Lambda, and for each one of those, we have a predictor. We can measure that predictor's performance in two ways. One is we can measure it on the training set, and the other is we can measure it on the validation set, the test set. Now, we already know what happens on the training set as we increase Lambda: the training error, the empirical risk measured on the training set, increases. But look what happens on the test set as we increase Lambda. Well, yes, we are naturally increasing the training error.
But because the predictor generalizes better, because the predictor is less sensitive, it does better on the test set. At some point, we get past the point where we've traded off too much performance, and we're making a predictor that's very insensitive, but so insensitive that it doesn't bother looking at the data and doesn't bother making a good prediction. This is the benefit; this is why we do regularization, right here in this plot. It's for this dip there, which is where we see the regularized ERM doing better than the unregularized ERM over here, on the validation data, which is unseen data that wasn't used to generate the predictor. And as a result, we might pick a value for Lambda. Now, we could pick a value for Lambda at exactly this minimum. Some people will do that, although actually you're often better off picking a value of Lambda which is a little bit to the right, a little bit more insensitive than the minimizer. As a result, we get a little bit of extra insensitivity. Here we're only testing on one validation dataset. We might like to be a little more sure that it's going to generalize when we see multiple unseen datasets, and so we put a little bit of extra insensitivity in. So we might choose Lambda to be 0.3 or even 1 for this dataset. And here we see regularization has improved the performance, not by a great deal in this example: it went from 0.63 down to 0.58 or something like that. Sometimes the benefits are much more dramatic. That depends on the characteristics of the data, how much data you have, and whether the particular structure of your predictor is prone to overfitting that data. Sometimes regularization is massively important, and an unregularized ERM predictor will not work at all, whereas a regularized predictor can work very well.
Let's summarize. Empirical risk is a function of the parameter Theta that measures the fit on the training dataset. It is often, but not always, the same as the performance metric. ERM chooses Theta to minimize empirical risk on the training dataset. Regularized ERM doesn't. Regularized ERM chooses Theta as a trade-off between two different objectives: the first being small empirical risk, that is, a good fit on the training data, and the second being predictor insensitivity. We choose the loss function and the regularizer function by validation, using our performance metric. We choose Lambda by validation. Now, when we have quadratic loss, quadratic regularization, and a linear predictor, we can find the optimal parameters using least squares. That's called least squares regression when it's unregularized, or ridge regression when it's regularized. In general for ERM, when we don't have quadratic loss, we don't have quadratic regularizers, or we've got complicated predictors, we have to use numerical optimization. We're going to cover this in detail later in the course.
Stanford EE104 Introduction to Machine Learning (2020), Lecture 3: Predictors

Hello, welcome to section 3 of EE104. This is about predictors. So one of the primary tasks in machine learning is data fitting. We have a variable y and a variable x, and we think they're related by some function, say y is equal to f of x; we think y is approximately equal to f of x. Here, we think about x as an independent variable and y as the outcome or the response. We might call it the target, or the label, or the dependent variable. Very often, we have y in R_m and x in R_d, so these are both vectors. Very often, m is 1, and so the outcome is a scalar. So we say the target is related to the independent variable approximately by y is f of x. We're going to be told x, and we'd like to be able to predict y. And we don't know what this function f is. And of course, there may not be such a function f. It may just be that y and x are a bunch of data, related by some other variable that we don't know. Or, if you put an x in at one time you'll get a y, and you put an x in at a different time you'll get a different y; there's a lot of noise, there's a probabilistic relationship between y and x rather than a purely deterministic one. Very often, x is a vector of features rather than the raw underlying data.
So for example, if we have a document, then the corresponding x might be a count of all of the different words in the document, the word-count histogram. If we had patient data, then x might be a list of different patient attributes, a list of test results, perhaps a list of symptoms. If the data consists of customers, then for each customer we'll have an x, which would be the purchase history of that customer. We have a general framework to construct features from the raw input data. The raw input data might be a vector itself. It might be a word or a document. It might be an image, it might be video, it might be audio, or it could consist of multiple such attributes. We're going to call that input data u, and we're going to map it under a function Phi to construct x, which we will call the corresponding feature vector. This function Phi has a name: it's called the embedding, or the feature function, or the feature mapping. Sometimes Phi is very simple, and sometimes it's extremely complicated. And often we will enforce a particular property: since Phi of u is going to be a vector, we'll make sure that the first component of Phi is always 1, so that x_1, the first component of Phi of u, is the constant feature. We'll explain the reason for that in just a second. Similarly, we embed, or construct features for, the output data v, and we will call those y; we will have y is equal to Psi of v. So our data comes in pairs of u_i's and v_i's: the ith data element is a pair u_i, v_i. We map those to a pair x_i, y_i. And we have n data points, x_1 to x_n and y_1 to y_n. Once we've embedded them, once we've constructed features, we no longer need to look directly at u and v. Instead, we can focus our attention on x and y.
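The word-count embedding just described can be sketched directly. This is a hedged illustration: the tiny vocabulary here is my own example, not from the lecture, and the first component is fixed to 1 to match the constant-feature convention above.

```python
from collections import Counter

# An illustrative vocabulary; a real feature map would use a much larger one.
vocab = ["machine", "learning", "data", "model"]

# Word-count feature map phi: document (raw input u) -> feature vector x,
# with the constant feature 1 as the first component.
def phi(document):
    counts = Counter(document.lower().split())
    return [1.0] + [float(counts[w]) for w in vocab]
```

For example, `phi("data model data")` produces the constant 1 followed by the counts of each vocabulary word in the document.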
So we will have n d-dimensional vectors, x_1 through x_n, and n m-dimensional vectors, y_1 through y_n. And so we'll refer to the pair x_i, y_i as the ith data-pair observation. A particularly evocative term is to refer to it as the ith example, from which we wish to learn. And collectively, we call the entire set of x's and y's a dataset. So that's our fundamental data that we're going to use in order to construct some fitted model for the relationship between x and y. We also might have prior knowledge about what f might look like. For example, we might say f is smooth or continuous, which means that if x and x tilde are two vectors that are close to each other, then f of x and f of x tilde should also be close to each other. Another piece of prior knowledge we might have is that y is always non-negative, and so we would like to ensure that property of f holds in our model. We're going to learn from x's and y's, and we're going to see a whole bunch of y values. No matter what y values we see, we would like to ensure that the model our learning algorithm produces has the property that f of any x is always non-negative. And there are many other examples of prior knowledge that one might have. So the thing we're going to construct is called a predictor. It's a model that takes an x and gives you a prediction for what y would be at that x. We denote it by g. It's a function that takes vectors in R_d, x's, and gives us vectors in R_m, y's. Given the feature vector x, the prediction, which we denote by y-hat, is g of x. And the predictor g is chosen on the basis of two things: the data that we've seen and the prior knowledge that we have. And that means that we can construct a prediction, in terms of the raw data, using a new raw data record.
So if you give me a new u that I've never seen before, I can map it under Phi to embed it and give me an x, a feature vector corresponding to that u. Then I can take that x and feed it into my predictor to construct g of x, and that's an estimate for y, a prediction for y. And then I can unembed that by applying the inverse of the feature map Psi to that y-hat, to give me an estimate of v, which we'll call v-hat. That means I can test the performance of my predictor in terms of the original data, rather than in terms of my x's and y's. Sometimes Psi is not invertible, and then there is a slight variation of this formula that we will come to. Of course, you can also evaluate how well this predictor does on the data. We can take the ith data pair, x_i, y_i, feed x_i into g, and get a y-hat, which we call y-hat_i. We would like y-hat_i to be close to y_i; that means the predictor does well on the data. That's a very reasonable thing to require of a predictor. Of course, our real goal is not to have the predictor do well on the data, but to have the predictor do well on potential data that we've not yet seen. When somebody gives us some new x or some new u, they could quite reasonably ask, well, what did you learn from all that earlier data that we gave you? And the answer should be, well, from all that data, I learned that these kinds of x's give rise to these kinds of y's, and let me predict a specific y that would be generated by the x that you just gave me. And when we're working with predictors, we typically don't just say, well, here's the predictor. We typically have a parameterized form for the predictor. So the predictor is a function of two things: it's a function of x, and it's a function of some parameters Theta. So we'd have y-hat is g of x and Theta. Sometimes we write this as a subscript.
So we'll write y-hat is g subscript Theta of x, and that just makes it easy to refer to g_Theta as a function which takes an x and gives us a y. What this does for us is it specifies a form, a structure: allowed predictors, or predictors that we like. We're only going to pick predictors that correspond to a particular Theta. And our job becomes, instead of choosing an arbitrary function that maps x to y, choose a Theta, and then evaluate g of x, Theta. So Theta is a parameter; it's usually a vector. It's the parameter for the prediction model. Often Theta is in some Euclidean space; here we've written R^p. Sometimes it's a matrix; we'll see that. And sometimes it's a list: more than one parameter vector, or more than one parameter matrix. Choosing a particular Theta is called tuning or training or fitting the model, and the learning algorithm is a recipe for choosing Theta given data. So for example, we might have a linear regression model that says that y-hat is Theta_1 x_1 plus Theta_2 x_2, all the way up to Theta_d times x_d. Y-hat is this linear combination of the Thetas, or equivalently a linear combination of the x's. And our job is to pick the Thetas so that this is a good predictor. We've already seen in previous classes on linear algebra that you can fit such a linear regression model using least squares; that's picking Theta to minimize the mean square error. Of course, there are lots of other methods for doing linear regression, for picking the Thetas even in a linear regression model, and we will talk about some of those in this class. I want to talk about a special class of predictors called nearest neighbor predictors. They work as follows. We're given a dataset, x_1 through x_n and y_1 through y_n, and the predictor says the following. You've got some new x and you'd like to predict the corresponding y-hat.
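As a sketch of the least-squares fitting just mentioned, assuming synthetic noiseless data and NumPy (this is not the lecture's code):

```python
import numpy as np

# Fit y_hat = X @ theta by least squares (minimizing the mean square error),
# as described for the linear regression model. The data here is synthetic.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))           # n = 100 examples, d = 3 features
true_theta = np.array([1.0, -2.0, 0.5])  # hypothetical "true" parameters
y = X @ true_theta                       # noiseless targets for this sketch
theta, *_ = np.linalg.lstsq(X, y, rcond=None)
```

With noiseless data, the recovered `theta` matches `true_theta`; with noisy targets it would be the minimizer of the mean square error instead.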
The way you do it is, out of all of the data that you've got, you find the x_i that is nearest to x. Then your prediction for the corresponding y is simply y_i, and that you define to be g of x at that x. This is extremely intuitive. If you've got some new piece of data and you want to make a prediction about what y is at that data point, at that x, why not look for the closest example that matches it and say, well, that's the closest example we've got, so let's predict that the y we get is the y we got in that previous example. It really is a parameterized predictor, of course. Here the parameter is a bit interesting, because the parameter is the full dataset: you can think of Theta as x_1 through x_n and y_1 through y_n, the entire dataset. And we don't have to choose the parameter; the parameter is given to us with the data, it is the data. So training is easy: there isn't any training. There's no computation to be done. All we have to do is keep track of all the data we've seen, and whenever we get a new query point, a new x, we just look for the closest x in the dataset and return the corresponding y. This means that g is going to be a piecewise constant function of x, because when x is closer to x_i than to all the other x's, then g of x is just y_i. We can see that on this plot right here. Let me see if I can highlight some points. So these are data points, and the orange line, which I will highlight here in black, the way it goes about here, is the predictor. It tells us, for example, that if we receive a value of x of 0.2, then we should predict the corresponding value of y, 0.47. So even though we've only got data points at specific points, we've fitted a curve, not a curve really, a piecewise constant function, through those points. Something to observe is that the function has discontinuities. Let's look at some of those discontinuities.
So here's one, here's another, here's another. These discontinuities are exactly halfway between two adjacent data points. As we vary x, when we're here, our closest data point is this red point here. Then as we increase x, we suddenly switch to the closest data point being this data point right here, and as a result we switch our prediction, from predicting the y corresponding to this data point to predicting the y corresponding to this data point. You can do this in more than one dimension; it doesn't have to be a one-dimensional x. One can do this when x is d-dimensional as well. Here's a two-dimensional case. In the two-dimensional case, we have data points which are right here; I'll highlight these blue points. Now, associated with each data point, associated with each x_i, there is a region. So for example, for this data point right here, there's a region, which is this region right here, which is a polyhedron; it's got straight boundaries. And this is the set of x's for which the corresponding closest data point is this one. If I'm here, my closest data point is this one. If I move across this boundary, suddenly I switch to the case where my closest data point becomes that one over there. So each data point has a corresponding region, and those regions are called the Voronoi regions associated with the data. So the function, the predictor, is piecewise constant; it's constant on Voronoi regions. And here's a three-dimensional plot of it. This is x_1, this is x_2, and this is the prediction y, or y-hat. Here again we have data points, and these are the function values on each data point. So right here on this Voronoi region, right here, that Voronoi region right there, that's the Voronoi region corresponding to this data point.
The value at that data point is y_i equals minus 1. And so if we are given a new x which is anywhere in that region, then we will predict that y-hat is also minus 1. So now we've seen the nearest neighbor predictor. We can extend that idea and talk about the k nearest neighbor predictor. The idea here is that you pick a number k, an integer. Then you say, you give me an x, and instead of just looking at the nearest neighbor to x, I'm going to look at the k nearest neighbors, x_i1 through x_ik, among the given data. And what we're going to predict is y-hat, as follows. We take those k nearest neighbors, the nearest x_i's. Each one of them has a corresponding y_i, so we've got k corresponding y_i's, and we construct the average of those k y_i's. That's y-hat. So this is a generalization of the nearest neighbor predictor. It's certainly very useful and very widely used. You might see that it, for example, makes you less sensitive to noise in the data. If the data is very variable, then the k nearest neighbor predictor is going to perform some averaging; it's going to smooth out the data, it's going to remove some of the noise. And there are many ways you might extend this idea. You might use a weighted average to form y-hat, or you might pre-process the data. You might say, well, instead of looking at the data as a collection of data points, I'm going to look at it as a collection of clusters of data points, and then instead of picking the nearest neighbor, I might pick the nearest cluster; there are a bunch of different things you could do. Here's how you compute the k nearest neighbor predictor. This is Julia. We have a function that takes an X, a Y, a little x, and a k. Here this matrix X is an n by d matrix.
The ith row of X is the ith x point, the x variable corresponding to the ith data record. Y here is also a matrix, n by m; it holds the corresponding targets. The ith row of Y is the target variable corresponding to the ith row of X. Little x is the query point: we're given a new x at which we want to predict a y. And k is how many nearest neighbors we're going to consider. So the first line of the code here just gives us n, the number of data points; it returns the number of rows of the matrix X. Next, we take each row, subtract it from the query point, and take the sum of the squares of the entries of that vector. There, we make a list of those sums of squares, one for each data point, for i from 1 to n, and that's dists_sq. The ith element in that list is the squared distance from x_i to x. Then the function sortperm gives us the indices that point to the sorted entries of dists_sq. So the first entry of the nearest neighbor indices is an integer, and if that first entry is a 7, then dists_sq[7] will be the smallest entry in dists_sq. Similarly, the second entry points to the second smallest entry in dists_sq. So if we pick out the first k entries as returned by sortperm, that gives us k numbers that point to the data records closest to x. And then what we do is take the corresponding y's, take their average, and return that as y-hat. If you were to write this in Python or MATLAB, it would look very similar.
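Since the lecture notes that a Python version would look very similar, here is a hedged NumPy sketch of the k-nearest-neighbor predictor just described (the function name `knn_predict` is my choice, not the lecture's):

```python
import numpy as np

def knn_predict(X, Y, x, k):
    """k-nearest-neighbor prediction, mirroring the Julia code described:
    X is n-by-d (rows are data points), Y is n-by-m (rows are targets),
    x is the query point, k is the number of neighbors to average."""
    dists_sq = np.sum((X - x) ** 2, axis=1)  # squared distance to each row
    nearest = np.argsort(dists_sq)[:k]       # indices of the k closest rows
    return Y[nearest].mean(axis=0)           # average of their targets
```

As in the Julia version, `np.argsort` plays the role of `sortperm`: it returns indices that would sort the distance list.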
The nice thing about this code is that it really looks very much like the mathematics that one might write to describe this algorithm. That's a nice feature worth aiming at throughout this class: if you've got a short piece of mathematics that describes something you are trying to code, then the code should correspond line by line to the mathematics. That way it's easy to debug and easy to read. So this is the case where k is 2, on the same data that we had before. And now this is interesting. Let me mark a region, for example this region right here. The regions still have straight-line boundaries; they're still polyhedra. But now, rather than corresponding to a single data point, they correspond to k data points; in this case, k is 2. So for every point in this region right here, its two closest data points are this one and this one. You can see that if I'm in this region and I cross over into this region, then I switch from having this one and this one as my closest data points to instead having this one and this one. So now regions are associated with pairs of data points. There are more regions, because there are more pairs of data points than there are individual data points, although not every pair of data points has a non-empty region. And we can plot the corresponding k nearest neighbors estimate; here it is on the right-hand side. There are more regions, but also the function itself is somewhat flatter, has less variability. That's something we expect, because our k nearest neighbors predictor does some averaging, and so it's going to smooth out the function. Of course, it's still piecewise constant. Another variant of the k nearest neighbors idea is the soft nearest neighbor predictor.
What we do here is take a weighted average of the measured y's, but the weights that we use aren't fixed; they depend on x. Here's a very common choice of weights, right here. These weights are interesting. One thing you might notice is that if I sum the w_i from i equals 1 up to n, the sum is equal to 1; let's write that down. Another thing to notice is that there's a parameter here, Rho. It has the dimensions, or the scaling, of a length. In particular, if you look here in the numerator, I put the norm of x minus x_i squared, divided by Rho squared. So when Rho is very small, 1 over Rho squared is very large, and e to the minus something divided by Rho squared is going to be extremely small, unless the norm of x minus x_i squared is also very small, of similar size to Rho squared. And so what that means is that when Rho is very small, this quantity is going to be near 1 when x is close to x_i, and near 0 almost everywhere else. That means the weighted sum I get right here is going to be the value of y at the nearest data point x_i. So this reverts to the nearest neighbor predictor when Rho is small. When Rho is large, it becomes a little different; let's look at what it becomes. I won't go through this code, because it's very similar to the previous code; it just explicitly follows the mathematics again. So here is the soft nearest neighbors predictor. This is the case where Rho is 2, on the left here, and you can see it's really rather smooth. When Rho is 1, it's becoming a little bit more similar to the nearest neighbor predictor.
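A minimal Python sketch of the soft nearest neighbor predictor with these exponential weights (the name `soft_nn_predict` is hypothetical; this is not the lecture's code):

```python
import numpy as np

def soft_nn_predict(X, Y, x, rho):
    """Soft nearest-neighbor prediction: a weighted average of the y_i's
    with weights w_i proportional to exp(-||x - x_i||^2 / rho^2).
    After normalization the weights sum to one, as noted in the lecture."""
    d_sq = np.sum((X - x) ** 2, axis=1)
    w = np.exp(-d_sq / rho ** 2)
    w = w / w.sum()          # normalize so the weights sum to 1
    return w @ Y             # weighted average of the targets
```

With a tiny `rho` this reverts to the nearest neighbor prediction; with a very large `rho` it approaches the plain average of all the y's.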
Here we can see when Rho is 0.5. Over here we can see something that's really quite close to the nearest neighbor predictor. The nearest neighbor predictor is here on the left, and we can really see that there's a correspondence between those two. So this is a nice way of smoothing out the nearest neighbor predictor: instead of having a discontinuous piecewise constant function, one has a smooth function, and we can make it as smooth as we'd like. Okay. So let's turn now to another class of predictors, the linear predictor. Here g has the form g of x, Theta, is Theta transpose multiplied by x. When m is 1, that means y is a scalar; the parameter Theta is a vector in R^d, and Theta transpose x just returns a scalar. When m is greater than 1, Theta here is a matrix which is d by m, and so Theta transpose times x returns an m-dimensional vector, the same size as y. This is also called a linear regression model. And y-hat, which is our g of x, is Theta_1 times x_1 plus Theta_2 times x_2, all the way up to Theta_d times x_d, which is a linear combination of the Theta_i's with coefficients x_1 through x_d. Here, if m is 1, then Theta_i is just the ith entry of Theta. If m is greater than 1, then Theta_i transpose is the ith row of Theta, and each of the Theta_i's has the same dimension as y. So we can interpret this. Suppose y is a scalar, just to make the explanation simple; you can, of course, extend all of these ideas when y is a vector. Then the linear predictor has the form y-hat, which is g of x, equals Theta_1 times x_1 plus Theta_2 times x_2, all the way up to Theta_d times x_d. And that means that we can interpret the Thetas very directly. So Theta_3 is the amount that the prediction y-hat increases when x_3 increases by 1.
If you happen to be in the case where your x_3 is a Boolean, so it takes value 0 or 1, then Theta_3 tells you how much y-hat is affected when x_3 turns on or turns off. If Theta_7 is 0, well, that tells us something also: it tells us that the prediction does not depend on x_7. And in general, if Theta is small as a vector, it means that the prediction is insensitive to changes in x. And there's a nice little one-line computation here that illustrates that. Here we're looking at the absolute value of the difference between g of x and g of x tilde. We've got two different x's, and g of x is Theta transpose x, and g of x tilde is Theta transpose x tilde. Gathering together the x's, that's equal to Theta transpose times x minus x tilde. And now we can use the Cauchy-Schwarz inequality. Remember what that is? Let me just write it down. It says that for two vectors, say p and q, the absolute value of p transpose q is less than or equal to the norm of the vector p multiplied by the norm of the vector q. And of course, we know that p transpose q is the norm of p multiplied by the norm of q multiplied by the cosine of the angle between p and q, and the cosine is between minus 1 and 1. So that means that this quantity, the absolute value of this scalar product, is less than or equal to the norm of Theta times the norm of x minus x tilde. And so if Theta is very small, if the norm of Theta is small, then the difference between g of x and g of x tilde can't be very big; it's bounded by that small quantity multiplied by the norm of x minus x tilde. So by making Theta small, you make your predictor insensitive to the data. That doesn't sound like a good idea, but as we will see in this class, it turns out to be an extremely important idea that we will visit again and again.
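A quick numerical check of this Cauchy-Schwarz bound on random vectors (a sketch, not part of the lecture):

```python
import numpy as np

# Check the bound |g(x) - g(x_tilde)| <= ||theta|| * ||x - x_tilde||
# for a linear predictor g(x) = theta^T x, on randomly drawn vectors.
rng = np.random.default_rng(1)
theta = rng.normal(size=5)
x, x_tilde = rng.normal(size=5), rng.normal(size=5)
lhs = abs(theta @ x - theta @ x_tilde)
rhs = np.linalg.norm(theta) * np.linalg.norm(x - x_tilde)
# lhs <= rhs always holds, by Cauchy-Schwarz
```

Shrinking `theta` shrinks the right-hand side, which is exactly the insensitivity argument above.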
In many cases, we make the first feature constant: x_1 is 1. That's a choice, since we get to choose how we construct the features, what our maps Phi and Psi are, and we often choose x_1 to be 1, the constant feature. The reason we do that is that then the linear predictor, g of x, which is Theta transpose x, has a term at the beginning of it, Theta_1, a constant term which doesn't depend on x, plus a term that is linear in x. So it's linear plus constant. Theta_1 is called the offset or the constant term; some people will call it the bias in the predictor. It's the prediction when all the features, except for the constant, of course, are 0. Some people will call that an affine predictor: a linear-plus-constant predictor. If it were a purely linear predictor, then when x was 0, we would have to have y-hat be 0 as well. Here, 0, somewhere down here, is the origin, and we can see that our prediction doesn't pass anywhere near the origin. Another very common predictor is the polynomial predictor. Here, what we're doing is taking an appropriate embedding; we're choosing features. By choosing features in this way, we can get a nonlinear function of u, even though we're using a linear predictor based on x. The way you do it is, you say, I'm going to construct the feature map Phi to be a vector. So you give it a u, say u is a scalar, and we construct a vector feature map, which returns 1, u, u squared, u cubed, all the way up to the d minus 1th power of u. People call this Phi the polynomial or power embedding. And it's interesting: it's taking a one-dimensional independent variable u and making a bigger regression problem; it's embedding it in a d-dimensional regression problem. And that means when we construct a linear predictor, we're going to have y-hat is Theta transpose x.
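The power embedding just described can be sketched in Python as follows (the name `poly_embed` is my choice, not the lecture's):

```python
import numpy as np

def poly_embed(u, d):
    """Polynomial (power) embedding of a scalar u:
    Phi(u) = (1, u, u^2, ..., u^(d-1)), a d-dimensional feature vector.
    The first component is the constant feature 1."""
    return np.array([u ** j for j in range(d)])
```

A linear predictor on these features, `theta @ poly_embed(u, d)`, is then linear in x but a polynomial of degree d minus 1 in u.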
But actually, Theta transpose x is Theta_1 plus Theta_2 u plus Theta_3 u squared, all the way up to Theta_d u to the d minus 1. So it's a linear function of x, but a polynomial function of u. Here's an example: I've got a bunch of data points here, and we're fitting a quadratic here, or a cubic right there. There are other nonlinear predictors. Here's one: this is a tree-based predictor. The idea is that we have a predictor which is represented by a binary decision tree. At each vertex which is not a leaf, such as this one or this one, we have a variable, an index into one of the components of x, and associated with that component is a threshold, which is the decision made at that node in the tree. So we start off coming in with x. We say, is the second component of x, x_2, less than or equal to 0.8, or is it greater than 0.8? If it's greater than 0.8, we go down here and we return a prediction y-hat is 0.6. If it's less than or equal to 0.8, we go down here, and then we end up at another non-leaf node. That's associated with another decision, and that decision is: is x_1 less than or equal to 0.2, or is x_1 greater than 0.2? If x_1 is less than or equal to 0.2, then we go down here and we return y-hat is 0.25. If x_1 is greater than 0.2, then we go down here and we return y-hat is 0.5. So the leaves are associated with values, and the non-leaf nodes are associated with conditions. The predictor is a piecewise constant function of x, and we can plot it as a piecewise constant function of x, at least when the tree is small enough. Here's a one-dimensional x. What's the tree for this? Well, we can work it out. Let me switch to a smaller pen. We've got x. First decision: we have a choice, since there's more than one way to write any given decision tree.
Let's look at this point right here first. We'll say, if x is less than 0.2, then we're going to have y-hat is 0.25. Then we've got another decision to make, and this one is going to be here: here, x is greater than 0.2, and the decision is, is x less than 0.8, or is x greater than 0.8? If it's less than 0.8, then we're going to have y-hat equal to 0.5, and if it's greater than 0.8, we're going to have y-hat equal to 0.6. So there's our decision tree corresponding to this predictor right here. And of course, if we were in higher dimensions, then we're slicing up the vector space R^d into rectangular regions, and at each non-leaf vertex of the tree we're making another cut. Let's look at another class of predictors. These are the neural network predictors, a very important, very powerful class of predictors, extremely widely used. So what is a neural network? This is a feed-forward neural network, which is perhaps the most common type of neural network; there are others as well that we will discuss, but this is the most common kind. It's a composition of functions. Here I've got three functions, g_1, g_2, and g_3. I take x and apply g_1 to it; then I take the result and apply g_2, and then I take the result and apply g_3, and that gives me y-hat. You can have any number of functions here; here we've just got three for the sake of example. We write that using this notation: the predictor g is g_3 composed with g_2 composed with g_1, where this symbol with a little circle means function composition. These g_i's are called the layers of the neural network, and here we have three of them. We can split up this function composition into three independent steps. First, we take g_1, apply it to x, and get z_1.
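The one-dimensional tree just worked out, with thresholds 0.2 and 0.8 and leaf values 0.25, 0.5, and 0.6, can be written directly as nested conditionals (a Python sketch; the function name is my choice):

```python
def tree_predict(x):
    """The one-dimensional decision tree from the lecture's example:
    split at 0.2, then at 0.8, with leaf values 0.25, 0.5, and 0.6."""
    if x <= 0.2:
        return 0.25
    elif x <= 0.8:
        return 0.5
    else:
        return 0.6
```

Each `if` corresponds to one non-leaf vertex of the tree, and each `return` to one leaf.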
Then we take g_2, apply it to z_1, and get z_2. And then we take g_3 and apply it to z_2 to get our output, y-hat. These intermediate variables which we've introduced here, z_1 and z_2, are called activations; z_i is the output of the ith layer. To make it all consistent, we might call x here z_0, and we might call y-hat here z_3, and that's a very reasonable notational convenience that we'll find useful. It says here d_0 and d_3; let me tell you what those are, because one of the things that's going on here is that each one of these z's can have a different dimension. We will have z_i in R to the d_i, and that way this notation, d_0 is d and d_3 is m, is simply expressing our usual convention for the dimension of x being d and the dimension of y being m. You can visualize this as a nice graph: x comes in, goes through g_1, gives us z_1, goes through g_2, gives us z_2, goes through g_3, gives us y-hat. We might call this a flow graph for the data. These layer functions have a particular form in neural networks. Here's what they look like: they're the composition of a function h with an affine function. So here, Theta_i transpose times the vector 1, z_{i minus 1}. Let me tell you what this parentheses notation means. If I write x, y, z in parentheses, that's exactly the same as writing a column vector with x, y, z stacked up on top of each other. When I write Theta_i transpose times 1 comma z, what I'm really saying is that I've got the vector 1, z, and I could expand that as Theta_i1 times 1, plus Theta_i2 times z_1, plus all the way up to Theta_i, whatever the dimension is, d plus 1, times z_d. It's a linear combination of the entries of z; and of course I don't need to write the 1 in there, because the first term is just Theta_i1.
So I take a linear combination of the z's with a constant term, an offset, and then I apply a function h to it. Now remember that the z's are vectors and the output can also be a vector. That means Theta_i is a matrix, and its dimension is d_{i minus 1} plus 1, by d_i, because its input is a vector z_{i minus 1} of dimension d_{i minus 1}, and its output is going to have dimension d_i. This is the parameter associated with layer i; these are also called the weights. Now, what I'm doing with the vector produced by this affine function is applying a scalar function h to it. That scalar function h is called the activation function, and it acts element-wise: it's applied to each entry of the vector separately. So what does that mean if I've got a vector z equal to h of w? It means that z_1 is h of w_1, z_2 is h of w_2, and so on. It's really just a shorthand for applying the function separately to each component. There are many different activation functions that are used in neural networks. The most common one, probably by far, is the one called ReLU. Let's take a look at that. The ReLU is usually denoted by x in parentheses with a subscript plus, also written max of x and 0. We can draw a little graph of it here: this is x, this is x plus, and the graph is 0 for negative x and just x for positive x. So if x is negative, x plus is just 0; if x is positive, then x plus is just x. It's called a ReLU, which is short for rectified linear unit. Another one that's quite common is the sigmoid function, e to the x divided by 1 plus e to the x. We will see a few others as well that are commonly used as activation functions. So, suppose one knows the matrices Theta_1 to Theta_n.
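The two activation functions just mentioned can be sketched in Python as follows (the sigmoid e^x / (1 + e^x) is written in the algebraically equivalent form 1 / (1 + e^(-x)), which is numerically safer for large x):

```python
import numpy as np

def relu(w):
    """ReLU activation, applied elementwise: max(w, 0)."""
    return np.maximum(w, 0.0)

def sigmoid(w):
    """Sigmoid activation: e^w / (1 + e^w), i.e. 1 / (1 + e^(-w))."""
    return 1.0 / (1.0 + np.exp(-w))
```

Because NumPy operations are elementwise, applying these to a vector applies h to each entry separately, exactly the convention described above.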
Once one also knows the activation functions, one can construct the functions g_i and compose them to get the predictor; that's a neural network predictor. It's often drawn as a network. So here is a three-layer neural network. Remember what the functions are: our first function, g_1, takes these two variables, x_1 and x_2, and constructs these four variables. So here the dimension d is 2, and d_1 is 4; that's the dimension of the variable at the first layer, the first activation. We have d_2 is 2, and then we have m is 1, which is just d_3 is 1. The function that maps this to this is g_1, the function that maps this to this is g_2, and the function that maps this to this is g_3. It's quite convenient to draw this as a network like this, where we've expanded out each of the variables, because then one can see several things. One can see, for example, that the first component of z_1 depends on x_1 and x_2, as does the second, the third, and the fourth, and that the coefficients that enter into this dependency are the entries of the Thetas. So for example, z_2 component 1 is a linear combination of z_1_1, z_1_2, z_1_3, z_1_4, plus a constant term, passed through the activation function: it's h of a linear combination of these plus a constant term. The coefficient of z_1_1 is going to be Theta_2, entry 2, 1. Here we can also see that z_1_4 is the activation function applied to a linear combination of x_1 and x_2 plus a constant, and the coefficient in that linear combination is Theta_1, entry 2, 4. These kinds of network diagrams are often drawn for neural networks. Very often, the Theta_i's are sparse matrices, so we have lots of zeros, and that would mean that many of these arrows have zero coefficients associated with them, in which case we typically wouldn't draw those arrows.
And so each of the vertices in this graph is a component of an activation, and the edges are entries of the weight matrices. We would call these the individual weights, or the scalar parameters, of the neural network. So here we have three layers, and we have the specific dimensions of this particular neural network. Now, by choosing the parameters, what are we going to choose? We've got to choose Theta_1, Theta_2, and Theta_3. This is the dimension of Theta_1: here it's 3 by 4. The dimension of Theta_2 is 5 by 2, and the dimension of Theta_3 is 3 by 1. So there are quite a few parameters that enter into the Thetas. Of course, the activation function is a choice as well, though that's not a parameter: we're going to pick the predictor from this family of predictors by choosing the Thetas, while the h's are part of the structure, or the form, of the predictor. By choosing these Thetas, one can end up with a huge number of different functions, which all look like crumpled pieces of paper. Here's one, for this particular choice of Theta. You can see right here that it's a piecewise linear function, but it's got complicated joins: there's a complicated join there, and there. One can end up with all sorts of very strange functions by picking the Thetas, and this is simply with a three-layer neural network with quite a small number of variables. Neural networks can have millions of parameters, and then it becomes impossible to visualize or understand what those parameters mean or how they work. So, in summary: a predictor is a function. It takes x's which live in R^d and gives us back y's which live in R^m, and it's meant to predict the outcome y given a particular feature vector x. There are many different types of predictors: nearest neighbors, trees, linear predictors, neural networks, and many others. And most of them are parameterized.
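Putting the pieces together, here is a hedged Python sketch of a forward pass through the example three-layer network with the dimensions above (Theta_1 is 3 by 4, Theta_2 is 5 by 2, Theta_3 is 3 by 1); the random weights are placeholders, not trained values, and the function names are my own.

```python
import numpy as np

def layer(theta, z, h):
    """One layer: h(theta^T (1, z)), with theta of shape (len(z)+1, d_out).
    Prepending 1 supplies the constant/offset term, as in the lecture."""
    return h(theta.T @ np.concatenate(([1.0], z)))

def relu(w):
    return np.maximum(w, 0.0)

# Random placeholder weights with the example network's dimensions:
rng = np.random.default_rng(0)
theta1 = rng.normal(size=(3, 4))   # (d_0 + 1) x d_1, with d_0 = 2, d_1 = 4
theta2 = rng.normal(size=(5, 2))   # (d_1 + 1) x d_2, with d_2 = 2
theta3 = rng.normal(size=(3, 1))   # (d_2 + 1) x d_3, with d_3 = m = 1

def g(x):
    z1 = layer(theta1, x, relu)
    z2 = layer(theta2, z1, relu)
    return layer(theta3, z2, lambda w: w)  # identity on the output layer

y_hat = g(np.array([0.3, -0.7]))
```

The composition g_3 of g_2 of g_1 shows up directly as the three `layer` calls; z_1 has dimension 4, z_2 has dimension 2, and y-hat has dimension 1.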
We'll write them g_Theta of x. g fixes the form, and Theta are parameters that we choose. And we're going to choose them so that we fit the data as well as possible. That's called training the predictor, and we're going to see later how training is done.
Stanford EE104 Introduction to Machine Learning (2020), Lecture 11: Neural Networks.

Welcome to the section on neural networks. So a neural network is a nonlinear predictor, y-hat = g_Theta of x, which has a particular layered form. One way to think about a neural network is that it incorporates aspects of feature engineering into the predictor, so one way of interpreting it is as automatic feature engineering. That's one of the strong reasons why neural networks are so popular: for very complicated classification and regression problems, it's not obvious what the right choice of features is, and having a system which can automatically determine the right choice of features is very powerful. As a consequence, the number of parameters which specify the particular neural network, in other words the dimension of the variable Theta, can be very large. p here could be in the millions or tens or hundreds of millions, and that can make training difficult and time-consuming; neural networks can easily take weeks to train. The other side of the coin, however, is that they can perform very well, especially when you have a lot of data to train them. The resulting predictor that you get out after training is extremely hard to interpret. Essentially, it's a black box: it takes an x and gives you back an estimate y-hat, and how exactly it did that is somewhat mysterious. If you compare that with, in particular, a linear predictor, where y-hat is Theta transpose x, then the meaning of the individual entries of Theta is very clear: it tells us how much increasing certain components of x affects the prediction y-hat. For a neural network, it's extremely difficult to interpret what the particular parameters Theta_i actually mean.
So in a neural network, and here we're talking in particular about feedforward neural networks (there are other types of neural networks which we will see later in the class), a feedforward neural network consists of a composition of functions: y-hat is g_3 composed with g_2 composed with g_1 of x. That's in the case where we have three layers, but we might have any number of such functions, and those functions are called layers. We might write this using composition notation, g = g_3 ∘ g_2 ∘ g_1, where the composition operator is denoted by a circle. Some people would call such a neural network a multi-layer perceptron. We often write the predictor composition in terms of individual variables. We might say z_1 = g_1 of x, then z_2 = g_2 of z_1, and then y-hat = g_3 of z_2 for our three-layer example. Each of these vectors z_i is called the activation or output of layer i, where layer i is just the function g_i, and z_i is a vector. It has a dimension d_i, which depends on the layer, and those layer dimensions need not all be the same; in particular, sometimes they grow and sometimes they shrink, depending on the application. We sometimes write z_0 = x and d_0 = d, the dimension of the features which are embedded from the raw data. And so we still have a basic feature map which constructs x from u, x = Phi of u, but we typically don't do feature engineering with a neural network; instead we allow the layers to provide features which are used by subsequent layers. In the case of our three-layer network, we have z_3 = y-hat, the prediction, and then d_3 is the number of components of y-hat, which will be m. So in particular, that means that the predictor input x and the predictor output y-hat are also considered activations of layers.
And we might visualize this as a simple graph where we have x coming in on the left, passing through the function g_1 to generate z_1, which is passed into g_2 to generate z_2, which is passed into g_3 to generate y-hat, which is just z_3. These layers have a particular form. Each layer is a composition of a scalar function h with an affine function: g_i of z_{i-1} is h composed with Theta_i transpose (1, z_{i-1}). And this (1, z_{i-1}), remember that notation? That notation means that if I have two vectors a and b, (a, b) is the same as stacking them up on top of each other, so Theta transpose (1, z) means Theta transpose applied to the stacked vector (1, z). The matrix Theta_i has dimensions (d_{i-1} + 1) by d_i; it's the parameter for layer i. And the output of Theta transpose (1, z) is a vector. h here is a scalar activation function. It takes real numbers to real numbers, and the way it is applied to a vector, the meaning of that notation, is that it's applied element-wise. So in particular, if z_1 = h of q, for a vector q, then that means the first component of z_1 is h of q_1, the second is h of q_2, and so on. So when we have M layers in our neural network, we have M matrices, Theta_1 through Theta_M, and those are the parameters that we need to choose when training. And we can explicitly divide up the components of the matrix Theta_i as follows. We'll write Theta_i as (Alpha_i; Beta_i), like this. Here, Alpha_i is a row vector: that's Alpha_i_1, Alpha_i_2, all the way up to Alpha_i_d_i, where each of these entries is a scalar. But Beta_i is a matrix, and its entries are the columns Beta_i_1 up to Beta_i_d_i, where each one of these is a vector of dimension d_{i-1}. Now, if we take that notation and then we say, well, we're going to multiply Theta transpose by (1, z), then in particular, for the ith layer, we're going to take Theta_i transpose multiplied by (1, z_{i-1}).
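The layer map just described can be sketched in a few lines of Python with NumPy (a hedged illustration, not the course's own code): the first row of Theta carries the biases, and h is applied element-wise.

```python
import numpy as np

# Sketch of one layer map z_i = h(Theta_i^T @ (1, z_{i-1})), with h = ReLU
# applied element-wise. Theta has shape (d_prev + 1, d_i): the first row is
# the bias row (Alpha), the remaining rows are the weights (Beta).
def layer(theta, z_prev, h=lambda a: np.maximum(a, 0.0)):
    stacked = np.concatenate(([1.0], z_prev))  # the stacked vector (1, z)
    return h(theta.T @ stacked)

theta = np.array([[1.0, -2.0],   # bias row: Alpha_1, Alpha_2
                  [2.0,  0.0],   # weight rows: columns of Beta
                  [0.0,  1.0]])
z = layer(theta, np.array([3.0, 1.0]))
print(z)  # [7. 0.]: h(1 + 2*3 + 0*1) = 7 and h(-2 + 0*3 + 1*1) = 0
```

Stacking layers is then just calling `layer` repeatedly with each Theta_i, which is exactly the composition g_3 ∘ g_2 ∘ g_1.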
That works out to be this vector Alpha plus a vector whose components are each of the form Beta transpose z_{i-1}. So each entry of that vector has a linear part, and then there's the constant part, just as we might expect for an affine function. An affine function in general takes the form c plus D times z, where D is a matrix and c is a vector; here Alpha is our c, and the Beta-transpose terms give us D z. So then the layer map z_i = g_i of z_{i-1} means that we take each of the components of this affine function, Alpha_i_j plus Beta_i_j transpose z_{i-1}, and pass it through the activation function h. And this sort of function right here, which takes a vector z_{i-1} as an input and returns a scalar that is the jth component of z_i, we call a neuron. People often draw those like this: what's coming out here is z_i_j, and coming in are the components z_{i-1}_1, z_{i-1}_2, z_{i-1}_3, and that's a single neuron. A layer has many such neurons, each one of which gives out a different component. This would be z_i_2, this would be z_i_1, and both are fed by the same inputs. And so this would be a layer which has two outputs and three inputs, and it consists of two neurons. Now, the vector of constant terms in the affine function is called the bias of the neuron. And so in this layer, which has two neurons, there are two scalars, Alpha_i_1 and Alpha_i_2; those are the two biases at that layer, and the Betas are the input weights. Now, activation functions: there are two commonly used choices of activation function. They are chosen to be nonlinear. In particular, if they were linear, then g_Theta would just be an affine function of z, which would be a linear predictor for a single layer.
And in particular, if I compose a single layer g_i which is linear with the next layer g_{i+1}, which is also linear, I'm going to end up with a single affine layer, where the affine part of the combined layer is simply the composition of the two affine parts. If I put a nonlinear function in place, then there is a difference: g_1 composed with g_2 is a function which cannot be expressed as a single layer. So the most common activation function is called the ReLU. We've also used this notation, a_+, the positive part of the number a, or the maximum of a and 0. It's this function which we've seen before: it's 0 when the input is negative, and a when the input is positive. The other very common activation function is the sigmoid, e^a over (1 + e^a), which is a smooth function that varies between 0 and 1. At 0, it's exactly a half; as a tends to infinity, it tends to 1; as a tends to minus infinity, it tends to 0. It's a scaled and shifted hyperbolic tangent function. We often draw explicitly the neurons in a neural network as a graph; here's such a graph. Here we have a neural network which has three layers. This is the output, and this vector here is the feature vector x. Then we have z_1, this vector. So this would be x, which is equal to z_0; we map that through g_1 to get z_1. Here we have z_2: we map z_1 through g_2 to get z_2. And then we finally map through g_3 to get y-hat. And we can interpret on this graph each of the edges, the lines that connect vertices within the graph, as having a corresponding parameter. So each one of them has a corresponding entry in a Theta matrix. In particular, the edge labeled Theta_1_2_4 relates the fourth component of z_1 to the first component of z_0, which is x. And so each edge has a Theta associated with it.
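The two activation functions just described are a one-liner each. A small sketch (assuming nothing beyond the definitions given in the lecture) that also checks the stated limiting behavior and the tanh relationship:

```python
import math

# Sketch of the two common activation functions described above.
def relu(a):
    return max(a, 0.0)           # the positive part a_+

def sigmoid(a):
    return math.exp(a) / (1.0 + math.exp(a))

print(relu(-3.0), relu(2.5))     # 0.0 2.5
print(sigmoid(0.0))              # 0.5: exactly one half at zero
print(sigmoid(20.0) > 0.999)     # True: tends to 1 as a -> infinity
print(sigmoid(-20.0) < 0.001)    # True: tends to 0 as a -> -infinity
# The sigmoid is a scaled and shifted hyperbolic tangent:
print(abs(sigmoid(1.3) - (math.tanh(0.65) + 1) / 2) < 1e-12)  # True
```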
But there are also Thetas that live inside the nodes, which provide the biases; it's only the weight terms that are associated with edges of this graph. So each vertex here is a component of an activation, and the edges are the individual weights. Here's an example of the sort of thing you see if you construct a randomly chosen set of parameters: you see a function that looks something like this. Right here we have a three-layer neural network with two variables, x_1 and x_2, at the input. As a result, Theta_1 here has three rows. The first row has the biases; here we have some more biases; here are biases; and the remaining entries are weights. So the output of the first layer, which is the input to the second layer, is a vector with dimension four. Then the output of the subsequent layer has dimension 2, and then we have an output which has dimension 1, which is y-hat. And so the resulting function maps from the two dimensions here to the one dimension at the output. There is some terminology that's very commonly used for neural networks. If you have an M-layer network, then layers 1 to M-1 are called hidden layers, and layer M is called the output layer. Very often, if you're doing regression, you do not use an activation function on the output layer, or more precisely, you use the identity activation function in the output layer, h of a = a. In the case of regression you can see why that is if you look back at the activation functions we typically use. If the output layer used, for example, the sigmoid activation function, then it would only be able to generate output values or predictions between 0 and 1, which of course won't work if we were trying to predict y's which didn't lie in that range. Or if we were using ReLU, then we'd only be able to predict non-negative values of y-hat.
The number of layers M is called the depth of the network, and people refer to networks which have a large M (how large varies, but at least 3 would be typical) as deep learning. Very often, for neural networks used for, say, image classification, we may have 15 or 20 layers, or sometimes many more. Now, when we are doing training with a neural network, we do regularized empirical risk minimization, as we've seen before. We pick the layer parameters, Theta_1 through Theta_M, to minimize the empirical risk with the regularization term added. The empirical risk is, as always, the average loss function evaluated on the training data set, where here g_Theta is the neural network map from input x to output y-hat. Now, the regularization term doesn't regularize the bias parameters, the Alpha_i_j's, the constant terms in each of the layers; it only regularizes the weight terms. This is because we have no need to regularize terms which don't affect the sensitivity of the network. Common regularizers: sum of squares is very common, and l_1 is also very common. In particular, with l_1 we can expect to see sparsification of the neural network, and some of the weights will end up being 0. That allows us to do what's called pruning the neural network: simply removing entries which have zero weight and then re-training. That might correspond to removing entire neurons or just removing particular paths through the network. Typically, when you're using a neural network predictor, you cannot minimize the regularized empirical risk exactly. The special case of a convex loss function, a convex regularizer, and a linear predictor is a very special case in which one can exactly find the optimal solution.
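The "regularize only the weights, not the biases" convention can be made concrete with a small sketch (hypothetical layer representation, not the course's code): each layer is a pair (alpha, beta), and the regularizer sums squares over the betas only.

```python
import numpy as np

# Sketch: a sum-of-squares regularizer over the weight matrices only.
# Each layer is represented here as a pair (alpha, beta); the bias vector
# alpha is left unregularized, as described above, since it doesn't affect
# the sensitivity of the network to its inputs.
def regularizer(layers):
    return sum(np.sum(beta ** 2) for _alpha, beta in layers)

layers = [(np.array([1.0, -5.0]), np.array([[1.0, 2.0], [0.0, 3.0]])),
          (np.array([100.0]),     np.array([[1.0], [1.0]]))]

print(regularizer(layers))  # (1+4+0+9) + (1+1) = 16.0; biases contribute 0
```

Swapping `beta ** 2` for `np.abs(beta)` would give the l_1 regularizer, which tends to drive some weights exactly to 0 and enables pruning.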
For the neural network predictor, even if you have a quadratic loss and a quadratic regularizer, there are no algorithms that will solve the problem exactly. So training algorithms instead find approximately optimal solutions, and these might be locally optimal, or they might be close to locally optimal. We're going to see later what methods are used to do that. These are iterative training methods, and they can take a long time. Here's a particular example. This neural network has, I think, four or five layers. It's just got two-dimensional input and one-dimensional output, and it was trained on roughly 100 data points, some of which you can see in the plot. And you can see that it's done an interesting job of fitting them. Of course, we can't see all the data points, because some of the points are underneath the surface. And it's hard to have any intuition for either the meaning of the parameters or how good a fit this is within the space of possibilities associated with a neural network of this particular size. All we can do to evaluate how good it is is validation. In Julia, this looks like the following. This is a glimpse at the neural network regression function that was used to train the previous example. Some things to notice about it: this here defines the network structure. It says that we have three layers. The first layer, layer 1, has d inputs and 10 outputs, and a ReLU activation. The second layer has 10 inputs and 10 outputs, and also a ReLU activation. And the third layer has 10 inputs and m outputs, where m in this case is 1 and d in this case is 2. And it has identity activation, because we're solving a regression problem. Now, these functions here, the calls to Dense, actually return layer functions.
So when you call Dense(d, 10, relu), it will return for you a function; we could call it g_1. Then we'd get g_2 from the next Dense call, and g_3 from the last one. Then we might say our overall neural network, let's call it g, the predictor, should be g_3 composed with g_2 composed with g_1. Now, in Julia, you can actually type this with the composition circle, and that's totally fine, if you happen to know where to get the Unicode symbol for the circle on your keyboard. If, like me, you don't, then you can just write Chain(g_1, g_2, g_3), and that's exactly what we're doing here. So that's just composing functions. And that means that if we want to apply the resulting predictor to an input, we just take g, which in this case we called model, and compute model of a vector x, and that would be y-hat. So y-hat = model(x) will execute the neural network predictor and return for you that prediction. Now, the parameters, the Thetas, are not called Alpha and Beta in Flux (here we're using the Flux library, so before we run this code we need `using Flux` at the top). The parameters are stored inside the layer functions, and the way one accesses them is the following. Once I've composed model = Chain(g_1, g_2, g_3), I can get back the functions. I might get them back by saying g_1 = model[1]. Even though model is not a vector, it's really a composition of functions, it's been set up inside Julia in such a way that when I access the first component of model, what I get back is the first function, the first layer of my neural network. And then I can access its bias and its weights: its bias is called g_1.b, and its weights are called g_1.W.
And so g_1.b is a vector (we called that vector Alpha on the slides), and g_1.W is a matrix (we called that matrix Beta on the slides). And so you can see here that we have a regularization function which takes the norm squared of the weights of each of the layers and adds those up; that's our regularizer. When we want to construct a prediction, we just call the model function on x, and that returns the prediction. In particular, if I have an x and a y corresponding to a particular record, then I can compute model of x minus y, and that's the error, and I might take the norm squared of that to compute the quadratic loss function. And here's my overall objective that I'd like to minimize. This function train! does the hard work of actually minimizing the cost function over the parameters and over the data. Those are the things it takes: the cost function; the parameters, obtained by a special function in Flux which extracts the parameters out of each of the layers; and the data, which it iterates over. And here we return the model. So we would call nnregression with an X and a Y and a regularization parameter Lambda, and it will return for us a model. We can then compute the corresponding predictions over our dataset. We have a function here called predictall that does that. You give it a model and an X, which is a data matrix, and it iterates over each row of the matrix X, calls that row x, and calls model(x) on it to give us a corresponding y-hat. It then stacks all those up to return for us a matrix, which we might call capital Y-hat: Y-hat = predictall(model, X). And so if we want to compute the RMS error, for example, of our predictions, we call predictall on our model and our training data, compare that with our true training Y target variables, and compute the RMSE.
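For readers not using Julia, a rough Python analogue of the predictall-and-RMSE pattern just described might look like this (the names `predictall` and `rmse` mirror the lecture's Julia code; the stand-in model is hypothetical):

```python
import numpy as np

# Python analogue of the predictall / RMSE pattern described for Flux:
# apply the model to each row of the data matrix X, stack the predictions,
# then compare with the true targets.
def predictall(model, X):
    return np.vstack([model(x) for x in X])

def rmse(Yhat, Y):
    return np.sqrt(np.mean((Yhat - Y) ** 2))

# A stand-in linear "model" just to exercise the functions.
model = lambda x: np.array([x @ np.array([1.0, 2.0])])
X = np.array([[1.0, 0.0], [0.0, 1.0]])
Y = np.array([[1.0], [3.0]])

print(rmse(predictall(model, X), Y))  # predictions [[1],[2]] vs targets [[1],[3]]
```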
And so Julia's Flux library is really quite efficient and clean at allowing us to construct neural networks, and you can see that we can construct neural networks of any size we want and train them. Of course, I haven't told you yet how the training function works; we're going to defer the discussion of exactly how training works until the last few lectures of the class. For now, we're going to use the training function provided with Flux. Now, one way to think about neural networks is that they really have a very similar form to the feature engineering pipeline: we start with an x, we carry out a sequence of transformations or mappings to that x, and eventually we come out with a y-hat. The distinction is that feature engineering mappings are chosen by hand. They typically don't have any parameters, or have very few parameters, and you can interpret what they mean. For example, we can do feature engineering by taking the product of two features to construct a new one, and we know very well that the product of two features is going to be large when both of the individual components are large. In contrast, the neural network mappings have a specific form that we're not choosing by hand; it's just a category of functions that we have, with lots of parameters, and we're not choosing those by hand at all. The training process chooses the parameters, which determine the resulting map. And so one way to think about neural networks is that they're doing data-driven automatic feature engineering. Now, one of the very common ways of using neural networks nowadays is to use what are called pre-trained neural networks. One trains a neural network to predict some particular target variable.
So say, for example, one is trying to distinguish different features in the road from images for a self-driving car, and one has trained a neural network to classify a whole bunch of different road sign types and a bunch of different vehicle types, and bicycles, and construction, and traffic cones, and all the other things one sees on the road. That training process would be done with a very large dataset and take considerable time. And then what you do is you say, well, I've got this neural network, it's very large; we're going to fix the parameters and take the output of the last hidden layer and use those as features to do some other prediction. So after doing all that training, we suddenly realize that we also need to distinguish things we hadn't thought of before, a particular type of road sign or a particular type of traffic obstacle. Well, then we don't retrain the parameters in the neural network that we have. Instead, we retrain only the last layer of the neural network, or we simply take the outputs of the first M-1 layers of our neural network and feed those in as features to some new neural network, or some new predictor of some other form. And this actually works quite well, even if one is training for quite a different task: if one has trained a neural network to do, for example, one type of image classification, it can often be reused for a different type of image classification. These are called pre-trained neural networks. We saw two examples of these in earlier sections, when we looked at VGG16 for images and Word2Vec for English words. Both of those were pre-trained neural networks, and we viewed them there as feature maps. And of course, if we're not training their parameters, that's precisely what they are. So let's summarize. Neural network training needs a lot of data.
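The freeze-and-reuse idea above can be sketched in a few lines (everything here is a toy stand-in: `hidden` plays the role of the frozen first M-1 layers of a pre-trained network, and only the final layer's parameters would be retrained):

```python
import numpy as np

# Sketch of the pre-trained-network idea: freeze the first layers of an
# already-trained network and use their output as features for a new
# predictor. W_frozen stands in for pre-trained weights (hypothetical).
relu = lambda a: np.maximum(a, 0.0)
W_frozen = np.array([[1.0, -1.0],
                     [0.5,  2.0]])

def hidden(x):
    # Frozen feature map: never retrained for the new task.
    return relu(W_frozen @ x)

def new_predictor(x, theta):
    # Only this last layer's theta is chosen for the new task.
    return theta @ hidden(x)

features = hidden(np.array([2.0, 1.0]))
print(features)  # [1. 3.]: the features fed into the new last layer
```

Training then only has to fit `theta`, which is a far smaller (and for a linear last layer, convex) problem than retraining the whole network.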
It can be difficult and it can take a great deal of time, and the resulting neural networks are not interpretable. However, they often work really well, better than anything else for many problems; in particular, for image classification the best neural networks far outstrip any other methods that we have. And one way to think about them is that they're doing automatic feature engineering.
Stanford EE104 Introduction to Machine Learning (2020), Lecture 17: ERM for Probabilistic Classification.

Hello, today we are going to discuss ERM for probabilistic classification. So the key idea is that we have a probabilistic classifier, G_Theta, which depends on a parameter Theta. It takes as input u, and it returns to us a probability distribution over the target set. And we're going to choose the Theta by ERM, or regularized ERM, as we usually do. We want to judge the performance of the probabilistic classifier using the average negative log likelihood on our test dataset. Now, remember what our dataset consists of: it consists of n points, u_1 through u_n and v_1 through v_n. And we're going to produce corresponding distributions, p-hat_1 through p-hat_n, which are our predictions at each of those data points, u_1 through u_n. Then we're going to compare our predictions, p-hat_i, with the true values v_i. So in order to do that, we need a loss function, and the loss function has to be able to compare two things, a p-hat and a v. The p-hat is a distribution on our target set script V, and the v is actually an element of script V. And so we're comparing things that aren't really alike. One thing to notice here is that p-hat is actually a probability distribution, which is a function. So we're feeding into this function l another function, and a v, which is just one of our possible targets. And once we've got such a thing, we are going to look at the average loss, 1 on n times the sum from i = 1 to n of the loss of p-hat_i and v_i, and we're going to choose the Theta to minimize the average loss. If we're doing regularized ERM, then we'll have our regularization function r of Theta, and we'll minimize the average loss plus Lambda times r of Theta. Now, the particular loss of interest is going to be this thing, l_ce.
What it is, is the negative log likelihood of v under the distribution p-hat. Remember that we computed the negative log-likelihood of our entire set of p's and v's, assuming all of the v's were independent. Here we're just looking at that for a single data point: the negative log-likelihood of getting that particular v under a particular distribution p-hat. This is called l_ce; the ce stands for cross entropy, so this is also called the cross-entropy loss. And it's just the negative log of the probability of the outcome v. When you look at this, it's important to realize that it really is a function of two things: it's a function of p-hat, and it's a function of v. And that function is, of course, an evaluation operation composed with a negative log operation: we're taking p-hat and v, and we're evaluating p-hat at the point v, which of course we can do because p-hat is a function. Now, because p-hat is less than or equal to 1, its negative log must be greater than or equal to 0. And the only way you can have the negative log equal to 0 is if p-hat is equal to 1, which means we would be completely certain about the outcome. In other words, the probability of getting that particular v would be 1, and the probability of getting any other v would therefore be 0. So that's when our prediction is that the true v is actually the v we're evaluating at, with 100% probability. And so we want the negative log-likelihood to be small, so that p-hat of v is as close as possible to 1. Now, this cross-entropy loss is a loss function, but it's a loss function which is aimed specifically at probabilistic prediction, and this is a little different from when we did ERM for either deterministic classification or for regression.
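A minimal sketch of the cross-entropy loss as just defined, representing a distribution on a small target set as a dict (the target labels here are made up for illustration):

```python
import math

# Sketch of the cross-entropy loss: l_ce(p_hat, v) = -log p_hat(v),
# where p_hat is a distribution on the target set, here a dict from
# targets to probabilities.
def cross_entropy_loss(p_hat, v):
    return -math.log(p_hat[v])

p_hat = {"cat": 0.7, "dog": 0.2, "bird": 0.1}

print(cross_entropy_loss(p_hat, "cat"))   # small: "cat" is plausible
print(cross_entropy_loss(p_hat, "bird"))  # larger: "bird" is implausible
print(cross_entropy_loss({"cat": 1.0}, "cat"))  # -log(1) = 0: certainty
```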
Because there we had a loss function which was a function of y-hat and y, and it was comparing y-hat, the predicted value, with y, the actual value. And those are both the same kind of quantity: they were both vectors in R^m. Here, this is a little different in two ways. First of all, our prediction isn't a y-hat but a p-hat, so it's a function, a probability distribution. And second, the second argument of the loss function isn't an embedded v; it's actually the raw target v. Of course, the raw target v is simply one of the elements of the original target set script V, and we don't need to embed it into R^m in order to evaluate the loss, so we don't. So we can certainly compute the empirical risk on the entire dataset now that we've got a loss function. We simply compute 1 on n times the sum from i = 1 to n of l_ce of p-hat_i and v_i, where p-hat_i is the prediction at the ith value of u, and v_i is the true value of the target variable in the ith data point. And p-hat_i is itself generated by the predictor: it's G of u_i. One nice thing about this is that this is actually our performance metric. Remember that one of the things we saw in regression and deterministic classification was that we had something we really cared about, such as the Neyman-Pearson loss, but we couldn't actually minimize it, and so we used a proxy loss, a loss function which was a replacement for the true quantity of interest, the true performance metric. Here we don't need to do that, because the average value of the cross-entropy loss is the average negative log-likelihood, which is a performance metric we care about: it is the negative log of the probability of seeing those data points v_i under the joint distribution given by p-hat_1 through p-hat_n.
Now remember, we've seen already that when you have a constant predictor, meaning p-hat_i doesn't depend on i, then you can work out what this quantity is, and it is the cross entropy. So the average cross-entropy loss is the cross entropy when we have a constant predictor. When we're interpreting this, we think about it as a measure of implausibility: the cross-entropy loss of p-hat and v is large when v is implausible under distribution p-hat, and it's small when v is likely under distribution p-hat. People have names for this kind of thing; they might call it surprise or perplexity. Now, we would like to be able to use predictors which generate vectors. We've certainly seen that if we've got a tree, then we can simply label the leaves of the tree with probability distributions, and that's enough: we can simply evaluate the output of that predictor in the loss function. However, if we've got, say, a linear predictor, or certain other types of predictors, then the predictor produces a vector. And of course, we can't just use that vector as a probability distribution: it might not be non-negative, and it might not sum up to 1. So we need a way around that. If we were doing point classification, what we would do is unembed a vector y-hat in R^K by picking the corresponding target to be v_i, where i is the index of the representative Psi_i which is the closest of the representatives to y-hat. And that is great: it maps the vector y-hat in R^K into the target set script V. But that's not quite what we want to do here; we want to unembed and map y-hat into a probability distribution on the target set script V. And there's a way of doing this which is very common, the following unembedding: it's called the logistic map or the soft-argmax function.
What we do is we define p-hat of v_k to be the exponential of y-hat_k divided by the sum of the exponentials of the y-hat_j's. This is a map which, for each k, gives us p-hat of v_k; it maps a vector y-hat in R^K to a probability distribution. And you can see that if you sum p-hat of v_k from k = 1 up to capital K, you get 1. And you can see that because each entry is the ratio of an exponential and a sum of exponentials, it's a non-negative number. So it certainly satisfies the two fundamental requirements for a probability distribution. This map Sigma has lots of names; I guess the community cannot agree on the right way to name it. People call it the logistic map; people call it an activation function, and use it as an activation function in neural networks; people call it the inverse link function, the soft-argmax function, the normalized exponential, or the softmax function. Let's take a look at it slightly more closely. So what does it do? Well, the exponential maps the real line to the positive half of the real line, so what the exponentials are doing is replacing all of the components of y-hat with positive numbers. And then the division is just a normalization: it's arranging for the components of p-hat to sum up to 1. And so we can see that we are getting a probability distribution. You can also see that, for example, if you take y-hat, the predicted output of your predictor, and add a constant to each entry, it doesn't affect p-hat. You can also see that if you increase one of the components, say y-hat_k, then that will increase the probability p-hat of v_k and decrease all the others, because they have to sum up to 1. And so larger y-hats correspond to larger probabilities, but a probability can never equal either 0 or 1.
p-hat of v_k is close to 0 when y-hat_k is much smaller than all the others, and p-hat of v_k is close to 1 when y-hat_k is much larger than all the other components of y-hat. There's one more special case, which is when y-hat is 0. When y-hat is 0, you end up with p-hat of v_k being the uniform distribution, 1 over K. And in fact, if y-hat is a vector with all of its entries equal, you'll also end up with the uniform distribution. So now we can do ERM with logistic unembedding. Let's compare with what we would have done in the deterministic case. In deterministic classification, we take our u's and our v's, we embed them as x's and y's, and then we use ERM to minimize the average over all i of the loss evaluated at g of x_i and y_i. And then we get the resulting predictor by composing with the embedding and the unembedding. So you give me a particular u (the slide should say u here): v-hat is Psi-dagger of g_Theta of Phi of u. I take u, I apply the embedding to get Phi of u, I apply the predictor g_Theta to that, and then I unembed the resulting output of g_Theta to give me a v-hat. And same here, this quantity here should be a u. For probabilistic classification, we don't embed v; we embed u using Phi. And then what we do is minimize the average cross-entropy loss. Here are the two entries in the cross-entropy loss: one of them is simply v, the true v, and the other one is the unembedded value of the output of the predictor. So g_Theta of x_i is going to produce a y-hat, and Sigma of g_Theta of x_i is going to produce a p-hat. And so the resulting predictor is given by Sigma composed with g_Theta, composed with Phi of u. So the role of the unembedding here is a little different: the unembedding lives inside the loss function in probabilistic classification.
And its role is to take those vectors that come out of our linear predictor g_Theta and turn them into probability distributions. Let's take a closer look at our loss function. Suppose that we're unembedding using the logistic map, the logistic unembedding. So we've got a predictor that produces a y-hat, and we pump it through the function Sigma to get a probability distribution p-hat. Now let's look at the cross-entropy loss: say we've got such a p-hat, and let's evaluate the cross-entropy loss at a particular v, say v_k. So what is that? Well, the cross-entropy loss just says evaluate the probability distribution p-hat at the point v_k; it's just the kth entry of that probability distribution. That's just the negative log of the exponential of y-hat_k divided by the sum of the exponentials of all the entries. And that's the cross-entropy loss. We can simplify that slightly: it's minus y-hat_k plus the log of the sum in the denominator. And this is an expression that we've seen before: this is simply the multiclass logistic loss. And so the logistic loss, which we used for deterministic classification, is exactly the same as the cross-entropy loss when you use the logistic unembedding. If we're computing one, we're computing the other, since we're doing exactly the same thing. So when we look at the empirical risk that we are using for logistic regression in the deterministic case, that's exactly the average negative log likelihood. And we can simply use the y-hats that come out of logistic regression (those predictions are vectors), but instead of unembedding them using the nearest-neighbor map, we unembed them using the logistic unembedding, p-hat is Sigma of y-hat. And that way, instead of getting a deterministic classifier, we get a probabilistic classifier. So that's nice.
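The identity just derived, that the cross-entropy loss with the logistic unembedding equals minus y-hat_k plus the log of the sum of exponentials, is easy to check numerically (a hedged sketch in plain Python, not course code):

```python
import math

def softmax(y_hat):
    total = sum(math.exp(v) for v in y_hat)
    return [math.exp(v) / total for v in y_hat]

def cross_entropy(y_hat, k):
    """-log p_hat(v_k), where p_hat = softmax(y_hat)."""
    return -math.log(softmax(y_hat)[k])

def multiclass_logistic_loss(y_hat, k):
    """-y_hat[k] + log(sum_j exp(y_hat[j]))."""
    return -y_hat[k] + math.log(sum(math.exp(v) for v in y_hat))

y_hat = [1.5, -0.3, 0.7]
# the two expressions agree for every k
```

So computing the multiclass logistic loss and computing the cross-entropy of the softmaxed prediction are literally the same calculation.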
It says that, in some sense, what we've been doing when we were doing logistic regression is actually computing a probabilistic classifier all along. And now we know how to compute the probabilities. Now we can interpret this. If we look at what p-hat is, it's Sigma of y-hat. Let's write this in vector notation: exp of y-hat here means the element-wise exponential of the vector y-hat, and 1-transpose is doing the sum for us; 1-transpose exp y-hat is summing over all of the exponentials. And y-hat here is Theta-transpose x when we're doing logistic regression with a linear predictor. Now suppose x_1 is the constant feature, and we're standardizing all the other features. That means that the first row of Theta, Theta_1 transpose, gives y-hat when x_2 through x_d are 0, so all of the non-constant features are taking their mean values, and the corresponding distribution is Sigma of Theta_1. Then Theta_ij gives the effect of x_i on p-hat_j. In particular, if we have a very large component Theta_ij, then x_i is going to have a significant effect on the probability p-hat_j. Now, we can also do exactly the same for Boolean classification. We can do Boolean probabilistic classification and just use the methods we've seen so far. But there's one special wrinkle that people like to use for Boolean classification, and that is to take advantage of the fact that when you've got a probability distribution over two values, you only actually need to specify it by one number rather than two, because the other one has to be 1 minus it. So if the target set is v_1, v_2, we're going to guess p-hat is g of u. Instead of giving both p-hat of v_1 and p-hat of v_2, we can just give one of them, since they must sum up to 1.
And so we might give p-hat of v_2, which is the probability that v is v_2, and just define p-hat of v_1 as 1 minus p-hat of v_2. Now, if we only need one number, one probability, then we only need one component of y-hat. So instead of a predictor which generates a vector in R^K, since we've reduced ourselves from needing two probabilities to needing one probability, we can just pick K equal to 1 (or m equal to 1). Then we've got a scalar produced by our predictor, a y-hat, and we want to generate a probability out of that scalar. That y-hat could be any real number. What do we do? We take 1 over 1 plus e to the minus y-hat. This is called the sigmoid function, and we'll also denote it by Sigma. It gives us a number between 0 and 1: it goes to 0 as y-hat gets large and negative, and it goes to 1 as y-hat becomes large and positive. So we're saying, OK, we take y-hat, we take Sigma of y-hat, that gives us a number between 0 and 1, which we'll use as our prediction probability for target value v_1. And to get the prediction probability of target value v_2, we'll just take 1 minus Sigma of y-hat. So we've mapped a real number to a distribution on V, which is exactly what we needed to do. The inverse map, which takes you from the probabilities back to y-hat, we don't need for classification, but it has a name: it's called the log-odds or the logit function. And here's a plot of the sigmoid. You can see it goes to 0 at minus infinity and to 1 at plus infinity, and it's a half when y is 0. You can also see the symmetry: if I take it and rotate it about this center point right here, I get the same function. That's expressed neatly here: Sigma of minus y-hat is 1 minus Sigma of y-hat.
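The sigmoid and its symmetry can be sketched in a couple of lines (a minimal illustration in plain Python, not course code):

```python
import math

def sigmoid(y_hat):
    """Map a real number into (0, 1): 1 / (1 + exp(-y_hat))."""
    return 1.0 / (1.0 + math.exp(-y_hat))

# sigmoid(0) is exactly a half, and the symmetry
# sigmoid(-y) == 1 - sigmoid(y) holds for every y
```

Evaluating at large negative and large positive arguments shows the two limits: the value gets arbitrarily close to 0 and 1 but never reaches them.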
So let's compute, exactly as we did in the multiclass case, what happens if you use the sigmoid unembedding and evaluate the cross-entropy loss. The cross-entropy loss is the negative log of the probability at v_i. And we know what Sigma of y-hat_i is: the sigmoid gives us two numbers as our distribution, Sigma of y-hat_i for the probability that v is v_1, and 1 minus Sigma of y-hat_i for the probability that v is v_2. So when we're computing the cross-entropy loss, we need to evaluate this probability at v_i. If v_i is v_1, then we get the first case, minus log Sigma of y-hat_i; if v_i is v_2, then we get the second case. Now we go through and do the algebra, and we see that negative log of Sigma is log of 1 plus e to the minus y-hat, and this is also something we've seen before: this is the Boolean logistic loss. So when we were doing Boolean classification in the deterministic case, we were also, behind the scenes, doing probabilistic classification. And the way one gets the probabilities is by taking the y-hat that comes out of the predictor and feeding it into the sigmoid function. And this is the corresponding empirical risk minimization problem. The empirical risk is the average of the cross-entropy loss, and the cross-entropy loss splits into two cases: the cases where v_i is v_1 and the cases where v_i is v_2. We choose Theta to minimize the empirical risk as before. And once we've got Theta, Theta-transpose x is y-hat, and then Sigma of Theta-transpose x is the probability that v is v_1 at x, and 1 minus Sigma of Theta-transpose x is the probability that v is v_2 at x. And this is simply expressing the same thing in a more convenient vector notation.
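The algebra above, that the cross-entropy loss with the sigmoid unembedding equals the Boolean logistic loss, can be verified numerically (a hedged sketch, not course code):

```python
import math

def sigmoid(y):
    return 1.0 / (1.0 + math.exp(-y))

def boolean_cross_entropy(y_hat, v_is_v1):
    """Cross-entropy loss with the sigmoid unembedding:
    p_hat(v_1) = sigmoid(y_hat), p_hat(v_2) = 1 - sigmoid(y_hat)."""
    p1 = sigmoid(y_hat)
    return -math.log(p1) if v_is_v1 else -math.log(1.0 - p1)

# algebraically this equals the Boolean logistic loss:
# log(1 + exp(-y_hat)) when v is v_1, and
# log(1 + exp(+y_hat)) when v is v_2
```

Comparing the two forms at any y-hat shows they agree to machine precision, which is exactly the "behind the scenes" equivalence the lecture describes.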
Let's look at a couple of examples. Here we have three different two-class problems, and we've done empirical risk minimization. We've plotted the data points, the red and the blue data points; there's no test or train split here, we're just using one dataset, and there's no regularization. And we've plotted here, in the shading, the corresponding probability distribution, Sigma of Theta-transpose x. You can see that it does what we expect it to do. In the middle, along this line, there's uncertainty and the probability is a half. As we move this way, the probability of red becomes 1. As we move this way, the probability of red becomes 0, or equivalently, the probability of blue becomes 1. You can also see that in the example in the middle, we've got data points where the blue and the red overlap considerably. As a result, the distribution goes quite smoothly and slowly between being quite certain over here that it's red and being quite certain over here that it's blue, with probabilities somewhere between 0 and 1 in this middle region. There's this wide band of uncertainty where the predictor cannot make a very definite prediction as to whether the target variable is going to be red or blue. Here's a case where the data is much more separated; there is a narrow band of uncertainty, and then very definite predictions of blue and very definite predictions of red. And this is the advantage of a probabilistic predictor over a deterministic predictor: when the probabilistic predictor gives you a prediction probability which is close to a half, you can say, "Well, we're in some region where we can't make very certain predictions"; we're on the boundary, and the reality is it could be either a blue or a red point. You can also see this in the vector Theta. Here, the norm of Theta is rather small, and here, the norm of Theta is much larger.
When the norm of Theta is large, Theta-transpose x varies rapidly with x, so Sigma of Theta-transpose x varies rapidly with x, and the probability changes quickly as we move in this direction. Over here, because Theta has a small norm, the probability changes slowly as we move in the corresponding direction. Here's a three-class case; this is the Iris dataset. Remember what we had in this Iris dataset: when we look at these particular two components of the data, we find that the red points are well separated from the greens and the blues, but the greens and the blues overlap considerably, and it's hard to tell a green from a blue. In the left-hand plot, we're seeing multiclass logistic regression where we're unembedding using the nearest-neighbor unembedding; this is deterministic classification. In the right-hand plot, we're unembedding using the logistic map, Sigma of Theta-transpose x. And we're getting out of that a three-dimensional probability vector which gives us the probability of red, the probability of green, and the probability of blue. What's showing up here is the level of certainty of prediction. Over here, red is really very certain. Then when we transition across this line, we end up in a region where there's significant uncertainty. So over here, our prediction is 1, 0, 0, or close to that, and over here, our prediction is 0, a half, a half, and there's much more uncertainty: we know it's not red, but we can't quite tell whether it's green or whether it's blue. So let's summarize. We use the average log likelihood on the test data to judge our probabilistic classifier. This is our performance metric, and it happens to equal the empirical risk when we're using the cross-entropy loss. So there's a loss function which exactly equals our preferred performance metric.
And if we're using a linear predictor, then we can unembed its prediction into a distribution using the logistic unembedding, and that gives us probability distributions. The deterministic methods that we've seen for classification, using either the Boolean logistic loss or the multiclass logistic loss with the one-hot embedding, involve exactly the same ERM optimization that we do for probabilistic classification. But to construct the probabilities, instead of unembedding using the nearest-neighbor map to get the deterministic prediction, we unembed using Sigma to get the probabilistic prediction.
Stanford EE104, Introduction to Machine Learning (2020), Lecture 10: Non-quadratic Regularizers

Hello, and welcome to the section on non-quadratic regularizers. So remember the idea of regularization. We want to choose a Theta which both minimizes the empirical risk and also makes the predictor not too sensitive: if I've got an x near an x-tilde, then we'd like g_Theta of x to also be close to g_Theta of x-tilde. And the reason for this reduction in sensitivity is that if you make the predictor too sensitive, then you end up with something that doesn't generalize well. So this is a way of forcing the predictor to be insensitive to the data, and that makes the predictor generalize better; it's a way of preventing overfitting. What we do in order to achieve this is use a regularizer, a function R, which is a real-valued function of the parameters Theta, and which measures the sensitivity of g_Theta, so that in particular when Theta is large, the regularizer function is also large. By making the regularizer small, we'll make Theta small, and thereby we'll make the sensitivity of g_Theta small. There's another way of thinking about this, which is very common in the statistical literature, in the statistical viewpoint. And that is that the regularizer is encoding some prior information we have about Theta, specifically that the regularizer R of Theta is actually small. So this is a way of saying, well, we believe that the Theta that generalizes, the Theta that corresponds to the true model underlying the data, has a small regularizer value. And so we're going to enforce that: in our learning algorithm, we only look at Thetas for which R of Theta is small.
And that's a completely different way of looking at the purpose of the regularizer, but it's equally valid. In both cases, you want both the empirical risk L of Theta and the regularizer R of Theta to be small. So in regularized empirical risk minimization, we choose Theta to minimize the empirical risk L of Theta plus some positive constant Lambda multiplied by R of Theta, the regularizer. Remember, Lambda here is called a regularization hyper-parameter, with which we can trade off L of Theta against R of Theta. And we choose it by validating against data in a separate set called a test set. The trick of all of this, of course, is that it actually works. It works in the sense that by enforcing regularization, we end up with worse performance on the training set but better performance on the test set, and it's test set performance that we actually care about. Now, when you're constructing regularizers: we've seen so far the 2-norm used as a regularizer in ridge regression. A very common format for regularizers is to have a penalty function q, which maps the real numbers to the real numbers, and to take the regularizer R of Theta to be q of Theta_1 plus q of Theta_2, all the way up to q of Theta_p, over all of our parameters. So we penalize each of the components of Theta separately and add up the corresponding penalties. Normally we choose these penalty functions q to be non-negative, and zero only when Theta_i is 0. So q of Theta_i is expressing our displeasure in choosing the predictor coefficient Theta_i. In particular, by defining it so that it increases with the magnitude of Theta_i, it expresses the fact that we prefer small Theta_i's over large Theta_i's. And we've seen the case where q of Theta_i is just Theta_i squared.
That's the sum-of-squares regularizer used in ridge regression; it's also called a quadratic regularizer, the Tikhonov regularizer, or the l_2 regularizer. Its penalty function is the square function, q_sqr of a equals a squared. And with that penalty function, the regularizer is just the norm of Theta squared, in particular the 2-norm of Theta squared. Another very common regularizer is the sum-absolute function, where the penalty function is q_abs, the absolute value function. That way, the regularizer R of Theta is the sum of the absolute values of the components of Theta, which is called the 1-norm of the vector Theta. We'll call that sum-absolute regularization, or l_1 regularization, or lasso regularization. Another one we might see is the non-negative regularizer, where the penalty function is 0 when a is greater than or equal to 0 and infinity when a is less than 0. That's a very hard penalty: it says that the only Thetas we will accept are Thetas for which all of the components are greater than or equal to 0. In other words, we are enforcing the fact that the predictor coefficients have to be non-negative. For any minimizer of L of Theta plus R of Theta, where R has this form, the resulting Theta must be non-negative. Now, let's look at this in the context of sensitivity. Suppose we've got a linear predictor, g_Theta of x, which is Theta-transpose x, and here we're predicting a scalar y. And let's suppose that the feature vector x changes to x-tilde, which is x plus some Delta. Delta here is the perturbation, or change, in x. We'll assume for now that any Delta is possible, but we're going to look at a set of perturbations, capital Delta, and we're only going to allow perturbations little Delta that live in this capital Delta set.
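The three penalty-style regularizers just described can be written down directly (a sketch in plain Python; the function names are mine, not the course's):

```python
def ridge_reg(theta):
    """Sum-of-squares (l_2 squared, Tikhonov) regularizer."""
    return sum(t * t for t in theta)

def lasso_reg(theta):
    """Sum-absolute (l_1) regularizer."""
    return sum(abs(t) for t in theta)

def nonneg_reg(theta):
    """Non-negative regularizer: 0 if every component is >= 0, else +inf."""
    return 0.0 if all(t >= 0 for t in theta) else float("inf")

theta = [1.0, -2.0, 0.5]
# ridge_reg(theta) is 5.25, lasso_reg(theta) is 3.5,
# and nonneg_reg(theta) is infinite because of the -2.0 entry
```

The infinite penalty in `nonneg_reg` is the hard constraint: any Theta with a negative component can never minimize L of Theta plus Lambda times R of Theta.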
We'll call it the feature perturbation set. And we can say, well, suppose Theta is a predictor parameter and x now changes to x plus Delta; what happens to our prediction? In that case, the prediction becomes Theta-transpose multiplied by x plus Delta. So the change in the prediction is Theta-transpose x-tilde minus Theta-transpose x, which is just Theta-transpose Delta. If we look at the absolute value of that quantity, the absolute value of Theta-transpose Delta, that's the magnitude of the change in prediction. And we can ask ourselves, how big can this be when we allow Delta to range over the set capital Delta? We can look at the worst-case sensitivity: the maximum over all Deltas in capital Delta of the absolute value of Theta-transpose Delta. This is a measure of sensitivity, right? For a specific set capital Delta, this quantity tells us how much the prediction can change in the worst case. So let's look at the case of l_2 perturbations. In other words, we take the set capital Delta to be the set of all perturbations little Delta with 2-norm less than or equal to Epsilon, where Epsilon is some number. This is a ball, a sphere in Delta space; so the set of Deltas in capital Delta is a sphere, and this is called an l_2 ball. It means that the feature vector x can change to any x-tilde within distance Epsilon, where we're measuring distance with the 2-norm. Now, as we've seen before, the Cauchy-Schwarz inequality tells us that the absolute value of Theta-transpose Delta is less than or equal to the 2-norm of Theta multiplied by the 2-norm of Delta. And the 2-norm of Delta has to be less than or equal to Epsilon for all Deltas in our allowed set.
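Cauchy-Schwarz holds with equality when Delta is aligned with Theta, so the bound is attained at Delta equal to Epsilon times Theta over the 2-norm of Theta. A quick numerical sketch (my own check in plain Python, including a random-sampling sanity test that is not from the lecture):

```python
import math
import random

theta = [1.0, -2.0, 0.5]
eps = 0.1
norm2 = math.sqrt(sum(t * t for t in theta))

# the perturbation that attains the Cauchy-Schwarz bound
delta_star = [eps * t / norm2 for t in theta]
worst = sum(t * d for t, d in zip(theta, delta_star))  # equals eps * norm2

# no random delta in the l_2 ball of radius eps does better
random.seed(0)
for _ in range(1000):
    d = [random.gauss(0.0, 1.0) for _ in theta]
    scale = eps / math.sqrt(sum(x * x for x in d))
    d = [scale * x for x in d]  # a point on the sphere of radius eps
    assert abs(sum(t * x for t, x in zip(theta, d))) <= worst + 1e-12
```

The quantity `worst` is exactly Epsilon times the 2-norm of Theta, which is the worst-case sensitivity for this perturbation class.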
That means that the absolute value of Theta-transpose Delta is less than or equal to Epsilon times the 2-norm of Theta. So in particular, the worst-case sensitivity is Epsilon times the 2-norm of Theta. And of all possible Deltas we're allowed to choose, the one that maximizes the change in the prediction is Epsilon divided by the 2-norm of Theta, times Theta. In other words, we take Theta, normalize that vector, and multiply by Epsilon. So this is a justification for sum-of-squares regularization. If we are concerned about minimizing the worst-case sensitivity to changes in x that lie in an l_2 ball, then the sum-of-squares regularizer measures exactly that: it measures how much the prediction can change when we know that x can only change by an amount Epsilon in distance. That's a way of interpreting the purpose of l_2 regularization: what you're doing is minimizing the worst-case response to a particular class of perturbations. Now, if we look at the l-infinity perturbation class instead, we'll get a different appropriate choice of regularizer. The l-infinity class of perturbations is the allowed set of Deltas, allowed changes to x, for which each of the components Delta_i has absolute value less than or equal to Epsilon. People call that an l-infinity ball, but of course really it's just a cube, what you might call a hypercube in d dimensions. So capital Delta here is a cube, and we're saying that you can take an x and perturb it by a vector in this cube, where the widths of the sides of the cube are 2 Epsilon. The reason it's called an l-infinity ball is that it has exactly the same form as the usual ball: the norm of Delta less than or equal to Epsilon is the defining characteristic of this ball.
But here the norm is a different norm; instead of being the 2-norm, it's the infinity norm. The infinity norm of a vector Delta is the maximum of the absolute values of the components of Delta. So this is a somewhat roundabout way of defining a cube in terms of a norm. It means that any component of the feature vector can change by up to Epsilon. Instead of saying that changes to x live in a sphere, where if you change a lot in one direction then you can't change very much in any other direction, here we're saying you can change all of the components separately, each by up to Epsilon. And with that kind of model, how big can the absolute value of Theta-transpose Delta be? Well, there's a way of maximizing the absolute value of Theta-transpose Delta when Delta is required to be in a cube, and the idea is that you choose your Delta to be Epsilon times the sign of Theta. So if Theta_i is positive, then you choose Delta_i to be Epsilon, and if Theta_i is negative, then you choose Delta_i to be minus Epsilon. If you do that, then the change in the prediction, Theta-transpose Delta, becomes Theta-transpose times sign of Theta, where sign of Theta is applied element-wise to the vector Theta, all multiplied by Epsilon. And that becomes Epsilon times the sum of the absolute values of the components of Theta, which is Epsilon times the 1-norm of Theta. In other words, if x is allowed to change every component by up to Epsilon, the worst-case change you're going to have in Theta-transpose x is Epsilon times the sum of the absolute values of the Theta_i's.
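The sign-of-Theta construction can be checked directly (a small sketch in plain Python, not course code):

```python
def sign(t):
    """Element-wise sign: +1, -1, or 0."""
    return 1.0 if t > 0 else (-1.0 if t < 0 else 0.0)

theta = [1.0, -2.0, 0.5]
eps = 0.1

# worst-case perturbation in the l-infinity ball: eps times sign(theta)
delta_star = [eps * sign(t) for t in theta]
change = sum(t * d for t, d in zip(theta, delta_star))
# change equals eps times the 1-norm of theta: 0.1 * (1 + 2 + 0.5) = 0.35
```

Each term Theta_i times Delta_i comes out as Epsilon times the absolute value of Theta_i, so the sum is Epsilon times the 1-norm, matching the derivation above.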
And if you're concerned about those kinds of perturbations happening to x, then the regularizer you should choose is the sum-absolute regularizer. So R of Theta should be the 1-norm of Theta, the sum of the absolute values of the Theta_i's. Now, this is all well and good; this is an analysis of what it means to use these particular regularizers. But it's important not to take it too far or too seriously. It certainly gives us a way of interpreting the meaning of choosing these particular regularizers, but it doesn't give us a way of choosing which one we should apply in a particular machine learning problem. There are two ways to do that. One is that we gain experience with a wide range of machine learning problems, and we see that for some types of problems the 2-norm regularizer tends to do better, and for other types the 1-norm regularizer tends to do better; we'll have something to say along those lines in this section. The other way is even more pragmatic, and that is to validate. You want to know which one's better? Try both of them, compute the empirical risk on your test set, and see which one did better. So we now have two types of regularization that we're going to focus on. One is 2-norm regularization: if we're using the square loss, which is l of y-hat and y equal to the square of y-hat minus y (when y is scalar), and we choose Theta to minimize L of Theta plus Lambda times the 2-norm of Theta squared, that's called ridge regression. The other uses the 1-norm of Theta as the regularizer; then we minimize the empirical risk plus Lambda times the 1-norm of Theta, and that's called lasso regression.
Lasso was invented at Stanford by Robert Tibshirani in 1994, and lasso regression is widely used in advanced machine learning. We saw for ridge regression that even though it looks like an extension of least squares, the problem actually reduces to a least squares problem, and we can apply the least squares formula to solve explicitly for the optimal Theta. Lasso regression doesn't have a formula: there is no analytical expression, no algebraic expression, for the Theta that minimizes the regularized empirical risk when the regularizer is the 1-norm. When the predictor is linear and we're using square loss, both ridge regression and lasso regression can be efficiently computed. For ridge regression, there's a formula; for lasso regression, we have to use numerical optimization, but these numerical optimization methods are extremely efficient and always converge to the global optimal solution because the problem is convex. Now, suppose we have a constant feature, so x_1 is 1, and the predictor coefficient Theta_1 is the offset (again, when the predictor is linear). In that case, g_Theta of x is Theta_1 plus Theta_2 x_2 and so on. Since x_1 doesn't change even when x changes, we do not consider perturbations to x_1; Delta_1 is always taken to be 0. As a result, Theta_1 does not contribute to predictor sensitivity, and so we do not regularize the associated coefficient in either case. We modify both regularizers to only include the terms from Theta_2 to Theta_d: we take either the 2-norm or the 1-norm of Theta_2 through Theta_d. Now, there's a specific property of certain regularizers which is very useful, and that's to do with sparsity. Suppose I have a linear predictor, g_Theta of x is Theta-transpose x, where Theta is sparse; sparse here means that many of the entries of Theta are 0.
That is, the number of non-zero components of Theta is small compared to the length of Theta. As a result, the prediction Theta-transpose x doesn't depend on some of the features: in particular, it doesn't depend on those features x_i for which Theta_i is 0. It means that if I've got a predictor that only uses some features, rather than thinking about it as g_Theta of all of x, I can think about it as g_Theta of just those particular components of x for which Theta_i is non-zero. This can have practical benefits. In particular, it can make the predictor simpler to interpret. Instead of a prediction of somebody's illness based on 1,000 different measured properties and diagnostic tests, we might have a prediction which actually only depends on a few, and that can be much more useful. It's also much cheaper: if I decide I'm going to try to diagnose somebody's health using such an algorithm, and I only have to run four or five blood tests rather than 50 or 60, that's a huge savings. So the predictor is simpler to interpret, it's cheaper to actually execute, and you can actually get better performance. This happens when some of the components x_i, some of the regressors, are actually irrelevant. Imagine we're in a situation where we've got y and we've got x, and some of the components of x actually don't affect y at all. We might be trying to fit house price, and we've got components of x such as lot area, lot size, and number of bedrooms, but we've also got components of x, such as the current weather in Barcelona, which may be totally irrelevant to the house price in Mountain View. But even if it's irrelevant, our learning algorithm may assign that regressor, that component, a non-zero Theta, because it achieves slightly better performance on the training set with a non-zero Theta than it does with a zero Theta.
And that's totally reasonable, and it's exactly what will typically happen. The learning algorithm can't tell the difference between components of x which are just noise and components which are actually meaningful; if choosing non-zero components makes the empirical risk lower, that's what it will do. But of course, it's total nonsense from a practical perspective: we've got a predictor that we know is fitting the wrong thing. So if we could somehow remove those irrelevant regressors, then we might not achieve better training loss (which we won't), but we might achieve better test loss. And so if we can somehow induce sparsity in a linear predictor, it may enable us to remove those components of Theta which don't really matter but just happen to be correlated the right way with the training data. This idea of choosing the sparsity pattern of Theta is called feature selection, and there are several different ways of carrying it out. One in particular is to use l_1 regularization. It is an important property of l_1 regularization that it leads to sparse coefficient vectors. In other words, if we use R of Theta equal to the 1-norm of Theta and we minimize the empirical risk plus Lambda times the 1-norm of Theta, then as a consequence of the structure of that optimization problem, we will tend to produce Thetas that are sparse. One explanation for this: with a square penalty, once you've made Theta small, Theta squared is really very small, and so the incentive for the sum-of-squares regularizer to make a coefficient smaller decreases once Theta has already become small. For the absolute penalty, that doesn't happen: once Theta becomes small, Theta squared may become very small, but the absolute value of Theta doesn't become disproportionately small; it has the same scale as Theta.
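This rough explanation can be made concrete in the simplest possible setting: a single scalar coefficient with a squared data-fit term. The closed forms below are the standard ridge shrinkage and the lasso soft-thresholding operator (a hedged sketch of a textbook calculation, not course code):

```python
def ridge_scalar(a, lam):
    """argmin over theta of (theta - a)**2 + lam * theta**2."""
    return a / (1.0 + lam)

def lasso_scalar(a, lam):
    """argmin over theta of (theta - a)**2 + lam * abs(theta),
    the soft-thresholding operator."""
    if a > lam / 2.0:
        return a - lam / 2.0
    if a < -lam / 2.0:
        return a + lam / 2.0
    return 0.0

# ridge shrinks a small coefficient but never zeroes it;
# lasso sets it exactly to zero once abs(a) falls below lam / 2
```

For a = 0.3 and lam = 1.0, ridge returns 0.15 (shrunk but non-zero), while lasso returns exactly 0.0; that is the sparsifying behavior of the absolute penalty in miniature.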
And so the incentive to make Theta small continues until Theta is actually 0. Of course, that's a very rough explanation. One can do a more sophisticated mathematical analysis to show that, in fact, using the 1-norm of Theta as a regularizer produces sparse Thetas. We're not going to do that in this class, but we will see it in practice. So here's an example. On the left-hand side, we have ridge regression, and on the right-hand side, we have lasso regression. The left-hand two plots are the usual regularization path plots that we make. Here we have the test loss, which increases as a function of Lambda, and we have the training loss. And in this case, we see that the training loss also increases with Lambda. We might sometimes get a small dip in the test loss. We also see in the bottom plot the usual shrinkage: we see the magnitudes of the various components of Theta for a particular value of Lambda, and we see that those numbers get smaller as we increase Lambda. Now, on the right-hand side, we have the same plots, but for lasso, when we're using l_1 regularization. And here we can see very similar training loss. One noticeable feature is that there's a sharp corner there, and that's a characteristic feature of lasso regression. We also see that the test loss has a much more marked dip, with a significantly smaller test loss than in the l_2 regularized case. Now, why is this? The reason is that the lasso regression is eliminating the irrelevant features. Let's look at the regularization path for Theta. What you can see is that as we increase Lambda, the components of Theta tend to 0. And they tend to 0, and hit 0 exactly, and then stay there. And they hit 0 exactly at a particular value of Lambda.
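This rough explanation can be made concrete in one dimension. The following sketch is illustrative only, not from the lecture: for a single coefficient fit to a target value a, the ridge and lasso minimizers have closed forms, and only the absolute penalty drives the coefficient to exactly zero once Lambda is large enough.

```python
# Minimal one-dimensional sketch (a hypothetical example, not the lecture's
# code): compare the minimizers of (theta - a)^2 / 2 + lam * R(theta) for the
# square and absolute regularizers.

def ridge_scalar(a, lam):
    # Minimize (theta - a)^2 / 2 + lam * theta^2  ->  theta = a / (1 + 2*lam).
    # The coefficient shrinks toward zero but never reaches it exactly.
    return a / (1 + 2 * lam)

def lasso_scalar(a, lam):
    # Minimize (theta - a)^2 / 2 + lam * |theta|  ->  soft-thresholding:
    # theta = sign(a) * max(|a| - lam, 0); it hits zero exactly once lam >= |a|.
    if a > lam:
        return a - lam
    if a < -lam:
        return a + lam
    return 0.0

for lam in [0.1, 0.5, 1.0]:
    print(lam, ridge_scalar(0.8, lam), lasso_scalar(0.8, lam))
```

For a = 0.8 the lasso coefficient reaches zero exactly at Lambda = 0.8, while the ridge coefficient only keeps shrinking; this is the one-dimensional version of the components hitting zero along the regularization path.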
And each one of them will hit zero one after the other, and eventually we end up with an exactly zero predictor, which will be, in this case, for Lambda a little bit larger than 1. For Lambda a little bit smaller than 1, we have a predictor that consists of two non-zero components of Theta. And back here we have three, and once we're back to here, we have maybe 20 non-zero components of Theta. And this sudden hitting of zero exactly is what causes the sharp corner in the loss path as well. And so the lasso regression is selecting features for us. It's selecting those features which are relevant to the training problem, and it's removing those features which are irrelevant, and it's removing them exactly. Now, because the features that are removed are irrelevant, they are simply noise; fitting them to the training data may improve training loss, but it will inherently make the test loss worse. And that's why we see such a distinction between the lasso test loss and the ridge test loss. Here are the results. On the top, we have ridge regression with the square regularizer, and on the bottom plot, we have lasso regularization with the absolute regularizer. On the top, we can see a plot of the components of Theta, sorted: first we take the components of Theta, we take the absolute values, and then we plot the sorted absolute values. And you can see that for the Tikhonov regularization, all 200 of the components of Theta are non-zero. And as we go down the list, they decay, but rather slowly. Every one of those x's is being used in some way or another to predict y. For lasso, at this particular optimal Lambda, only the first 35 components of Theta are non-zero; the other 165 components are exactly zero. And so we have a predictor which doesn't use in any way 165 components of x, which are the irrelevant components of x.
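This feature-selection effect can be sketched numerically. The example below is an assumption-laden illustration, not the lecture's experiment: it uses synthetic data with only three relevant regressors, a minimal proximal-gradient (ISTA) lasso solver, and an arbitrary Lambda of 0.1. The recovered Theta is exactly zero on the irrelevant components, and ordinary least squares is then refit on the surviving features.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1: shrink toward zero, clipping at zero exactly.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_ista(X, y, lam, iters=3000):
    # Minimize (1/2n)||X theta - y||^2 + lam * ||theta||_1 by proximal gradient.
    n, d = X.shape
    step = n / np.linalg.norm(X, 2) ** 2   # 1 / Lipschitz constant of the gradient
    theta = np.zeros(d)
    for _ in range(iters):
        grad = X.T @ (X @ theta - y) / n
        theta = soft_threshold(theta - step * grad, step * lam)
    return theta

rng = np.random.default_rng(0)
n, d = 100, 20
X = rng.normal(size=(n, d))
theta_true = np.zeros(d)
theta_true[:3] = [1.5, -2.0, 1.0]          # only the first 3 features matter
y = X @ theta_true + 0.1 * rng.normal(size=n)

theta_hat = lasso_ista(X, y, lam=0.1)
support = np.flatnonzero(theta_hat)        # indices of the selected features
# Retrain on the selected features only, by ordinary least squares.
theta_refit, *_ = np.linalg.lstsq(X[:, support], y, rcond=None)
print("selected features:", support)
```

The retrain-on-support step mirrors the lecture's workflow of cutting x down to the components with non-zero Theta and fitting again on the reduced data.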
Now, after we've trained with lasso, we can pick a particular Lambda where we've only got a few components left which are non-zero, so we've done feature selection to select the features that are most important. So here, we've looked at the Lambda where we've reduced the predictor to only using seven features. And if we look back at our plot, that is somewhere here; there's some cutoff Lambda value at which there are only seven non-zero components of Theta left. And then what we can do is take all of our x's and cut off all of the components of x for which the corresponding Theta has a zero entry. And as a result, our x's will reduce from being 200-dimensional to being only 7-dimensional, in this case. And then with a 7-dimensional x, we can retrain using either Tikhonov regression or lasso regression. And what do we see? On the left here, we see ridge or Tikhonov; on the right here, we see lasso. We see on the left, for the plots of Theta, the usual shrinkage as we increase Lambda. And on the right, we're seeing lasso-style shrinkage, where the components go down to touch zero exactly. However, the key thing about these two plots is the test loss. In both plots, we're getting test loss which is similar, and that's because feature selection has happened. What we have now are only those features, the components of x, which are relevant to the problem we're trying to solve, which are relevant to y. And as a result, lasso has no components to remove which won't affect the test loss in a significant way. And conversely, ridge regression has no extra components, which it would normally use to overfit. Now, this sparsification property of the 1-norm can be strengthened. So remember what these regularizers look like. If we plot them, then this will be a component of Theta and this will be the absolute value.
Or we can look at the component of Theta and look at the square. Now, if we make this a sharper corner here, then we have a stronger sparsifier. One way we might make it a sharper corner is to replace the penalty function with a penalty function that does this. And as a result, we've got an even greater incentive in regularized empirical risk minimization to make the components of Theta exactly 0. Some people would call this the l one-half regularizer. That is a little bit of a misnomer, because the nomenclature l_p refers to the p-norm, and so if I use p equal to a half, that says I'm using the one-half norm. It turns out that that quantity, which would be defined as the sum over i of the absolute value of Theta_i to the one-half, all squared, is not actually a norm; it doesn't satisfy the triangle inequality. And so we tend to avoid calling this the l one-half regularizer. This is the strongest sparsifier. It's not convex, however, and as a result, algorithms to compute this may not work as well as the 1-norm or 2-norm regularizers. Here's an example. On the left, we have the plots for ridge regression; in the middle, we have plots for lasso or l_1 regression; and on the right, we have an example computed with the square root regularizer. And we can see that we're getting very similar behavior for the square root regularizer as we do for the absolute value regularizer: we're getting exact sparsification, and as we increase Lambda, some of the components of Theta go to 0 exactly. Now let's look at one more regularizer. This is the non-negative regularizer. Sometimes we know or require that theta_i should be greater than or equal to zero, so when x_i increases, so must our prediction. That means that we'd like to impose the constraint that theta_i is greater than or equal to 0 on the empirical risk minimization.
One way to do this is to have a regularizer which charges a cost in the objective function which is infinite when theta_i is negative, and is 0 when theta_i is non-negative. You might have this, for example, if your target variable is lifespan, and x measures healthy behavior. People would call this non-negative least squares, if you're doing quadratic loss. So one way of solving these kinds of problems is to solve the least squares problem for our theta, and then say, "Okay, well, I've got a theta which minimizes the square loss, so it's empirical risk minimization. And I'm going to take that theta, and any components of it which are negative I'm going to set to 0." We might write that as theta_ls plus. It turns out that doesn't work very well, because the minimization has been done without any knowledge of the constraint that the components of theta have to be non-negative, and the negative components that we've set to 0 might actually be very important. It's much better to actually impose it as a regularization term on the minimization problem, and solve the minimization knowing that we would like theta to have non-negative components. Here's a specific example. Here we have a one-dimensional u and a one-dimensional v. We've set y equal to v, and we've constructed features from u. The features are one, the constant feature, u, and then u minus 0.2, plus. Remember what that is: the positive part of u minus 0.2. If I plot that, this will be u, and this is going to be u minus a, plus. This is the point a, and this is the function. And so the resulting predictor is going to be piecewise linear, and it's going to have kinks in it at these points: 0.2, 0.4, 0.6, 0.8, in this list here. So we're going to have a piecewise linear predictor. Now, if we're going to impose the constraint that theta_i be non-negative, well, what does the predictor look like?
The predictor looks like y hat is theta_1, plus theta_2 u, plus theta_3 times u minus 0.2 plus, plus theta_4 times u minus 0.4 plus, and so on. So if the thetas are non-negative, what it means is that theta_1 is non-negative, so at zero the function y hat has to be non-negative. It means that theta_2 is non-negative, which means that when u is less than 0.2, the function is just theta_1 plus theta_2 u, and so that has to have a non-negative slope. And if we look at this predictor between the values u is 0.2 and u is 0.4, we find that the slope in that region is theta_2 plus theta_3. Because both theta_2 and theta_3 are non-negative, that slope is greater than or equal to the slope between 0 and 0.2, which is just theta_2. So all of the thetas being non-negative means that the function must be non-decreasing, and it must be convex: as we increase u, the slope has to increase. So here's a dataset, and here's the predictor. This is the optimal non-negative least squares fit, where we've used a non-negative regularizer, and we can see that the function indeed is convex and non-decreasing. This is the optimal least squares fit. It's also piecewise linear, but it's neither convex nor non-decreasing. Now, suppose we use our heuristic, and we say, "Well, look, what we're going to do is take that theta and simply adjust it so that all of the components of theta are non-negative." In particular, here we see that theta_2 is less than 0, and so we change this function by making theta_2 equal to 0. Then we end up with a function that looks like this, and that's an extremely poor fit to the data. We can see here that the non-negative least squares loss is 0.59. The least squares loss of course does better, because it doesn't have the constraint that the coefficients be non-negative, and so it does better at 0.3.
But the heuristic loss is 15, because this heuristic is not guaranteed to work. So the message here is that simply taking the least squares predictor and truncating it can perform very badly indeed. Much better to use a non-negative regularizer. Let's summarize. If you want to choose a regularizer, ultimately the thing you have to do is use validation. If you've got a choice between two different regularizers that you have in mind, say l_1 and l_2, try them both and see which one gives you the best test loss. And the way we do that is: we choose a range of Lambdas, we do RERM, regularized empirical risk minimization, for each one, to compute a predictor; then we take those predictors, we evaluate them on the test set, and we choose the Lambda that gives the best test error. That tells you the variation of the test performance as a function of Lambda. Now we're going to do that with each of our different regularizers, and we'll use the regularizer that gives the best test error. Beyond that, we've talked a little bit about some situations in which one might expect lasso to perform better than ridge regression, and in particular, the situation that's most common is when there are irrelevant features in your data. We've seen also that the importance of getting rid of those features can be very significant. If it costs you a lot to collect data, then getting rid of some unwanted data can be a really useful thing to do, to save you having to collect the data for future tests. Ultimately, though, it's important not to believe that one can a priori determine which regularizer to use. You have to determine it based on validation.
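Returning to the non-negative least squares comparison above: the gap between the truncation heuristic and the properly constrained fit can be reproduced in a sketch. This is a hypothetical two-feature instance, not the lecture's dataset: the two regressors are strongly correlated, so the unconstrained least squares solution needs a negative coefficient, and truncating it performs far worse than solving the constrained problem (here, by projected gradient descent).

```python
import numpy as np

def nnls_projected_gradient(X, y, iters=5000):
    # Projected gradient for: minimize (1/2)||X theta - y||^2  s.t.  theta >= 0.
    step = 1.0 / np.linalg.norm(X, 2) ** 2        # 1 / Lipschitz constant
    theta = np.zeros(X.shape[1])
    for _ in range(iters):
        grad = X.T @ (X @ theta - y)              # gradient of the half-loss
        theta = np.maximum(theta - step * grad, 0.0)  # project onto theta >= 0
    return theta

rng = np.random.default_rng(0)
a = rng.normal(size=200)
b = rng.normal(size=200)
X = np.column_stack([a, a + b])                   # strongly correlated features
y = b + 0.05 * rng.normal(size=200)               # best linear fit is about (-1, 1)

theta_ls, *_ = np.linalg.lstsq(X, y, rcond=None)  # unconstrained least squares
theta_trunc = np.maximum(theta_ls, 0.0)           # the truncation heuristic
theta_nn = nnls_projected_gradient(X, y)          # properly constrained fit

mse = lambda th: np.mean((X @ th - y) ** 2)
print(mse(theta_ls), mse(theta_nn), mse(theta_trunc))
```

As in the lecture's example, the losses order as: unconstrained least squares (best, but infeasible), then non-negative least squares, then the truncation heuristic (much worse), because the truncated solution was computed with no knowledge of the constraint.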
Stanford EE104, Intro to Machine Learning 2020, Lecture 16: probabilistic classification.

Hello, welcome to the section on probabilistic classification. So far we've talked about classifiers that, given a record u, return for us a prediction of which class the target variable lies in. And now we want to talk in this section about an extension of that idea. Instead of having a classifier that simply predicts one value, we want to think about a classifier that can make more general types of predictions. So if we have a classifier that predicts one value, let's call it v-hat, that would be called a point classifier or a point predictor; it makes just one guess. A classifier that produces more than a single guess might, for example, produce a list of guesses or an ordered list of guesses. So it says, here are the top three things. For example, we've got a camera, it's sitting on a car, it's looking at all the things out on the street, and it says, well, that could be a pedestrian, or it could be a person riding a scooter, or it could be a person who is riding a bicycle. And those would be the best guesses that the predictor makes: it knows it's not a car, but it can't determine exactly which one of those three things it is. And it may rank those guesses. It says most likely to be a pedestrian, second choice is a scooter, third choice is a bicyclist. Another possible output of a predictor or a classifier would be a probability distribution on possible targets. So with probability 0.9, it's a pedestrian; with probability 0.05, somebody on a scooter; with probability 0.05, it's somebody on a bicycle. And these are very common. In fact, probabilistic classification is probably the most common type of classifier that people use today.
And we are also, of course, quite familiar with the idea of probabilistic predictions from weather forecasts. If you look at the weather app on your phone, it will tell you the chance of rain today is 10%, for example. So let us consider a list classifier. This is a classifier that produces an ordered list; we might call these v_top, v_2nd, and v_3rd, its first, second, and third best guesses for what the true v should be. And of course, it may produce three guesses, it may produce a top 10, or it could be a variable number. Sometimes it will know with very strong certainty that it's a pedestrian, and other times it may have more uncertainty, and so it produces two or three or ten guesses. So some predictors do produce a variable number of guesses, and some predictors always produce a fixed number. And if we're trying to evaluate such a predictor, well, we'd obviously be happy if the true v is the first choice of the predictor, a little bit less happy if it's the second choice, and so on. One place you very often see list predictors is in a recommendation system. Amazon, for example, has a predictor that looks at your recent browsing history and your recent purchases, and tries to figure out what you're most likely to be interested in for your next purchase, and so it will figure out your top-10 list of things you're most likely to buy or most likely to be interested in and show you those. One way to make a list classifier is to use an embedding, as we've seen so far.
So for example, we've used nearest neighbor un-embedding to construct a classifier: we have a prediction algorithm that produces a y-hat, we have a bunch of different representatives embedded in R_m, Psi_1 through Psi_k, and our predictor so far has always just returned for us the representative which is closest to y-hat. But it could equally well return for us the three closest representatives to y-hat, and those would be our three best predictions: the closest one would be the top guess, the second closest one would be the second guess, and so on. Now, we're going to spend some time in the next few sections talking about probabilistic methods, and so we really need to start looking at probability in earnest. This is where the co-requisite in this class of knowing about probability is going to start to come into play. We're not going to use a great deal of probability, but we will certainly use some. So just a brief review here of probability. A probability distribution on our target set V is a function: for each element of our target set, it gives us a real number, and those real numbers have to be non-negative and add up to one, and those are the probabilities associated with those elements of the target set V. So p of v is the probability of the value v. For example, if the target set is rain and shine, we might have p of rain is 0.15 and p of shine is 0.85, if our predictor is predicting a probability that it will rain of 15% and a probability that it will not rain of 85%. Now, if we have a probability distribution like this, and we have numbered the entries in the target set V as v_1 through v_K, K categories or K classes, then we've also got p of v_i, the probability of v_i, which we can think of as the ith entry in a probability distribution vector.
And so our probability distribution assigns a number to each of those K classes, so we've actually got K numbers, which give us a K-dimensional vector. And so in vector notation, we might say that p is greater than or equal to 0, and what that means is that the inequality should be interpreted element-wise: all of the entries p_i of p as a vector are non-negative. And the fact that this is a probability distribution means the entries have to sum up to 1. We can express that as 1 transpose p is equal to 1, where the 1 on the left there is a vector of all 1s, and so that's simply saying that the entries have to sum up to 1. So we can either think about our probability distributions as functions which map script V to the reals, or we can think about them as K-dimensional vectors. Both of those are convenient, and we will make use of both choices of notation. A probabilistic classifier produces a probability distribution p-hat on V, given u. So we have a classifier, it takes in the input u, and it returns for us a prediction. And instead of that prediction being y-hat, an embedded value, or v-hat, a prediction of which class the target variable lies in, it produces p-hat, and the hat here indicates prediction. P-hat is a probability distribution on V. So it's a number associated with each of the different classes, which informs us how likely the classifier thinks each one of the different classes is to correspond to that particular u. We write this as p-hat is G of u. This is capital G, and the capital G is used in the same way we used it before, in that we used the letter g to indicate a predictor that takes an input x and produces an output y, and a big G to indicate a predictor that's composed with the embedding and un-embedding operations to take an input u and produce an output v.
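The two views described above, a distribution as a function on the target set V and as a K-vector satisfying p >= 0 and 1 transpose p = 1, can be sketched in a few lines (the rain/shine numbers are the hypothetical values from the review above):

```python
# A small sketch of the two equivalent views of a distribution: as a function
# on the target set V, and as a K-dimensional vector.

V = ["rain", "shine"]
p = {"rain": 0.15, "shine": 0.85}          # function view: p(v)

p_vec = [p[v] for v in V]                  # vector view: p_i = p(v_i)

# A valid distribution is element-wise non-negative and sums to one.
is_distribution = all(x >= 0 for x in p_vec) and abs(sum(p_vec) - 1.0) < 1e-9
print(p_vec, is_distribution)
```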
And here, because we're dealing with probabilistic classification, we're using G to indicate a predictor that takes in an input u and returns for us an output which is a probability distribution on V. So when you read this, you should keep in mind that we've got something a little tricky going on here. G is a function, and it's returning a distribution, which is itself a function. So we have a function that returns a function. We can call the function, so we get p-hat is G of u, and then I can take p-hat and evaluate it at a particular class v_i. And that will give us the probability that v is v_i when the independent variable is u. Of course, I could just write that all at once: I could say G of u evaluated at v_i, which I write as G, parentheses, u, parentheses, v_i. And this is totally fine. We're used to functions returning concrete objects such as numbers and vectors and matrices, and here we've got a function that's returning a more abstract object, which is a function. Modern programming languages such as Julia can happily do this. We've seen this already when we pass around loss functions in Julia, and so here we're going to pass around probability distributions, which are just functions. Now, in some sense, point classifiers and probabilistic classifiers are related to each other. We can say, for example, that a point classifier is a special case of a probabilistic classifier. It's a probabilistic classifier that's saying the probability of a particular target is 100%, and the probability of all the other classes is 0. So we could construct such a thing: we would have p-hat of v being 1 if v is v-hat, and 0 otherwise. And that would be a distribution that corresponds to 100% probability for the guess v-hat and 0% probability for the others. Typically this is not what you want, of course. Yes, it's a probabilistic classifier.
You've taken a point classifier and constructed a probabilistic classifier, but you haven't gotten any more information out of this. You can also go the other way: if you've got a probabilistic classifier, you can construct a point classifier. What you do is you say, well, my probabilistic classifier gave me a probability distribution, p-hat. Let me look, over all the values of v, for the one that's most likely, the one which maximizes p-hat of v. And that would be a way of translating a probabilistic prediction into a deterministic prediction, into a very specific prediction which is actionable. So for example, if you have a predictor of the weather and it's returning the probability of rain, at some point you have to make that actionable. You have to decide: am I going to carry an umbrella outside or not? And so one way to do that would be to say, is it more likely that it's going to rain, or that it's going to shine and it's not going to rain? And if it's more likely that it's going to rain, then you carry an umbrella. If it's more likely that it's going to be sunny, then you don't carry your umbrella. Of course, there are many ways of translating probability distributions into actions, and this is just one way of doing it. This particular method of translating a probabilistic prediction into a point prediction is called maximum likelihood: we look for the class that's most likely, the class that has the highest probability. We can also generate a list classifier from a probabilistic classifier simply by giving the values sorted by probability. There are several different ways of constructing a probabilistic classifier. All of the ones that we've seen so far for point classifiers can be extended. So one can have a tree-based probabilistic classifier, which is a decision tree; it has nodes labeled by feature and threshold value, and the leaves contain distributions rather than point predictions.
You can have a nearest neighbor probabilistic classifier. And the way that works is: if you have a query point x, then you look for the k nearest neighbors to x, the x_i's in your data set which are the closest to x, and then amongst those, you look at the distribution of v_i's that you have. So if three of them are true and five of them are false, then one would predict a probability of three-eighths for the corresponding target variable at the point x. And you can also do probabilistic classifiers which are based on linear predictors or based on neural network predictors, and we will come to those shortly. So here we have a nearest neighbor classifier. We've embedded the u's in R^2 to give us two-dimensional x's. Given a query point, we would look at the corresponding point in the plane, and we would like to predict either red or blue for the corresponding class. The way we do it is: we look at the k nearest neighbors of a query point. So here, k is being chosen to be 8. So for any given query point, we look at the eight nearest neighbors. Let's look at an example. We'll have a query point, say right here, and we'll draw a circle that contains eight neighbors, which I would guess is something like that. And then, of those eight neighbors, we will count how many of them are red and how many of them are blue, and that will give us the prediction probability distribution, the empirical distribution that we're seeing amongst the nearest eight neighbors. And so the colors in this plot are chosen accordingly. If the prediction is blue 100%, red 0%, then that's this color, which is dark blue. If the prediction is red 100%, blue 0%, that's this orange color, and of course, the colors in between vary. And maybe we're only predicting one probability here, because once we know our prediction probability for blue, the prediction probability for red is 1 minus the probability for blue.
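As a minimal sketch of this construction (with made-up data, and k = 4 rather than the 8 used in the plot), here is a nearest neighbor probabilistic classifier written as a function G that returns a distribution, which is itself a function, along with the maximum-likelihood point prediction extracted from it:

```python
import math
from collections import Counter

# Hypothetical sketch of a k-nearest-neighbor probabilistic classifier:
# G(x) returns a distribution over classes, given by the empirical class
# frequencies among the k nearest training points.

def knn_probabilistic(data, k):
    # data: list of (x, v) pairs, where x is a 2-vector (the embedded u's).
    def G(x):
        neighbors = sorted(data, key=lambda xv: math.dist(xv[0], x))[:k]
        counts = Counter(v for _, v in neighbors)
        p_hat = {v: counts[v] / k for v in counts}
        return lambda v: p_hat.get(v, 0.0)   # G returns a function on classes
    return G

data = [((0.0, 0.0), "blue"), ((0.1, 0.2), "blue"), ((0.2, 0.1), "blue"),
        ((1.0, 1.0), "red"), ((0.9, 1.1), "red"), ((0.3, 0.3), "red"),
        ((1.1, 0.9), "red"), ((0.8, 0.8), "red")]

g = knn_probabilistic(data, k=4)
p_hat = g((0.1, 0.1))                        # "function returning a function"
v_hat = max(["blue", "red"], key=p_hat)      # maximum-likelihood point prediction
print(p_hat("blue"), p_hat("red"), v_hat)
```

Near the blue cluster, three of the four nearest neighbors are blue, so the predicted distribution is 3/4 blue and 1/4 red, and the maximum-likelihood point prediction is blue.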
So now we want to consider: suppose we've got a probabilistic classifier or a list classifier, how are we going to evaluate how good they are? There are several different ways of doing this; it's not quite as simple as it is for a point classifier. If we try to judge a list classifier, then we may look at the error rate as a function of the list rank. So, for example, we have a test data set, we have a true value of v, and our predictor is giving us v_top, v_2nd, and v_3rd as its guesses. And then we ask ourselves, well, for how many points in the test set did the predictor get v_top to be the actual true v? And we might find that for 68% of the samples, v_top was actually the true v. And then we could say, well, for how many of the test set points was the true value among the top two guesses? That has to be at least 68%; it might be 79%. And amongst the top three guesses, it might be 85%. And so, if you want just one number, you might decide, well, I'm really concerned that it be in the top 3; as long as it's in the top 3, I'm happy. In which case, 85% might be a reasonable accuracy. Another way to do it might be to score the test data set. So for every data point v_i in the test data set, we look at whether it was the top guess, in which case it gets three points; the second-best guess, in which case it gets two points; or the third-best guess, in which case it gets one point. And then we add up the scores over the entire test set, and that's a measure of how good a job our list classifier is doing. Now, to judge a probabilistic classifier, that's a reasonably subtle idea. What do we have? Well, we have a data set with u's and v's in it, and then for each one of those data points, we have a prediction p-hat.
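The list-classifier metrics described above, the error rate as a function of list rank and the 3/2/1 scoring, can be sketched on a small hypothetical test set (the street-scene classes are borrowed from the earlier example; the data and numbers here are made up):

```python
# Hypothetical test set: (true class, ordered list of guesses, best first).
test_set = [("pedestrian", ["pedestrian", "scooter", "bicyclist"]),
            ("scooter",    ["pedestrian", "scooter", "bicyclist"]),
            ("bicyclist",  ["scooter", "pedestrian", "bicyclist"]),
            ("pedestrian", ["bicyclist", "scooter", "car"])]

def top_k_accuracy(test_set, k):
    # Fraction of test points whose true class is among the top k guesses.
    hits = sum(1 for v, guesses in test_set if v in guesses[:k])
    return hits / len(test_set)

def list_score(test_set):
    # 3 points for a correct top guess, 2 for second, 1 for third, 0 otherwise.
    points = {0: 3, 1: 2, 2: 1}
    return sum(points.get(guesses.index(v), 0) if v in guesses else 0
               for v, guesses in test_set)

print(top_k_accuracy(test_set, 1), top_k_accuracy(test_set, 3), list_score(test_set))
```

Top-k accuracy is non-decreasing in k, matching the 68% / 79% / 85% pattern described above.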
And so, for each data point, we have a true value, the v, and we have a prediction, which is a probability distribution p-hat over the set of possible v's. And one way to say whether this is a good prediction or not would be to say: let's look at the p-hat that we got and see whether it gives a high probability to the actual value that happened, v. So, for example, if we've got a prediction of rain, and we look back at historical data, we look at the predictions of rain based on the weather the day before, and we know whether it did rain or not. Well, what we'd like to see is that on days when it rained, the probability predicted for rain was large, and on days when it didn't rain, we'd like the probability predicted for rain to be small. And if we see something like that, we might say, well, that's a good predictor. Now we want to make this idea formal. The most common way of formalizing this idea is to use what's called a log-likelihood. So we have a data set u_1 to u_n, v_1 to v_n. And we're going to have a prediction at each of those u_i's, which we'll call p-hat_i. It's the output of our predictor when we feed u_i in, and it's a probability distribution over the v's. Now, if p-hat_i were the true probability distribution that v_i was generated according to, then we could ask ourselves: what is the probability of seeing that sequence v_1 through v_n? So for i is 1, the probability of getting the particular v that we got is given by evaluating p-hat_1 at that particular v, which is just v_1. Similarly, for i equal to 2, we have to evaluate p-hat_2 at v_2. And those give us the probabilities of getting those outcomes under those distributions. If we make the probabilistic assumption that each of those data points is independent, then the probability of getting the entire data set is the product of the individual probabilities.
And so the probability of seeing that entire data set under those distributions, p-hat_1 through p-hat_n, is simply the product from i is 1 to n of p-hat_i evaluated at v_i. So this is just a probability: it says, well, you've got some distribution, and what's the probability of getting that particular outcome? And we can look at it as just the probability of getting the outcome, but we can also look at it another way, as a measure of the distributions p-hat_1 through p-hat_n. And when we do that, instead of calling it the probability of getting the v's, we call it the likelihood of the p-hats being correct. So the likelihood is simply another name for the probability, but we call it the probability when we're looking at the probability of outcomes, and we call it the likelihood when we're talking about the likelihood of probability distributions. And we'd like this quantity, the probability or the likelihood, to be large. This is a fundamental measure of how well the predicted distribution matches the data. Just as in our rain-versus-sun example, if the probability of rain is large on days when it did rain, that's a good predictor. So we can compare two different classifiers by looking at their associated likelihoods: which one has the largest likelihood? The one with the largest likelihood we would consider the better classifier. Now, it's actually more convenient to work with log probabilities rather than plain probabilities. One reason for that is that the likelihood is a product, and when we take the log, we're going to get a sum. There are some other reasons for that as well, which we will see. So what we work with is actually the negative log-likelihood. That's simply the negative log of the probability of getting v_1 through v_n under our predicted distributions p-hat_1 through p-hat_n.
That's the negative log of the product of the p-hat_i's evaluated at the v_i's, which is the negative of the sum of the logs of the p-hat_i's at the v_i's. Now, the negative log-likelihood is a positive quantity, and we would like it to be small: if the negative log is small, that means the probability itself is large. One other thing to note is that the log of the probability gets smaller the more data points we have, so we need to normalize in some way. The way we do that is to look at the average negative log-likelihood. So that's here: L is minus 1 on n times the sum from i equals 1 to n of the log of p-hat_i of v_i. And with this quantity we can compare the effectiveness of different classifiers on different size data sets. So now we have a very nice performance metric, the average negative log-likelihood. Just as we did when we were looking at loss functions — when we looked at the square loss and the absolute loss and asked which constant predictor minimizes the average value of those losses — here we have a particular performance metric, the average negative log-likelihood, and we can ask: which is the best constant predictor, the best constant probabilistic classifier that minimizes that particular performance metric? So how is this going to work? We're going to have a data set which is just v_1 through v_n, a bunch of classes, and we're going to look for a classifier that doesn't depend on the u's; the u's may not even exist. All we're going to do is predict a distribution p-hat, and that distribution p-hat has to be a probability distribution on script V. So what do we do? We choose p-hat to minimize the average negative log-likelihood, minus 1 on n.
The sum from i equals 1 to n of the log of p-hat of v_i. And we can choose any distribution p-hat we want; of course, it has to be a probability distribution, so it has to be non-negative at every point v, and it has to sum to one. Now, it turns out that the optimal constant probabilistic classifier is a very sensible quantity: it's the empirical distribution of the data. For the empirical distribution, we simply count up, for each v in script V, the fraction of the data points equal to that particular class. It's simply the counts divided by the total number of data elements we have. We'll call that q. And the p-hat that minimizes the average negative log-likelihood is q. This is very nice, and it's just like what we found before: when we minimized the square loss, the best constant predictor was the mean; when we minimized the absolute loss, the best constant predictor was the median. Now we're predicting probabilities, and when we minimize the average negative log-likelihood, the best constant predictor is the empirical distribution. Now, the negative log-likelihood can be computed in a particular way. The average negative log-likelihood is minus 1 on n times the sum from i equals 1 to n of the log of p-hat of v_i, and each of the v_i's lies in one of the classes. Remember what the classes are: we use the notation script V is v_1 up to v_K, where those are subscripts indicating the classes, and the data elements carry the superscript i, from i equals 1 up to n. Now, if I compute the sum from i equals 1 up to n of the log of p-hat of v^i, I can split that sum up into those i's for which v^i is, say, v_1.
And then I can take the log of p-hat of v^i, but all of those v^i's are v_1, so I can just write v_1. Then I can do the next category, the i's for which v^i is v_2, with the log of p-hat of v_2, and so on, up to the K-th category: the sum over i such that v^i is v_K of the log of p-hat of v_K. So I've just split the terms up into categories. Notice that inside each of these sums, the terms don't depend on i. So I've really got a certain number of terms that are all the same, and the number I've got is just the number of data points for which v^i is v_1 in the first sum, the number for which v^i is v_2 in the second sum, and so on. So when I work out this sum, taking into account the factor of n: the fraction of terms in the sum for which v^i is equal to category v_j is q of v_j, and for each of those terms, the quantity log p-hat of v^i is just log p-hat of the corresponding class v_j. So I end up with a sum over the categories rather than a sum over the data points, and I have that the average negative log-likelihood is minus the sum over the categories of the empirical distribution of each category times the log of the predicted probability of that category. This quantity is called the cross entropy of p-hat relative to q, where p-hat is the distribution being evaluated — the distribution produced by the predictor — and q is the underlying true empirical distribution of the v's. We might write this as H of q, p-hat, equal to minus the sum from j equals 1 up to K of q_j log p-hat_j, using our vector notation. There's also a related quantity when p-hat and q are the same distribution, which we'll call p: that quantity, H of p, is minus the sum over k of p_k log p_k. That's called the entropy of a probability distribution.
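As a sketch of the identity just derived — the average negative log-likelihood of a constant prediction equals the cross entropy of p-hat relative to the empirical distribution q — here is a hypothetical Python check (the data set and distributions are invented for illustration):

```python
import math
from collections import Counter

def cross_entropy(q, p_hat):
    """H(q, p_hat) = -sum_j q_j * log(p_hat_j), skipping q_j = 0 terms."""
    return -sum(q[v] * math.log(p_hat[v]) for v in q if q[v] > 0)

def entropy(p):
    """H(p): the cross entropy of p with itself."""
    return cross_entropy(p, p)

# Empirical distribution q of a small, made-up data set of classes.
data = ["a", "a", "a", "b"]
q = {v: c / len(data) for v, c in Counter(data).items()}  # {'a': 0.75, 'b': 0.25}

# For a constant predictor p_hat, the average negative log-likelihood
# on the data equals the cross entropy H(q, p_hat).
p_hat = {"a": 0.5, "b": 0.5}
avg_nll = -sum(math.log(p_hat[v]) for v in data) / len(data)
```

Here the average negative log-likelihood of the uniform p-hat works out to log 2, and it matches H(q, p-hat) computed from the category counts.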
Both of these quantities, the cross entropy and the entropy, are very important mathematically and very important practically; they're used very widely in coding theory, in information theory, and in machine learning, and they express fundamental properties of probability distributions. We don't need much of that theory here — in fact, we don't need any of it here. We just want to determine which p-hat minimizes the average negative log-likelihood. Another way to say it: given a q, which p-hat minimizes H of q, p-hat? Because H of q, p-hat, the cross entropy, is the average negative log-likelihood. There's a convenient quantity to use for this, called the Kullback-Leibler divergence of q and p. So we've got H of q, p, and we'd like to minimize that over p. But instead of looking at H of q, p directly, we look at this thing called the Kullback-Leibler divergence, D_KL of q, p, which is H of q, p minus H of q, the entropy of q. This is convenient for the following reason. We want to minimize H of q, p over p. But H of q, p is just D_KL of q, p plus H of q, and if I'm minimizing this quantity over p, the additional term H of q doesn't matter, because it doesn't depend on p; it just depends on q, so it's a constant. We're just shifting the objective function in our optimization problem. This quantity, the Kullback-Leibler divergence, has the very nice property that it's non-negative. One way to think about it is as a measure of how similar two probability distributions p and q are. When they are the same, D_KL of q, p is 0: looking back at our definition of entropy, H of p is the cross entropy of p with itself, and so when q is equal to p, we clearly have that the Kullback-Leibler divergence of q and p is 0.
And when q and p are not equal, the Kullback-Leibler divergence is still non-negative. This is easy to see if I just write out the sum: D_KL of q, p is minus the sum over j of q_j log of p_j on q_j. Here we're using our vector notation; p_j is simply shorthand for p of v_j, and I've used the fact that subtracting one log from another gives the log of the ratio. Now, the log of p_j on q_j has a nice bound. The logarithm is zero at one, and if I compare it with the function x minus 1, that function is the tangent to the logarithm right there, so the log of x is less than or equal to x minus 1. This implies that the sum over j of q_j log of p_j on q_j is less than or equal to the sum over j of q_j times (p_j on q_j minus 1), which we can expand as the sum over j of p_j minus the sum over j of q_j. Both of those terms are equal to 1, because both p and q are distributions, and so the overall sum is 0. Hence the Kullback-Leibler divergence of q and p, which is the negative of that sum, is greater than or equal to 0. Here we used the fact that q was non-zero when we did the division, but in fact the proof can be shown to hold even if some of the q's are equal to 0. And that tells us the best constant predictor. The best constant predictor is given by the p-hat that minimizes the average negative log-likelihood, which is equal to the cross entropy, so we would like to minimize H of q, p-hat with respect to p-hat. We can make the Kullback-Leibler term 0 by setting p-hat equal to q, and the entropy term is just a constant. So p-hat equal to q is the best constant predictor: the best constant predictor has probability distribution equal to the empirical distribution, and the resulting cross entropy is the entropy of q. So let's summarize.
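A quick numerical check of these two facts — nonnegativity, and zero exactly at p equal to q (the distributions here are arbitrary examples):

```python
import math

def kl_divergence(q, p):
    """D_KL(q, p) = sum_j q_j * log(q_j / p_j), skipping q_j = 0 terms."""
    return sum(q[v] * math.log(q[v] / p[v]) for v in q if q[v] > 0)

q = {"a": 0.75, "b": 0.25}

# Nonnegative for any p, and exactly zero when p equals q -- which is
# why p_hat = q minimizes the cross entropy H(q, p_hat).
for p in ({"a": 0.5, "b": 0.5}, {"a": 0.9, "b": 0.1}, q):
    assert kl_divergence(q, p) >= 0
assert kl_divergence(q, q) == 0
```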
A point classifier makes a single guess of v given u, whereas a probabilistic classifier guesses a probability distribution on the target set of classes, given u. And we judge a probabilistic classifier by its average log-likelihood on test data.
Stanford_EE104_Introduction_to_Machine_Learning_Full_Course | Stanford_EE104_Introduction_to_Machine_Learning_2020_Lecture_15_multiclass_classification.txt

Hello, welcome to the section on multi-class classification. Remember the key ideas here: we're trying to do classification where the target variable can take one of K possible values. We have raw target values, numbered 1 through K, but we embed those using the map y equals Psi of v. As a result, y can take one of K values, Psi_1 through Psi_K, each of which is a vector in R^m. We get to choose what m is, and we get to choose what those Psi_1 through Psi_K are. Once we've got a predictor that produces a y from an x, we have to get back to v. The way we do that is we look at the y-hat produced by the predictor, and we look over all of the different Psi's, Psi_1 through Psi_K, to find the one that's closest to y-hat. That tells us which of the classes to unembed to, giving us the resulting v-hat. And of course, the predictor that produces y-hat from x is constructed via regularized empirical risk minimization. At the end of the day, we validate using the Neyman-Pearson performance metric on the test data. So we've picked our Kappas, where Kappa_j is the distaste, the displeasure, in mistaking v_j, and the performance metric is then the sum over j of Kappa_j E_j, where E_j counts the number of times that we mistook v_j. And of course, if all the Kappas are one, that's just the error rate. Now we're going to want to construct a loss function, and it's not quite so easy as it was in the Boolean case. In the Boolean case, remember how it worked; let me just use this as a little sketch pad. This is R^m where m is 1. We embedded at plus 1 and minus 1, we'll call these Psi_1 and Psi_2, and then we had ideal loss functions from the Neyman-Pearson loss.
So if the true class was 1, our ideal loss function would say that the ideal y-hat is anything negative. We want to generalize this idea to the case where we've now got multiple Psi_i's, not just two, and we've embedded them in R^m with m not just 1. The first idea we need is simply to know when a vector is closer to one point than to another point. So here we have two points, a point a and a point b, and here is this green line, which is the perpendicular bisector of those two points. That comes about mathematically in the following way. If I ask when a vector y-hat is closer to a than it is to b: well, the distance from y-hat to a is just the norm of y-hat minus a, and the distance from y-hat to b is the norm of y-hat minus b. So we can ask ourselves when one is less than the other; that's this inequality right here. Now we square both sides, since those are both two-norms, and we get that the norm of y-hat minus a squared is less than or equal to the norm of y-hat minus b squared. If we expand those squares, we get y-hat transpose y-hat minus 2 a transpose y-hat plus a transpose a, less than or equal to y-hat transpose y-hat minus 2 b transpose y-hat plus b transpose b. And the nice thing is that the quadratic terms in y-hat cancel, and the inequality reduces to this one: 2 times (b minus a) transpose y-hat, minus the norm of b squared plus the norm of a squared, is less than 0. In particular, the thing to notice here is that this depends on y-hat only in a nice simple way: it's a vector transpose times y-hat.
And so this expression is equal to 0 along the green line; it's less than 0 when y-hat is on the a side, and it's greater than 0 when y-hat is on the b side of the green line. The decision boundary, obtained by just replacing the inequality with an equality, is right here. That's a hyperplane — an equation of the form c transpose y equals d, where c is a vector in R^m and d is a scalar. Here in particular, c is 2 times (b minus a) and d is the norm of b squared minus the norm of a squared. This hyperplane has to pass through the midpoint of a and b, and you can check that it does by substituting y-hat equals (a plus b) over 2 into the equation and checking that the two sides agree. Now, we can use this analysis to construct a notion of signed distance from the hyperplane h. So the hyperplane h is here; it's the green hyperplane that's the perpendicular bisector between a and b, and we're going to measure distance from h. I'll have some points on this side and some points on this side; these points we'll say are a distance minus 1 from the hyperplane h, and these points a distance 1. So instead of labeling both sides as having distance 1, the points on one side get negative distance and the points on the other get positive distance. Our point y-hat here has distance about minus 2 from the hyperplane; I might have another point, say over here, with distance plus 2, or maybe slightly more than 2 the way I've drawn it. And where does the formula for this come from? Well, we have this nice inequality, which defines for us which side of the hyperplane we're on, and its left-hand side has the form c transpose y-hat plus a constant. Over here, we've simply taken the value of the left-hand side of that inequality and made it equal to our distance measure. But we've scaled it.
We've scaled it by 2 times the norm of b minus a. As a result, D has the form D equals u transpose y-hat plus v, where u here is a unit vector: it's b minus a divided by the norm of b minus a. Because u is a unit vector, D measures distance in the same units we use in the plane. In other words, if y-hat actually is a distance d from the hyperplane, then D will be that distance with the appropriate sign attached. D equals zero is the decision boundary — the boundary that decides whether we are on the a side or the b side of the hyperplane — and if D is negative, we're closer to a than we are to b. So why do we want a notion of signed distance? The idea goes like this. Suppose we have — let me draw a picture — R^2, so here m is 2: we've embedded our categorical variables in the plane R^2. For the embedding points, let's just pick them: one over here, one over here, and one over here, and I'll label them Psi_1, Psi_2, and Psi_3. Those are the points to which our categorical target values embed. With those points, we'd like to define a loss function. The idea, of course, is that we're going to have a predictor that produces for us a y-hat, and we would like that predictor to be inclined to give us a y-hat that's close to Psi_1 when y is Psi_1, a y-hat that's close to Psi_2 when y is Psi_2, and a y-hat that's close to Psi_3 when y is Psi_3. So let's divide up the plane into regions — the Voronoi partition of the plane — like that. Then this region right here is the region such that, if we have a y-hat within it, it's unembedded back to Psi_1, and therefore back to the target value corresponding to Psi_1.
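The scaled signed-distance formula above can be written as a small Python function; the sanity checks use a simple pair of points and are an illustrative sketch, not the lecture's code:

```python
import math

def signed_distance(a, b, y_hat):
    """Signed distance of y_hat from the perpendicular bisector of a and b:
    D = (2 (b - a)^T y_hat - ||b||^2 + ||a||^2) / (2 ||b - a||).
    Negative on the a side, positive on the b side."""
    norm_ba = math.sqrt(sum((bi - ai) ** 2 for ai, bi in zip(a, b)))
    num = (2 * sum((bi - ai) * yi for ai, bi, yi in zip(a, b, y_hat))
           - sum(bi * bi for bi in b) + sum(ai * ai for ai in a))
    return num / (2 * norm_ba)

a, b = (0.0, 0.0), (2.0, 0.0)   # the bisector is the vertical line x = 1
```

At the midpoint the distance is zero, and moving one unit toward a gives minus 1, which matches the unit-vector scaling discussed above.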
And so what we'd like is a loss function that encourages y-hat to lie within that region when y is equal to Psi_1. We can imagine such a loss function; here's what it would be. It would be zero if y-hat lies in that region and one elsewhere: 1 here and 0 here. Such a loss function would count up for us the rate at which we mistook class 1 for something else. So if we summed that up and divided by n, it would give us precisely the rate at which we mistook class 1 for something else, and Kappa_1 multiplied by that loss function would count up for us the Neyman-Pearson contribution of the loss for the target Psi_1. And then we would do another one. Of course, we've got a different loss function for each class. So if this is the region for class 1, this for class 2, and this for class 3, I would have a loss function which is 0 here and 1 here, and that would be the loss function corresponding to y equal to Psi_2. So we have three different loss functions, functions of y-hat: one when y is Psi_1, one when y is Psi_2, and one when y is Psi_3, and each one is 0 on the region corresponding to its Psi_i. So how do we construct such a loss? Well, we can use the signed distance functions D_ij. D_ij is going to be the signed distance of y-hat from the boundary between Psi_i and Psi_j. Let's look back at our example: this is Psi_1, Psi_2, and Psi_3. Now, D_12 is less than 0 when we are closer to Psi_1 than we are to Psi_2. Where is that? That's everywhere to the left of this line. Where is D_13 less than 0? That's everywhere below that line. Let me mark my two lines. So if D_12 and D_13 are both less than 0, then I'm in the region belonging to Psi_1. Similarly, I'm in the region belonging to Psi_2 — this region — if D_21 is less than 0 and D_23 is less than 0.
And finally, over here I have D_31 less than 0 and D_32 less than 0. So in other words, for any given i, if I want y-hat to be in the region belonging to Psi_i, then I need D_ij to be less than 0 for all the other j's. Now that we've got these signed distance functions, we can use them to construct our ideal loss functions. Here are some examples, to be explicit in the most common cases. The most common case is, of course, the Boolean. In the Boolean case, m is 1, so I've just got R here; Psi_1 is minus 1 and Psi_2 is 1. Then I've got D_12 less than 0 over here and D_21 less than 0 over here. So if y is in the class corresponding to Psi_1 — that means y is minus 1 — then we would like D_12 to be less than 0, and we'd have a loss function which is 0 over here, say, and large over here. That would be our ideal loss function, our Neyman-Pearson loss function. Similarly, if y is plus 1, which corresponds to Psi_2, then we want D_21 of y-hat to be less than 0. And D_21 of y-hat is minus y-hat, so we want y-hat to be greater than 0. If we look at the one-hot case, Psi_j is e_j (the slide should say e_j; let me just correct that). If we compute what D_ij is — well, let's just look at the distance between Psi_i and Psi_j. This is Psi_1 and this is Psi_2; in two dimensions, if we're embedding using one-hot, this is of course (1, 0) and this is (0, 1), and the distance between those two points is simply root 2. And so D_ij of y-hat is y-hat_j minus y-hat_i, all over root 2. To compute that, of course, we've used our definition: 2 times (b minus a) transpose y-hat divided by 2 times the norm of b minus a. The twos cancel, and (b minus a) transpose y-hat gives us y-hat_j minus y-hat_i, or the negative of that, depending on which point plays the role of a and which of b.
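In the one-hot case the signed distances reduce to component differences, which is easy to sketch (the example vector is invented):

```python
import math

def d_ij_onehot(y_hat, i, j):
    """One-hot signed distance: D_ij(y_hat) = (y_hat_j - y_hat_i) / sqrt(2).
    Negative when y_hat is closer to e_i than to e_j."""
    return (y_hat[j] - y_hat[i]) / math.sqrt(2)

# The largest component of y_hat is index 1, so every D_1j with j != 1
# is negative: y_hat unembeds to the class represented by e_2.
y_hat = [0.2, 1.5, 0.3]
```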
So when y is e_i, we want the max over all j not equal to i of D_ij of y-hat to be less than 0. That means we want y-hat_j minus y-hat_i to be less than 0 for every j not equal to i, which is the same as saying that y-hat_j is less than y-hat_i. In other words, we would like the corresponding component of y-hat to be the maximum component of y-hat. So let's use these to construct loss functions explicitly. What we've got to do is give K different loss functions, one corresponding to each of the possible values of y. Remember, the loss is a function of y-hat and y, and y here can only take K possible values, so we denote the loss as l of y-hat comma Psi_i: it's how much we dislike predicting y-hat when y is Psi_i. To come back to our example: if D_12 is less than zero and D_13 is less than 0, then we'd like the loss l of y-hat, Psi_1 to be small, and large when y-hat doesn't satisfy these two conditions. That gives us immediately the Neyman-Pearson loss: we can set a loss function equal to 0 on the region corresponding to Psi_i and Kappa_i elsewhere. That means that when we take the average of the Neyman-Pearson loss, we get Kappa_i times the frequency with which y-hat gives an answer which is not i when y truly is i. And this is, of course, the analog of the nice loss functions we've seen before. The downside is that it's hard to minimize these discontinuous loss functions, which have derivative 0 almost everywhere. So instead, we use what's called a proxy loss: a loss that approximates the Neyman-Pearson loss but is more easily optimized. We'd like it to be convex, or differentiable, or both. There are two really common loss functions that are used. One is the hinge loss — the multi-class hinge loss. It's Kappa_i times the maximum of 1 plus D_ij of y-hat, where we're maximizing over all j not equal to i.
And this is a loss function which is zero if y-hat is in the right region with a margin of at least 1. So y-hat has to be not only within the right region, but deep within the right region. This is a convex but not differentiable loss. If you use it with quadratic regularization, you get the multi-class support vector machine. If we have a Boolean embedding, with Psi_1 equal to minus 1 and Psi_2 equal to 1, it reduces to the usual hinge loss we've seen before: it's the positive part of 1 plus y-hat, times Kappa_1, when y is minus 1, and it's the flip of that when y is 1. Here are the corresponding surface plots for the case we've been discussing so far: our example with three Psi's, Psi_1, Psi_2, and Psi_3. You can see that you've got a loss function which is convex, which is piecewise linear, and which is 0 when you're deep within each of the three regions. Of course, you've got three loss functions here. In the top right graph, we have the loss function corresponding to the case when y is Psi_1; in the bottom right, the case when y is Psi_3; and in the bottom left, the case when y is Psi_2. If you look in particular at the top right, when y is Psi_1, you can see that the loss function is zero when we are deep within the region corresponding to Psi_1. Here's the other most common loss function: the multi-class logistic loss. This is the log of the sum of exponentials of the D_ij's. So if all of the D_ij's are less than 0, then this is the log of a sum of e raised to negative numbers, so it will be a relatively small quantity, and it will grow as the D_ij's grow. It's convex and differentiable. It's called the multi-class logistic loss, and when we use it as our loss function, we're doing multi-class logistic regression. So here's a plot of that loss function.
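Before turning to the plots, here are the two proxy losses as code sketches, written in terms of precomputed signed distances D_ij; the logistic form follows the lecture's verbal description and its constants should be treated as an assumption:

```python
import math

def multiclass_hinge_loss(d_i, kappa_i=1.0):
    """Multi-class hinge loss for true class i, given the list d_i of
    signed distances D_ij(y_hat) over j != i.  Zero exactly when y_hat
    lies in class i's region with margin at least 1 (all D_ij <= -1)."""
    return kappa_i * max(0.0, max(1.0 + d for d in d_i))

def multiclass_logistic_loss(d_i, kappa_i=1.0):
    """Multi-class logistic loss: log of the sum of exp(D_ij) over j != i.
    Small when every D_ij is well below zero; grows as the D_ij grow."""
    return kappa_i * math.log(sum(math.exp(d) for d in d_i))
```

For example, with distances (-2, -1.5) the hinge loss is zero (margin satisfied), while with (-0.5, -2) it charges 0.5 for the insufficient margin.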
Again, we can see three different loss functions, each one associated with a different region of the plane corresponding to our different representatives, and each loss function is small in the corresponding region and grows outside the region. The log-sum-exp function is what's at the heart of this loss function. It's an interesting function because it is convex, differentiable, and approximately the maximum function. Sometimes people call it the softmax function, but you should be careful about that, because there are other functions used in machine learning that, as it happens, are also called the softmax function. One of the nice things about the log-sum-exp function is that we know how far it is from the max function: it's always greater than or equal to the maximum of the x_i's, and always less than or equal to the maximum of the x_i's plus log n. Let's look at an example. This is a three-class example. It goes back to 1936; it's a very famous data set collected by the statistician Fisher. He took measurements for 150 different plants, all of which were irises of three different species, 50 from each species. For each of those samples, he made four measurements: the sepal length, the sepal width, the petal length, and the petal width. We can plot them here. This plot is simply an arrangement of all the data. For example, in the plot on the top right, we've plotted sepal length on the vertical axis against petal width on the horizontal axis. You can see the three species: one indicated in red, one in green, and one in blue. You can see that telling the red from the others is rather straightforward, and telling the green and blue apart is a bit more difficult.
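Returning to the log-sum-exp function for a moment, here is a quick sketch together with the two bounds just mentioned; the max-shift in the implementation is a standard trick to avoid overflow:

```python
import math

def log_sum_exp(xs):
    """log(sum_i exp(x_i)), computed stably by shifting by the maximum."""
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

xs = [1.0, 2.0, 0.5]
lse = log_sum_exp(xs)
# Bounds from the lecture: max(x) <= log-sum-exp(x) <= max(x) + log n.
assert max(xs) <= lse <= max(xs) + math.log(len(xs))
```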
So here, we've done classification using only two of the features, sepal length and sepal width, and that's plotted in this figure. Again, we can see green dots, blue dots, and red dots, and it's relatively easy to separate the red dots. This uses a linear predictor with one-hot embedding and the multi-class logistic loss, and we're minimizing the probability of error, so the Kappas are all one. Here there's no training and test split; we're training on all the data. Of course, if we had more data, we would be using a training and test split, and we'd also be using regularization. The confusion matrix shows us what we can see from the figure: we've got all the red dots perfectly correct, and we haven't mistaken anything for a red dot, but some of the greens and blues we've got wrong — we've mistaken 12 greens for blues and 13 blues for greens. And as you can see in the figure, there's no clear way to separate the greens from the blues. Now, if you do classification with all four features, of course we can't plot it anymore. Same setup: one-hot embedding, minimize probability of error. Then you can do much better; you get a confusion matrix with only two errors out of 150 plants — two misclassifications, one blue classified as green and one green classified as blue. So let's summarize. When we are doing multi-class classification, we'd like a loss function that encourages the correct un-embedding. So when y-hat is close to Psi_i, we want l of y-hat, Psi_i to be small, and we want it to be not small when y-hat is not close to Psi_i. The most common losses people use are the multi-class hinge loss and the multi-class logistic loss, and the two resulting classifiers are called the SVM and logistic classifiers. Both of these losses are convex.
So we can easily solve the ERM or RERM problems in the case where the predictor is a linear predictor.
Stanford_EE104_Introduction_to_Machine_Learning_Full_Course | Stanford_EE104_Introduction_to_Machine_Learning_2020_Lecture_8_non_quadratic_losses.txt

Hello. Welcome to the section on non-quadratic losses. So far, we've talked mainly about regression, that is, the case when the target variable is a real number or a real vector. We've seen several different predictors for such things, including neural networks and simple linear predictors, as well as more complicated things such as trees. And we've seen quadratic losses and a few other example loss functions. So today, I want to go through a few more non-quadratic loss functions and talk about how the resulting predictors behave when we use those losses in empirical risk minimization. When we're solving the empirical risk minimization problem, the empirical risk — the average loss — is simply 1 on n times the sum from i equals 1 up to n of the loss evaluated at y-hat_i, y_i, where y-hat_i is the predicted value at position x_i: y-hat_i is g_Theta of x_i. Now, very often the intention of the loss function l of y-hat, y is that it measures the discrepancy between y-hat and y: how much deviation is there between y-hat and y? And so, as a result, it's very common that the loss function looks like a penalty function, p of y-hat minus y. Here, p can take several different forms. We've seen, for example, the square penalty, r squared, where the resulting loss function is the square of y-hat minus y. Here, r, which is y-hat minus y, is the prediction error or the residual. If you've got a scalar y, then r greater than 0 means that our predictor is overestimating, and r less than 0 means that our predictor is underestimating. Not all loss functions have this form; we've seen, for example, the percentage error, which cannot be expressed as a penalty function of y-hat minus y.
So if we have a penalty function, it tells us how much we object to different values of the prediction error. Very often, p of 0 is 0, and p of r is greater than or equal to 0 for every other r. Very often, p is symmetric — p of minus r is p of r — in which case we determine the penalty based only on the magnitude, that is, the absolute value, of the prediction error. Sometimes p is asymmetric, p of minus r is not p of r, in which case we get a different amount of penalty depending on whether we're overestimating or underestimating. Let's look at two common penalty functions: the square penalty function versus the absolute value penalty function. When we're using the square penalty, that's p of r equals r squared. When the prediction error is very small, the penalty is really very small: small squared. When the prediction error is large, the penalty is extremely large: large squared. If you compare this with the absolute penalty, you'll see that the absolute penalty responds less: for small prediction errors, its penalty is larger than the square penalty, and for large prediction errors, its penalty is smaller than the square penalty. Now, here's another one we've seen before, when we were looking at quantiles: the tilted absolute penalty function, parameterized by tau. It's an absolute value, but it's tilted: when r is negative, the penalty is minus tau times r, and when r is positive, it's 1 minus tau times r, where tau is somewhere between zero and one. When tau is a half, the tilted absolute penalty function isn't tilted; it becomes the same as one-half the absolute value of r, so we have the same penalty for underestimating as for overestimating. When tau is greater than a half, it's worse to underestimate than to overestimate. Remember, the residual is y-hat minus y, so a positive residual means that y-hat is greater than y.
And so when tau is greater than a half, our penalty is less for positive r than for negative r. And similarly, when tau is less than a half, it's worse to overestimate than to underestimate. Now, the penalty function expresses how you feel about large and small prediction errors, about positive and negative prediction errors. And as a result, it drives the predictor to prefer making certain types of predictions. In other words, the choice of the penalty function is going to change the shape of the histogram of prediction errors. Remember how we construct a histogram? We look at all the residuals, we divide them up into bins, and we display this as a bar graph which shows the distribution of residuals. So here's an example. Here, we constructed random data, 300 data records, each one of which has a scalar y and a 30-dimensional u, and then we append a constant 1 so that x starts with 1 and then has 30 random numbers afterwards. And we use a 50/50 test-train split here. The residual is y hat minus y, so the ith residual is Theta transpose x_i minus y_i. Now, we've plotted here the histogram of the residuals for the predictor that minimizes the empirical risk. We've done this in two cases. The first case is the top case, where we're using a square penalty, and so this curve here is our penalty function. We can see we've got some distribution of residuals, and if we look at that in the test data, we get a fairly similar distribution of residuals, as we'd expect. Now, we also try this with the loss function being the tilted penalty of the residual. So now the loss of y hat comma y is the tilted penalty function of y hat minus y. And what do we see? We see that on the right-hand side of this plot there are many fewer data points for which the residual is positive
than there are data points for which the residual is negative, because the penalty is much greater for producing a positive residual, in other words for overestimating, than for underestimating. And so a predictor that is chosen by empirical risk minimization turns out to be a predictor that prefers to underestimate rather than to overestimate. And that extends also to the test data: as we might expect, there are far fewer data points on the right-hand side of the test residual histogram than there are on the left-hand side. Now I want to switch to another type of error that we might want to shape, and that's the case of outliers; this is called robust fitting. So, in some applications you have a few data points that are just wrong, that are just way off. Sometimes this occurs because when the data was entered or transcribed, it was entered incorrectly: there's an error in the decimal point position, or somebody just made a typo. Other times they're caused by sensor failures or various types of anomalies that occurred when the data was collected. These points are called outliers. And a consequence of having outliers, even if you've got only a very few of them, is that ERM can pick a very poor predictor. Now, there are several methods for removing outliers. Here's one. Create a predictor based on the entire data set. Now look through your data points, and for each data point, compute the residual, the prediction error. Those data points that have very large prediction errors, mark them as outliers, and remove them from the data set. Then re-fit, and if you've still got poor prediction, so a large empirical risk, go through and do it again. This method can work well. It's not easy to implement, because one has to decide what it means to have a large prediction error, and it's certainly possible to choose that poorly and not be aware of it.
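The fit-flag-remove-refit loop just described can be sketched in a few lines. The flagging rule here (drop points whose residual exceeds three times the RMS residual) is my own illustrative choice; as the lecture notes, choosing this threshold well is exactly the hard part.

```python
# Sketch of the outlier-removal loop: fit a least-squares linear predictor,
# flag points whose residual magnitude exceeds a threshold, drop them, refit.
# The 3x-RMS threshold and the synthetic data are assumptions for illustration.
import numpy as np

def fit_dropping_outliers(X, y, rms_multiple=3.0, max_rounds=5):
    keep = np.ones(len(y), dtype=bool)
    for _ in range(max_rounds):
        theta, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
        resid = X @ theta - y
        rms = np.sqrt(np.mean(resid[keep] ** 2))
        flagged = keep & (np.abs(resid) > rms_multiple * rms)
        if not flagged.any():
            break                      # no more large residuals: done
        keep &= ~flagged               # drop the flagged points and refit
    return theta, keep

rng = np.random.default_rng(0)
u = rng.uniform(-1, 1, 100)
y = 2 * u + 0.01 * rng.standard_normal(100)
y[:3] += 10.0                          # three gross outliers
X = np.column_stack([np.ones(100), u])

theta, keep = fit_dropping_outliers(X, y)
print(theta, keep.sum())               # slope near 2; outliers removed
```

On this toy data the first round's fit is twisted by the three corrupted points, but their residuals are so large that they are flagged immediately, and the refit recovers the true slope.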
Another way of removing outliers, or at least handling outliers, is to use a penalty function that's less sensitive to outlier data points, and we're going to look at that in this section. These kinds of penalty functions are called robust. A robust penalty function is one that has low sensitivity to outliers. And the idea is that the way you make a penalty function robust is you make sure that it grows more slowly for large prediction errors than the square penalty does. And so that means that the predictor will not be predisposed to avoiding those large prediction errors, which presumably occur with the outliers, so it responds a bit more gracefully to the outliers. In other words, instead of the empirical risk being dominated by the loss at a few very large values of y hat minus y, because the loss function doesn't grow so rapidly with y hat minus y, those aren't the values that dominate the empirical risk any more, and we end up with a robust predictor that fits most of the data reasonably well. The most famous robust penalty function is called the Huber penalty function. This is named after a Swiss statistician named Peter Huber, and it looks like this. It is a function which is defined in two pieces: first it's defined for small r, and then separately it's defined for large r. The transition value will be Alpha. For r with absolute value less than Alpha, it's just the quadratic function, r squared. And for r greater than Alpha, it's a linear function, and the slope and the intercept of that linear function are chosen so that it meets up very nicely with the quadratic at Alpha; here Alpha is 1. So here we've got a function which behaves exactly like the quadratic: when r is small, when the residual is small, it will penalize the residual exactly the same amount as the quadratic penalty would.
But when the residual gets large, it penalizes much less than the quadratic penalty would. And there's an explicit formula here for the Huber penalty: it's r squared when r is small, and a linear function of r when r is large. Let's look at what it does. Here we have a simple scalar ERM problem. We have a bunch of data points, and what's plotted here is u on the horizontal axis and v on the vertical axis, and we have the embedding that x is equal to (1, u) and y is equal to v. So we're going to use a linear predictor here: y hat is equal to Theta transpose (1, u). We're fitting a straight line to a bunch of data. And we have a whole bunch of data points sitting in the middle here, and then we have, out here and out here, some data points that are outliers. As a result, if we use the square penalty function, so we solve this problem using least squares, we end up with a predictor, which is that straight line there, which, as we can see, is doing rather poorly at fitting this data set. We can think about it as: these data points are pulling the line down, and these data points are pulling the line up, and we're twisting our predictor. And as a result, even though we've got many more data points that are not outliers than data points that are outliers, what matters is the square of the distance between the predicted value and the actual data point. And so it's the square of the distance between a data point down here and the predicted value up here: that distance squared is how much that data point is going to contribute to the empirical risk. These data points contribute their distance squared.
And so even though there are many more data points that are true data points, the distance of the outliers from the predictor is so large that by the time you square the magnitude of those corresponding residuals, the resulting terms in the empirical risk end up swamping the true loss terms in the empirical risk. Now for the Huber: if we solve the same problem, but instead of minimizing the empirical risk with a square loss, we minimize the empirical risk with a Huber loss, we end up with a predictor that looks like this. It's still being influenced by the outliers, but as we can see, it's influenced a lot less, and it's doing a reasonable job on the rest of the data as a result. Now you can take this further. You can say, okay, I'd like a penalty function which is even less sensitive to outliers than the Huber penalty. Here's a penalty that we might call the log Huber. Again, within a particular region, here Alpha is 1, so when the residual is between minus 1 and 1, we've got a quadratic function. And when the residual is larger than that in absolute value, we've got a logarithmic function. And we can choose the terms in that logarithmic function such that it joins up nicely with the quadratic, with the same slope at that point. Notice that even though it says log y squared there, of course log of y squared is just 2 log y. So we've got something which is quadratic for small y and logarithmic for large y. And that means that when we compare this with the quadratic loss at large r, we're really discounting very large residuals a lot: we have a diminishing penalty at large residuals. And so if we apply the log Huber loss function to our same data fitting problem, we get a fit which really passes straight through the middle of the true data points, and effectively ignores the outliers. We can see this in terms of the error histograms as well.
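Both two-piece penalties just described can be written explicitly, with alpha as the transition point (1 in the lecture's plots). The linear piece of the Huber follows directly from matching value and slope at alpha; the exact closed form of the logarithmic piece is my reconstruction of the function on the slide, chosen the same way.

```python
# The Huber penalty and the "log Huber" variant: both are quadratic for
# |r| <= alpha; beyond that, Huber grows linearly and log Huber grows
# logarithmically. Constants are chosen so each piece meets the quadratic
# with matching value and slope at |r| = alpha. The log form is my
# reconstruction, not a formula quoted from the lecture.
import numpy as np

def huber_penalty(r, alpha=1.0):
    r = np.asarray(r, dtype=float)
    return np.where(np.abs(r) <= alpha,
                    r ** 2,                              # quadratic piece
                    2 * alpha * np.abs(r) - alpha ** 2)  # linear piece

def log_huber_penalty(r, alpha=1.0):
    r = np.asarray(r, dtype=float)
    with np.errstate(divide="ignore"):  # log(0) branch is never selected
        logp = alpha ** 2 * (1 + np.log(r ** 2 / alpha ** 2))
    return np.where(np.abs(r) <= alpha, r ** 2, logp)

print(huber_penalty([0.5, 1.0, 3.0]))       # values 0.25, 1.0, 5.0
print(log_huber_penalty([0.5, 1.0, np.e]))  # values ~0.25, 1.0, 3.0
```

At r = 10 with alpha = 1, the square penalty is 100, the Huber penalty is 19, and the log Huber penalty is about 5.6, which is the "discounting very large residuals" behavior described above.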
Here, the top row of these plots shows the results from applying the square loss function. Here is our predictor. We can see we've got a histogram of training errors: these are the training residuals, which are spread out between minus 2 and 2. And then we see very similar things in test as we do in train, as we should. In this plot on the left, some of the data points are marked blue and some of them are marked red; the blue ones are the training points, and the red ones are the test points. Here's the Huber, the fit that results from using the Huber loss function. And we can see that what's happened is that the residuals have split up, right? There's a whole bunch of residuals that are now very close to zero; those are the true data points. And we can really clearly see the outliers. And the same thing we see over on test. When we apply the log Huber, we can see that the training residuals near zero get even smaller and get even closer to 0. We've only got two bars rather than four bars in the middle, and the same thing in test. So one could, from either the Huber or the log Huber, immediately identify the outliers. And we could simply say, these data points here we're going to remove from our dataset, and then we're going to fit again. And of course, when we fit again, having removed those data points, we can use any one of these loss functions, and it would be fine. It's also important to notice here that, when you look at plots like the ones on the left, it's tempting to think, well, why do we need a machine like this in order to be able to identify outliers? And the answer is: because here x and y are one-dimensional. If x is in a million dimensions and y is in 100 dimensions, then suddenly we can't make these kinds of plots anymore, and we can't see which data points are outliers and which ones are not.
Nonetheless, we can still plot the residuals, or at least the norms of the residuals and the histograms of the norms of the residuals, and be able to identify outliers that way. So the next topic I want to address is quantile regression. The idea here is that we're going to do ERM or regularized ERM, and we're going to use a loss function which is generated by the tilted penalty function. That's called quantile regression. And the intuition is that when Tau is greater than a half, that's going to make it worse to underestimate, and so we're going to end up with predictions that are high. When Tau is less than a half, it makes it worse to overestimate, so we're going to end up with predictions that are low. Now, we can be explicit as to exactly how high or low these predictions are. The way this works is that we're going to assume the predictor has the form Theta_1 plus g Theta tilde of x. So Theta_1 might correspond, say, to a linear predictor where x_1 is 1, and so the resulting prediction has a Theta_1 term in it. And we're going to assume Theta_1 is not regularized. The other components of Theta might be regularized, and g Theta tilde of x need not be a linear predictor; it's an arbitrary predictor. In other words, the regularizer r of Theta does not depend on Theta_1. So we might have ridge regression, where we're using the square regularizer, and so r of Theta is Theta_2 squared plus Theta_3 squared all the way up to Theta_d squared. Now, if you do this, then it turns out that on a training set, when you're using the regularized ERM predictor, the 1 minus Tau quantile of the residuals is 0. In other words, the fraction of data for which we overestimate is Tau. And that's why it's called quantile regression. If the predictor generalizes, then we would expect to see the same thing in the test data that we see in the training data.
Notice that in the training data, this is exact, apart from the possibility that there will be data points with repeated residual values. We can create predictors for many different Taus, which give many different quantile estimates for a particular x. Let's look at why we get this phenomenon. Remember what we saw in the section on constant predictors. We saw that if we minimize the empirical risk 1 on n, the sum from i is 1 up to n, of the tilted loss of Theta minus y_i, then the resulting Theta has the property that Theta is the Tau quantile of the y_i's. Now, here we're minimizing something different. We're minimizing 1 on n, the sum over i is 1 up to n, of p Tau of g Theta of x_i minus y_i. So that's not quite the same, but let's split it up by using the property of g Theta, which is that g Theta of x_i is Theta_1 plus g Theta tilde of x_i. Now we're going to minimize this over Theta, and that means our problem is: minimize over Theta, L of Theta. Now, Theta has multiple components, Theta_1 through Theta_d, and we can regard this minimization as equivalent to the minimum over Theta_1 of the minimum over the rest of Theta of L of Theta. And once we've done this inner part of the minimization, what's left is a minimization that looks exactly like the one we had before. It looks like this, where now we have Theta_1 plus a term, g Theta tilde of x_i minus y_i, which doesn't depend on Theta_1, right? As a result, Theta_1 is going to turn out to be the Tau quantile of y_i minus g Theta tilde of x_i. We can say that another way: the fraction of the data points for which y_i minus g Theta tilde of x_i is less than or equal to Theta_1 is around Tau. Subtracting Theta_1 from both sides gives us the residual, and so the fraction of i for which the residual is greater than or equal to 0 is around Tau. The fraction of data points for which we overestimate is around Tau. Here's an example.
Here we have again some random data, and we've solved ERM using a loss function which is a tilted loss function, and we see exactly what we expect to see. When Tau is 0.1, 90% of the data has a residual which is less than 0. When Tau is 0.9, 10% of the data has a residual which is less than 0. And when Tau is a half, about half of the data is on one side and half the data is on the other side. Let's look at an example of ERM where we're trying to fit data. Here we have a bunch of data points, again a one-dimensional u and a one-dimensional v, and we're going to embed by x equal to (1, u), so we'll have a constant term and a linear term in our predictor. We've got blue points here, which are training points, and red points here, which are test points. And we're going to fit a straight-line prediction model using the tilted loss, with three different values of Tau, and these are the resulting predictors. This is Tau is 0.9: the predictor for which our prediction at any given value of u, say I pick 0.3, is somewhere around one, which is clearly above most of the data. The predictor prefers to overestimate rather than to underestimate. When Tau is a half, we get a predictor that's somewhere in the middle, and when Tau is 0.1, we get a predictor that prefers to underestimate rather than to overestimate. Another way to view this data is to plot the predicted value and the true value for the data points on one plot. So here, on each of these three plots, we have the true value of v, or of y, on the horizontal axis, and the predicted value v-hat on the vertical axis. Now, in an ideal world, v-hat would be equal to v, our predictor would produce perfect predictions, but of course we know that's not very likely, particularly with this dataset, and so we see that v-hat and v are spread out.
So in the ideal world, v-hat and v would be a bunch of data points that live on the diagonal, and that would be the perfect predictor. If you've got a predictor that tends to overestimate, then the predictor tends to produce v-hats that are greater than v, and that means it tends to produce points that are in this half of the plane. And if you've got a predictor that prefers to underestimate, it tends to produce v-hats less than v, and we end up with points in this half of the plane. And if this is training data, we can look at our data and count, and we would find that 90% of the points are above the line here and 90% of the points are below the line over here. And with Tau equal to a half, we should see exactly half of the points on one side of the line and half of the points on the other side of the line. There's another way of plotting this data, and that is to look at the cumulative distribution of the residuals. For any given q, we plot q on the horizontal axis, and on the vertical axis we plot the fraction of data points for which the residual is less than q. In particular, I've got three plots here: one shows the residuals when we've computed the predictor using Tau as 0.1, another when we've computed the predictor using Tau as 0.5, and a third showing the residuals when Tau is 0.9. If we look at this plot at q equal to 0, the green predictor with Tau as a half shows us that exactly one-half of the training data points have a residual less than 0. When Tau is 0.1, we see exactly fraction 0.9, so exactly 90% of the data points have a residual less than 0. When Tau is 0.9, we see exactly 10% of the data points have a residual less than 0. And so here we're seeing that quantile regression is giving us exactly the quantiles we expected it to on the training data.
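The cumulative-distribution check just described is easy to compute. A sketch, using made-up residuals standing in for a tau = 0.9 fit (so 10% of residuals sit below zero); I use "less than or equal to" here, which at q = 0 matters only if there are ties at exactly zero:

```python
# Empirical CDF of the residuals: for a given q, the fraction of data
# points whose residual is <= q. At q = 0 this should land near 1 - tau
# for a tau quantile-regression fit. The residuals below are synthetic.
import numpy as np

def residual_cdf(residuals, q):
    """Fraction of data points with residual <= q."""
    return np.mean(np.asarray(residuals) <= q)

# 10 negative residuals and 90 positive ones, mimicking a tau = 0.9 fit
resid = np.concatenate([np.linspace(-1, -0.1, 10), np.linspace(0.1, 1, 90)])
print(residual_cdf(resid, 0.0))   # 0.1
```

Sweeping q over a grid and plotting `residual_cdf(resid, q)` against q reproduces the curves shown on the slide.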
On the test data, let's have a look at how well we did. At q equal to 0, we're getting almost exactly 0.1 when Tau is 0.9; we're getting 0.45, so 45% of the data have a residual less than 0, when Tau is a half; and we're getting 85% of the data with a residual less than zero when Tau is 0.1. So even on the test data, we're getting quantiles for the residual that match pretty much what we expected to get. Let's summarize. A loss function is often expressed as a penalty function of the residual, r equals y-hat minus y, and the penalty function expresses how much we object to different values of the residual. Different choices of loss function give us different ERM predictors, and in particular, there are two types that are very important. One is robust fitting: when you're fitting data with outliers, you use a penalty function which increases more slowly than the square penalty function, and that gives you robustness to the outliers. If we're interested in producing predictors that prefer to overestimate or underestimate the data, then you can do quantile regression. In quantile regression, you fit the data in such a way that you get a specific fraction of over-estimates, and you can choose that fraction by choosing Tau. |
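The whole quantile-regression recipe summarized above fits in a short script: minimize the average tilted penalty of the residuals over the predictor parameters. This sketch uses plain subgradient descent on a straight-line model; the data, step size, and iteration count are my own illustrative choices, not the lecture's code, and a linear-programming formulation would be the more careful way to solve this piecewise-linear problem.

```python
# Quantile regression sketch: fit y_hat = theta_1 + theta_2 * u by
# minimizing the mean tilted penalty of the residuals via subgradient
# descent. Data and hyperparameters are illustrative assumptions.
import numpy as np

def tilted(r, tau):
    return np.where(r < 0, -tau * r, (1 - tau) * r)

def quantile_fit(X, y, tau, steps=3000, lr=0.05):
    theta = np.zeros(X.shape[1])
    for _ in range(steps):
        r = X @ theta - y
        g = np.where(r < 0, -tau, 1 - tau)   # subgradient of the penalty
        theta -= lr * (X.T @ g) / len(y)
    return theta

rng = np.random.default_rng(1)
u = rng.uniform(-1, 1, 200)
y = u + 0.3 * rng.standard_normal(200)
X = np.column_stack([np.ones(200), u])

theta = quantile_fit(X, y, tau=0.9)
resid = X @ theta - y
print(np.mean(resid >= 0))   # fraction overestimated; should land near 0.9
```

Because tau = 0.9 makes underestimating nine times as costly as overestimating, the fitted line sits above most of the data, and the fraction of training points it overestimates comes out close to tau, which is the quantile property derived in the lecture.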
Stanford_EE104_Introduction_to_Machine_Learning_Full_Course | Stanford_EE104_Introduction_to_Machine_Learning_2020_Lecture_2_overview.txt | Hello and welcome to lecture 2, Overview and Examples. In this section we describe the general ideas and methods used in machine learning at a high level. We're going to go over all of these topics again in much more detail later in the course, so really don't worry if some of this seems abstract at this point, or if there are terms that you don't know. We're going to go over the mathematics of it, we're going to make precise all of the details, and spell it all out both in mathematics and in code, so that you'll know exactly what the details are. In this section, we're just going to describe in words the principles of machine learning, some of the ideas and some of the methods. So at a high level, the main idea of artificial intelligence is that you would like a computer to perform a very complicated task, something such as medical diagnosis. Here, we can imagine that we have a patient who has a medical history: a bunch of symptoms, a bunch of test results, and a bunch of data about properties of the patient, their height, their mass, their gender, and so on. We give all this information to a computer in some format, and we'd like the computer to come back and say: what's wrong with the patient is this. We can think about two approaches by which you might write such a program. One is called the knowledge-based approach. We write a program where we encode in the logic of the program the properties of the world and what they mean. So, for example, we would say something like: if the patient is of this age and of this gender, and is a smoker, and has this range of blood pressure, then... and so on and so on. These kinds of programs can work very well.
They usually take extensive development by experts in, in this case, medicine, or whatever the domain of expertise we're applying the program to, and it takes many years of work to achieve a working system that has enough flexibility to handle the sorts of cases one would like it to. Machine learning is very different in its approach. Instead of humans encoding the properties of the world in the program, in machine learning, we supply the program with historical data. That historical data consists of a large number of patient records. For each one of those records, we supply all of the medical information, the test results and so on, and we may also supply diagnoses given by a medical professional. From all of that data, the machine learning algorithm develops a predictor: it develops the ability to make predictions. And it uses that predictor to extrapolate, so that when it's given a new set of patient data, it can figure out what the diagnosis should be for that new patient. This class is about machine learning; it's about the second of those two types. So when we think about the task of machine learning, we divide up the process into two generic tasks. The first of those tasks is to build a model from the data. We have to take the data and convert it into a format which is easy for the computer to understand. Some of that data is rather easy to convert; for example, the height is simply a real number. With some of the data, we may want to do something more sophisticated. For example, if we have an X-ray image of a patient's arm, we could simply give the machine learning algorithm the image, the pixel data corresponding to that image, or we could encode that image, describing additional features of it in an automated way. We could, for example, describe the position of the center of mass of the bone.
We could describe, for example, the length and the width of various parts of the image. There are a whole bunch of things we could describe, and we would have to figure out what to choose so that they will be informative features of that data. In this class, there are essentially two things that we will do. We will spend some time discussing how to construct sensible features for many standard classes of data. And then we will switch into a different approach, where we discuss how to automatically generate features from data. It turns out that automatically generating features can be extremely powerful, and that's one of the key benefits of methods such as neural networks. The second part of generating a model is to figure out what class of models we would like to have. There are very simple model forms that you've probably seen before. One such model might be: fit a straight line to the data, the sort of thing you would do with linear regression. There we would need to describe the slope of the line and the intercept of the line, and that's essentially two numbers used to describe the model; those are parameter values that parameterize the model. Alternatively, we might describe the model in a much more complicated way. We might describe the model as a tree. We might describe the model as a neural network with weights and activation functions. These are all different types of models. They all have parameterizations in terms of numbers, and we would specify both the model form and what parameters are necessary. Those three things, mapping the data to features, choosing the model form, and choosing the parameter values in the model, comprise the first task in machine learning: building the model. The second task that comes up in machine learning is testing the model, or validating the model: proving that it works.
The key thing one has to do there is test the model on data that it didn't learn from when we made the model. We don't want to test the model against data that we've already used to generate the model, because that's cheating. We'll see in more detail exactly how one goes about testing the model to avoid this kind of cheating, and what kinds of tests one can do in order to get an effective analysis of whether or not we've done a good job with our modeling and our prediction. One of the things one might reasonably expect is that the process of building the model and testing the model is iterative. If we test the model and we find out that the model doesn't work, that the predictions it's making are not very good, we need a way to update the model. And that's again something that we're going to talk about later in the class. I've used the term model on this slide extensively, and I haven't really said what a model is. A model can mean several different things. One of the things it can be is simply a predictor: a function that takes input data and gives you a predicted output value. It takes the set of symptoms of the patient and gives you a predicted diagnosis. But there are other things it could be as well. A model might be a probability distribution: it gives you a distribution over patient records to describe which ones are likely to occur and which ones are not likely to occur. And we'll see other things that a model can be as well. There's a taxonomy of machine learning models that runs throughout the class and is really how most machine learning algorithms are categorized. The key categorization is that there are supervised models and there are unsupervised models. So in supervised learning, what we do is we are given some data, and we learn from it how to predict something. One way to think about this is as a prediction model.
Within the category of supervised learning, there are two subcategories: regression and classification. The only difference between these two is that in regression the quantity being predicted is a real number or a real vector, while in classification the quantity being predicted lies in a finite set. It might be true or false; it might be 1 of 10 different possible diseases; it might be 1 of 50 different animals. The other major category of models is unsupervised learning models. Here the objective is not prediction, but simply to create a model of the data. So unsupervised models just create a model of the data. They don't try to predict anything, but instead they give you the capability to tell whether or not an additional data record really belongs in the same category as all of the existing records that have been seen. They also give you the capability of generating new examples, generating synthetic data records that look like real data records. For example, one might make a data model that has learned all the paintings of Van Gogh, and then we would ask such a data model: please generate a new painting that looks like Van Gogh might have painted it. These kinds of things are quite possible with modern machine learning techniques. For supervised learning, we can make two different kinds of predictions. We can either make what's called a point estimate, where we predict a value: we say, given all this data, that looks like an elephant, or that looks like a dog. Or we give a probabilistic answer: we say, given all this data, the probability is 20% that it's a dog and 80% that it's an elephant. Of course, if you've got a probabilistic estimate, you might then use it to construct a point estimate. You might say, well, if the probability is greater than 50%, I'm going to decide that it's an elephant rather than a dog. One of the advantages of a probabilistic model, of course, is that it's more informative.
And as a result, you can make decisions that take into account your own tolerance of risk. These are broad categories; it's a broad taxonomy of models. There are many other categories of models, many other types of machine learning, some of which we'll touch on or see in examples in the class. And there are also things that don't really fit in any one of these categories, things which are between two categories, and we'll see some of those as well. Let's think about some examples. Here are some models, and we can think about what kind of model each one of these particular domains would require. So for example: we have the last 10 days of rainfall data, and we know the date, so we know what time of year it is, and we need to predict tomorrow's rainfall. How might we go about predicting tomorrow's rainfall? We might collect data from the last 10 years, and we might divide that data into 11-day chunks. The first 10 days we would use as data to index the record, and the last day would be data that the machine learning algorithm would try to learn as a function of the previous 10 days. Given many such 11-day-long examples, we would hope that the machine learning algorithm would be able to develop a way of mapping the previous 10 days to the next day. Of course, what it's predicting there is an amount of rain; it's predicting a number. So this would come under the category of supervised learning, and because it's predicting a number, it would come under the category of regression. In another example: determine from a photo of a face if the user is who she claims to be. What's the data that we're learning from? We've got a bunch of data, which is a bunch of different images of people's faces, and associated with each one of those faces, we have a name. Now, we give the machine learning predictor a new photo, and it has to find, in all of those faces that it's seen before, is this face one of those faces? And that would enable it to put a name to a face.
So again, this is supervised learning. This is not regression, this is classification, because there's only a finite number of possible people that the user could be. In another example, estimate the probability of 10 possible diagnoses given some patient data and test results. Again, this is classification. We're trying to figure out 1 of 10 possible answers, and we're given data and test results. So this is supervised learning; it's classification. Here's a different example. We'd like to cluster customers into 22 different groups with similar buying habits. Now, this is interesting because we don't have an idea of what possible buying habits are. What we have, for each customer, is a list of what they bought. But just describing somebody's buying habits as a list of what they bought, that could be anything. Think about all the different products that a store might sell. One person buys a particular subset of them and another person buys a different subset of them; two people might not have, apparently, much in common. And so we have to figure out, well, what makes lists of products similar? Is the right way to think about customers that some customers are really into breakfast cereal and others are really into bread? Or is it more natural to think that some customers like sweet things and some customers like savory things? Or maybe some customers like red things and some customers like green things? Here we're starting to see that these are features of the model that we would like our machine learning algorithm to tell us, because we don't know what they are ourselves. This is unsupervised learning. It's constructing a model which describes all of the customers. And that model has a structure which allows us to put the customers into groups; the machine learning algorithm itself tells us what the natural groupings are.
Here's another example: estimate the risk of an automobile accident at a location, given the hour and given the weather. This is supervised learning. We have a whole bunch of historical data about automobile accidents at that location, in different weather conditions at different times of day. And this is a probabilistic estimate that we're getting, rather than a point estimate. Another example is an anomaly detector. It rates how suspicious some data is. The sort of thing we'd like to do with this is, for example, look at network traffic. Does this network traffic indicate that somebody's trying to hack into our network? Or is it just somebody busy downloading some standard cat video? This is, again, unsupervised learning. We build a model for the data and we see, okay, most of the data we see looks like this. And then if we get a new piece of data, it's an anomaly if it doesn't look like the data we've already seen. That's what we mean by an anomaly. The last example is to build a simulator that generates fake new data that looks like the given data. Again, this is unsupervised learning. This is Van Gogh. We've shown a machine learning algorithm a whole bunch of paintings and it paints a new one. Of course, this has very specific applications beyond simply making nice pictures for us. For example, you'd like to make cars that can drive themselves. One of the problems with that is that it takes an awful lot of testing. And so if you have a way of generating a simulation of driving down the road, then you can use that to test your machine learning algorithm. Your machine learning algorithm that drives the car can be tested by another machine learning algorithm that generates fake road scenarios that look like real road scenarios. So let's turn now to how we might measure the performance of a model. This is the topic of performance metrics.
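The anomaly-detector idea above can be sketched with a deliberately tiny "model" of the seen data. The data, the mean-and-spread model, and the threshold of 3 spreads are all illustrative assumptions, not the lecture's method; real detectors model the data far more richly.

```python
# A toy anomaly detector: model the seen data by its mean and spread,
# and call a new record suspicious if it is far from what we've seen.
import statistics

seen = [10.2, 9.8, 10.1, 10.0, 9.9, 10.3]   # made-up "normal" traffic
mu, sigma = statistics.mean(seen), statistics.stdev(seen)

def suspicion(x):
    # How many spreads away from normal this record is.
    return abs(x - mu) / sigma

print(suspicion(10.1) < 3.0)  # True: looks like the data we've seen
print(suspicion(25.0) > 3.0)  # True: an anomaly
```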
So for example, if we have a model which is predicting a real number (this is supervised learning, this is regression), then we might measure how far that real number is from the true real number. If we're trying to predict the amount of rainfall, then we might predict five centimeters, and the real amount of rainfall, when it actually happens, may turn out to be 3.5. And so we might take the mean square error in the amount of rainfall that we predicted, and we'd like to keep that small. It's a very reasonable quantity. Of course, the error is squared, which means that both positive errors and negative errors are counted. If we just took the mean prediction error, then if we had an equal amount of positive errors and negative errors, it might look like we've got a very small error. So we take the mean square. Very often we take the RMS, the root mean square prediction error, so that it's measured in the same units as the quantity being predicted. RMS is a very nice error measure. It has lots of nice physical correspondences to things that we know, such as energy, and sometimes that's relevant. For classification, we're trying to predict something which has only a few different possible outcomes. So if we're trying to tell the difference between dogs and cats in images, we can measure the error rate, how often we're wrong or how often we're right; we could try to minimize how often we're wrong. Another example, if we have probabilistic models: a probabilistic model is a model that gives us the probability of different possible outcomes. It gives us a probability distribution. For those of you who've taken probability before, you'll know that one of the things you're interested in about probability distributions is a thing called the likelihood, and the log-likelihood in particular.
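The cancellation problem with the signed mean error, and why squaring fixes it, can be seen numerically. The rainfall values here are made up for illustration.

```python
# Why we square: with the mean (signed) prediction error, positive and
# negative errors cancel; the mean-square error does not have that problem.
import math

y_true = [3.5, 4.0, 2.0]
y_hat  = [5.0, 2.5, 2.0]   # one over-prediction, one under-prediction

mean_err = sum(yh - y for yh, y in zip(y_hat, y_true)) / len(y_true)
mse      = sum((yh - y) ** 2 for yh, y in zip(y_hat, y_true)) / len(y_true)
rms      = math.sqrt(mse)

print(mean_err)  # 0.0: the errors cancel, which looks deceptively perfect
print(rms)       # about 1.22, in the same units as the rainfall itself
```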
That's a way of measuring how likely it was that the data that you saw could have actually been generated by the probability model that you claim generates it. We're going to say much more about that in a few weeks; don't worry about it right now if you've not heard of likelihood. So some examples: you might have a predictor that predicts tomorrow's maximum temperature, and the way you might measure its performance is by RMS error. And if the RMS error is 1.3 degrees centigrade, we may or may not be happy with that. You might have a classifier that predicts the topic of a newspaper article from a set of 50 choices, with an error rate of 5%. So when we do machine learning, we're always doing machine learning from data. And usually, we'd like as much data as possible. And we also have in mind that we need to test the machine learning model, the predictor, against data that we've never seen before. And so the standard way of doing this is that when one collects a whole bunch of data from experiments or from the field, one cuts it into two parts and puts one part aside, and never looks at that part. The first part of the data we use to train the model. We use it to develop the predictor. And once we've developed the predictor, taken it to the point where we're happy with it and we think it's going to work well, we test it on this reserved piece of data, this validation dataset, to see how well it does on data that it's never seen before. The data that one keeps aside is called the test set or the validation set. So now we have two performance metrics: how well did it perform on the training set, and how well does it perform on the test set? Sometimes one can develop a model that performs really well on the training set, but very poorly on the test set.
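The cut-and-set-aside step above can be sketched as a small helper. The `split_data` name, the 80-20 default, and the fixed seed are assumptions for illustration; what matters is that the reserved part is never used for training.

```python
# Randomly reserve part of the data as a held-out test (validation) set.
import random

def split_data(records, test_frac=0.2, seed=0):
    rng = random.Random(seed)       # fixed seed for reproducibility
    shuffled = records[:]
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_frac)
    return shuffled[n_test:], shuffled[:n_test]   # (train, test)

records = list(range(100))          # stand-in for 100 data records
train, test = split_data(records)
print(len(train), len(test))        # 80 20
```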
An easy way of doing this is that if you're training a system to tell the difference between dogs and cats, it could just remember all of the images that it's seen and tell you, oh, that's one you told me was a cat, and that's one you told me was a dog. And that doesn't help it at all when it sees a new image that isn't any of the images it's seen before. In general, if you've got a model that does well on the training set but poorly on the test set, you say it's overfit. It's learned properties of the training data that are specific to that particular data set, rather than general properties of the underlying phenomena that you're trying to learn. On the other hand, if you've developed the model using the training set and it works on the test set, well, then you might say, well, it worked on that data, which it had never seen before; maybe it will work on some other data too that it's never seen before. So there are a number of different ways one might choose a model. One of the common methods is to say, well, I'm going to choose a class of models. I'm going to choose a model structure, or perhaps a form of model or a type of model. I'm going to say I'm only interested in models which are linear regression models. Or I'm only interested in models which are convolutional neural networks, or I'm only interested in models that are trees. Those models are described by numbers, by parameters. What you do is you say, okay, I'm going to define a loss function, a figure of merit, which tells me how well the model performs on one single data point. It might be, for example, the square of the error between the predicted amount of rainfall and the actual amount of rainfall. And then we're going to choose the parameter values to enter into the model by varying them, and finding the ones that minimize the average loss over all of the training data.
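The scheme just described (pick a model form, define a per-example loss, choose parameters minimizing the average loss over the training data) can be sketched numerically. The straight-line model, the made-up data, and the brute-force grid search over parameters are illustrative assumptions; in practice one would use a proper optimizer or a closed-form solution.

```python
# Fit y_hat = theta1 + theta2 * x by minimizing average squared loss
# over the training data, using a tiny grid search for illustration.

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.1, 2.9, 5.2, 6.8]   # made-up training data, roughly y = 2x + 1

def avg_loss(theta1, theta2):
    # Average per-example loss: squared prediction error.
    return sum((theta1 + theta2 * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

candidates = [(t1 / 10, t2 / 10) for t1 in range(30) for t2 in range(30)]
best = min(candidates, key=lambda p: avg_loss(*p))
print(best)  # the grid point with the smallest average training loss
```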
This is a very general scheme and it's extremely widely used in machine learning; it's called empirical risk minimization. You can use this to fit all sorts of models, and all of the models we've talked about so far in this section can be fit using empirical risk minimization. Let's look at this diagnosis example. Here the goal is to predict if a patient has a disease based on whether or not he or she exhibits 10 symptoms. The historical data consists of a large number of patient records. Each record contains 10 Booleans specifying the presence or absence of the 10 symptoms, and it contains an additional Boolean that specifies whether that particular patient had that particular disease. We have a very large number of these records; each record has 11 Booleans. The machine learning algorithm ingests that data. It learns for a while. And then it comes out with a predictor. What that predictor does for us is this: we take the predictor and give it 10 new Booleans, corresponding to some other patient who would like to know whether or not they have that particular disease. And it returns for us a single Boolean, which is the prediction of whether or not that patient has the disease. This is supervised learning; this is a classifier. We're predicting an outcome that takes only two possible values. And we're going to judge the model by its error rate, because it can only give a true or false answer. And we'll do that by having a separate set of test data, a separate collection of records with 11 Booleans, which we didn't use to generate the model. A probabilistic model would return a probability that the patient had the disease, and not just a Boolean. And we might immediately think about how to make this more sophisticated. We wouldn't just give it 10 Booleans describing symptoms. We'd also give it other patient data: test results, demographic facts, information about where the patient lives, what the patient's habits are, all sorts of things.
And you might give it random stuff: the patient's favorite color, the patient's shoe size, the patient's favorite movies. Because you don't necessarily know what's going to make the predictions better. What you'd hope is that your machine learning algorithm could distinguish for you what's meaningful and what's not. And we will see that machine learning algorithms can in fact do that. They can tell you that that data was actually uninformative. Now here's another example. This is a classic example. This is called the MNIST, M-N-I-S-T, data set. It's a collection of 60,000 images, and these are all handwritten digits. You can see in these images there's quite a lot of variation in the way different people write the same digit. Our prediction algorithm would take all of the 60,000 images and learn a classifier. Given a new 28-by-28-pixel image, it would give us back a prediction as to what digit was in that image. So this is a very well used training set and test set. It's considered easy by modern standards; you can do very well with very simple algorithms, and we will learn how to do that in this class. We should also talk a little bit about data. Because machine learning is such a widely used discipline at the moment, there's been a huge amount of effort to collect data: partly by the research community and industry, to develop better machine learning algorithms, and partly because people are trying to apply machine learning in very specific disciplines in order to develop useful products, useful tools. So one of them is a thing called Kaggle. Kaggle is a website; it began as a startup and was acquired by Google a few years ago. What they do is collect datasets, and they run competitions to see how well people can develop predictors.
And they do the job of holding validation sets in escrow so that people can't cheat. You can sign up for Kaggle, just give them your e-mail address. You can download a dataset, and you can see how well you can do at prediction on the training set. And then you can submit your predictor to them to see how well you do on the validation set, and compare your performance against the state of the art. So Kaggle has many different datasets. In this class, we will set homework, and quite a few of the datasets will come from Kaggle; it's a very useful tool for learning and for developing machine learning algorithms. The ImageNet dataset is a very popular dataset. It contains 14 million images of a variety of different things, in many different categories, and it's used for classification. Google Street View has a whole bunch of house number images, 600,000 images of digits. This is much harder than the MNIST dataset, because these are all photographs of the numbers on the front of houses. And you're probably aware that those things come in all sorts of strange formats: curly numerals, strangely shaped tiled images, pretty colors, all sorts of strange borders around them. So for identifying those kinds of images, it's much less structured, much less clean data one has to work with, and one can still do very well with the methods that we'll show you in this class. As you know, many people are working on self-driving cars, and we're fortunate in that Lyft has open-sourced some of their data. That's a source of data that one can use to develop very specifically focused algorithms for perception for self-driving cars. In particular, one would like to identify the things one sees on the road: pedestrians, bicycles, other cars, trucks, road signs, traffic lights, these kinds of things.
That data is interesting because it doesn't consist only of images, and not even of single images: it consists of multiple different sensors observing the same thing. On a Lyft self-driving car, I think you have six cameras and three LIDARs. The LIDAR images give you range measurements as pixel data, and of course, the cameras give you color measurements as pixel data. And so one can develop machine learning algorithms that use both types of data simultaneously. We can also develop machine learning algorithms that take advantage of the fact that the image one sees now has something to do with the image one sees a tenth of a second from now, as the car moves down the road. And so there's a lot of scope there for developing things that are very much subjects of current research, but also very likely to be used in practice. And there are many other large datasets online now. Pick any topic that's on your mind, and you can Google for datasets. Because of the surge in interest in machine learning, many datasets have become public. And so one has data which would previously have been hard to get, held by specific companies that kept their data private. Now those datasets are being made public, because everybody benefits if we develop better algorithms to understand such data. I should say a word about software. Much effort has gone into developing machine learning software. Here I list Torch, Keras, Theano, TensorFlow, scikit-learn, Spark MLlib, and Flux. These are all different packages for doing machine learning, and they're in different languages: R, MATLAB, Python, Java, Julia. We're going to be using Flux in Julia. Many of these packages support GPU acceleration. That's useful because it turns out that when you're trying to learn image classifiers, for example, that takes quite a lot of processing power.
And so you can gain a lot of benefit and reduced training time if you're using a package that can take advantage of GPU acceleration, such as Flux. Also, many of these packages will allow you to run training in the cloud. So if you don't have a very powerful computer at home, you can run training in the cloud. As you'll see in this class, even though there are these complicated, very extensive software packages, in fact you can write the software to do machine learning yourself. We will show you how to do it, so that if you wanted to, you could write everything yourself from scratch. We don't recommend you do so. Lots of time has gone into making these packages fast, and code you write will probably not be that fast. In fact, the general philosophy of this class is to have you write as little code as possible. The class is really about how machine learning works, how to use these tools, how to formulate machine learning problems, how to think about machine learning problems, how to appropriately featurize datasets, how to think about what types of model one should use. It's much more about those things than it is about how to effectively implement the algorithms. We will teach you how to effectively implement the algorithms, but really the purpose of that is to demystify the way these packages work, rather than so that you should go and do it yourself. |
Stanford_EE104_Introduction_to_Machine_Learning_Full_Course | Stanford_EE104_Introduction_to_Machine_Learning_2020_Lecture_4_validation.txt | Welcome to the validation section. So we've seen so far several different predictors which, given an x, can predict for us what the corresponding value of y should be. And I want to talk now about how we're going to evaluate different predictors. We have seen k-nearest neighbor predictors, we've seen tree-based predictors, we've seen neural network predictors, we've seen linear predictors. And so somebody has come along and said, well, here's my favorite predictor which I designed. And you've got to decide, well, should I use that predictor or should I use some other predictor? So one of the key ideas is that we need what's called a performance metric, and it's a measure of how large prediction errors are. And so what we can do is we can say, well, we've got a dataset, x_1 through x_n and y_1 through y_n, and for each one of those data points, we can evaluate the predictor g at the ith data point, x_i, to get a prediction y hat_i. And I ask the question, how close is y hat_i to y_i? The performance metric is going to be the measure of how close y hat_i is to y_i. And normally it's designed so that the smaller the metric, the better the prediction performance. It's an error measure; some people call it the prediction performance metric, or the prediction error, as a result. So there are a few which are very commonly used. The first one is the mean square error. So this here is the two-norm; see the subscript 2 there. What it means is the following: if you give me a vector x, then its 2-norm is the square root of the sum from i equals 1 up to n of x_i squared. The subscript 2 there indicates which particular norm we are using. There are other norms that we might use, with different subscripts.
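The norm definitions just given can be written out directly; this is a small sketch, with the function names `norm2` and `norm1` being my own labels for the 2-norm and 1-norm defined in the lecture.

```python
# The 2-norm is the square root of the sum of squares; the 1-norm is
# the sum of absolute values of the components.
import math

def norm2(x):
    return math.sqrt(sum(xi ** 2 for xi in x))

def norm1(x):
    return sum(abs(xi) for xi in x)

print(norm2([3.0, 4.0]))   # 5.0
print(norm1([3.0, -4.0]))  # 7.0
```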
So for example, we might use the 1-norm, which is just the absolute value of each of the components of x, added up. And we'll see a few other norms in this class, as well as a few other performance metrics. So if y_i is a vector, an m-dimensional vector, then if you give me a particular y_i and a particular corresponding prediction y hat_i, I can compute the 2-norm of their difference as a measure of the error between y hat_i and y_i. And then I can average those up: sum them and divide by n. And that would be a prediction performance metric. And of course, if y_i is just a scalar, so m is 1, then that's just the mean square error, the mean squared difference between y_i and y hat_i. Very often, instead of working with the mean square error, we work with the root mean square error. That is the square root of 1 on n multiplied by the sum from i is 1 to n of y hat_i minus y_i squared, which is convenient because it has the same units as y. Another very useful error metric, or performance metric, is the mean absolute error. That's 1 on n times the sum from i is 1 to n of the absolute value of y hat_i minus y_i. If you have scalar positive y's, then we define the mean fractional error as the average of the absolute value of y hat_i minus y_i divided by the minimum of y hat_i and y_i. And this is something like an average percentage error. If y hat_i is greater than y_i, then the absolute value of y hat_i minus y_i, divided by the minimum of y hat_i and y_i, is the percentage by which y hat_i is greater than y_i. There are many other common performance metrics, the median error for example. These are the ones very commonly used, and we'll see many others in this class. So now we've got a prediction performance metric that allows us to compare different predictors on a particular dataset. So, for example, we might conclude that k-nearest neighbors with k is 7 does better than k-nearest neighbors with k is 12.
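The scalar metrics just defined can be sketched as follows, assuming (as the lecture requires for the fractional error) that the y's are positive scalars. The data values are illustrative.

```python
# RMS error, mean absolute error, and mean fractional error, exactly
# as defined in the lecture, for scalar predictions.
import math

def rms_error(y_hat, y):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y_hat, y)) / len(y))

def mean_abs_error(y_hat, y):
    return sum(abs(a - b) for a, b in zip(y_hat, y)) / len(y)

def mean_frac_error(y_hat, y):
    # Requires positive values; something like an average percentage error.
    return sum(abs(a - b) / min(a, b) for a, b in zip(y_hat, y)) / len(y)

y     = [2.0, 4.0]
y_hat = [1.0, 6.0]
print(rms_error(y_hat, y))        # sqrt((1 + 4)/2), about 1.58
print(mean_abs_error(y_hat, y))   # (1 + 2)/2 = 1.5
print(mean_frac_error(y_hat, y))  # (1/1 + 2/4)/2 = 0.75
```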
We might conclude that a particular neural network does better than a particular linear model. Several things to notice about these kinds of conclusions: they depend on several things. First of all, they depend on the performance metric. It is quite reasonable and quite common that you can have a predictor that does better on one performance metric, but does worse on a different performance metric. So the neural network may do better than the linear model when using the RMS error, but may do worse than the linear model when using the mean absolute error. So you've got to choose the performance metric carefully, so that it corresponds to something that you actually care about. Another important thing to notice about performance metrics, and using them for analysis of predictors, is that they are predictor agnostic. And that's a very good thing. It means that when we evaluate the performance of a predictor, we're not evaluating it using the same software that was used to develop it. We're not evaluating it on the basis of the properties of the learning algorithm. We're evaluating it on some objective notion of performance, and anybody's predictor could be evaluated using the same performance measure. So if somebody else has a method of generating a predictor, even if you don't know what that method is, you can evaluate that predictor by evaluating the quality of its predictions. And all that should matter is whether or not its predictions are better than the predictions given by some other predictor. Another thing to notice about comparing predictors using a performance metric is that it depends on the dataset. And so all you can do is use the performance metric to evaluate the performance of a predictor on a particular dataset. And that leads us to the question of generalization. Suppose we have a predictor and it performs well on the dataset that we used to train it.
Can we conclude that it's going to perform well on a dataset that it's never seen before, unseen data? And if it does, that's called generalization. Generalization is the ability of a predictor to perform well on unseen data. And unseen here means that the data was not used to create the prediction model. It was not part of the learning dataset. The person who developed the predictor never looked at that data when they were developing the predictor. So we'd like to answer the question of when we can infer that good performance of a predictor on one dataset implies that the predictor will perform well on a second dataset. And in order to do that, one would need to make some probabilistic assumptions. For example, one might say that both sets of data are samples from some underlying probability distribution. With some kind of probabilistic assumptions like that, we might well be able to conclude that performance on one dataset says something about performance on another dataset. For example, if we have two sets of data, both sampled from the same distribution, we might reasonably conclude that the mean of the first set of data and the mean of the second set of data should be very close, provided we have enough data. So there is a framework for doing this kind of analysis; we will not discuss it in this class, it's a more advanced topic. But instead what we will see is some practical methods, some methods for actually assessing whether or not the predictor that you've got actually generalizes. The fundamental thing to do, of course, if you want to know whether your predictor generalizes to a new set of data, is to try it out on a new set of data. And so we think about having two sets of data. One we call the training data, or the in-sample data, and that's the dataset that we use in order to construct the predictor.
And then we have another set of data, which we call the out-of-sample data. That's the unseen data on which we are going to test the performance of the predictor. And if the predictor performs well on this unseen data, we say the predictor generalizes: it makes good predictions on data it has never seen. If it doesn't, we say it fails to generalize; it's overfit. This terminology, overfit, is quite evocative, and we will have much more to say about it. Okay, here's a simple example. This is data downloaded from the federal government. This plot shows the number of vehicle miles traveled in each year, over a range of years from 1970 to 2005. Vehicle miles traveled is the total number of miles traveled over the year by all of the vehicles. On the left here, we have a subset of the data. We have 12 data points; I will just highlight them like that. And what we do with those 12 data points is we fit a straight-line predictor: y hat is theta_1 plus theta_2 times x, where y hat is our prediction of the number of vehicle miles traveled, and x is the year. And we'll just choose those parameters using least squares. That gives us this nice straight-line fit here, which goes all the way up there. And then we can say, well, let's look at the rest of the data, the data that we held aside; we fit this straight line using only the 12 blue sample points. We've got another 14 data points, these 14 data points right here, and we can see that those 14 data points actually lie rather close to the straight-line fit. And so the predictor actually does generalize to those additional points. Of course, in this simple example, you can plot it all and see it all by eye. It's very simple in two dimensions, and of course, we'd like to be able to work with very large datasets and a very large number of dimensions, where it's no longer possible to do such simple plots.
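The least-squares straight-line fit described above can be sketched with the standard closed-form solution for one variable. The data here is made up (the real example uses the federal vehicle-miles-traveled data), and the variable names are my own.

```python
# Fit y_hat = theta1 + theta2 * x by least squares, closed form for
# a single input variable.

xs = [1970, 1971, 1972, 1973, 1974]
ys = [1.1, 1.2, 1.3, 1.4, 1.5]      # made-up stand-in, e.g. trillions of miles

n = len(xs)
xbar = sum(xs) / n
ybar = sum(ys) / n
theta2 = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
         sum((x - xbar) ** 2 for x in xs)
theta1 = ybar - theta2 * xbar

predict = lambda x: theta1 + theta2 * x
print(round(predict(1975), 2))  # 1.6: extrapolates the linear trend
```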
And that's why we need to use the tool of a prediction performance metric in order to evaluate how well the predictor is doing on these different datasets. And there's a specific approach to doing this; in fact, there are several specific approaches to doing this kind of analysis. The first one is called out-of-sample validation, and it goes like this. Very often we don't have datasets which naturally fall into two categories, data that we use to do the training and data that we use to test. And so what we do is we split the data. We'll download some dataset, it may be a whole bunch of images, and we'd like to develop a predictor that can look at an image and tell us whether that's a cat or that's a dog. We take this data and we divide it into two datasets, the training set and a test set. And often we do that randomly, so we'll use an 80-20 split, where 80% of the points we use for training and 20% we use for test. Now, we might use 90-10; it doesn't really matter very much. The exact split is not very important, and the results are quite insensitive to it. What we do with the training set is we train the predictor. We use it to choose the predictor. And then the test set is used to validate the predictor, to evaluate how well the predictor performs using the particular performance metric we've decided upon. And that's the result. You have an honest test, an honest simulation of how the predictor works on unseen data. And we hope that because it worked well on data that was not used to train it, it will also work well on other unseen data that we haven't seen yet at all. And this hope is founded on the assumption that all of this data looks kind of the same: the data that we used for training, the data that we're using for testing, and future unseen data are all kind of similar. Very often we do this split in a random way, for two reasons.
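The out-of-sample bookkeeping above can be sketched end to end. The "model" here is deliberately trivial (predict the mean of the training targets), and the synthetic data, seed, and split sizes are all assumptions; the point is only that training uses one part of the data and the reported metric comes from the other.

```python
# Randomly split, train on the training set only, then report the
# performance metric on both sets.
import math, random

data = [(x, 2.0 * x + random.Random(x).uniform(-0.5, 0.5)) for x in range(30)]
rng = random.Random(0)
rng.shuffle(data)
test, train = data[:6], data[6:]   # roughly an 80-20 split of 30 points

ybar = sum(y for _, y in train) / len(train)   # "train" a constant model

def rms(points):
    return math.sqrt(sum((ybar - y) ** 2 for _, y in points) / len(points))

print(rms(train), rms(test))  # the test-set number is the one that matters
```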
One is that we want to get the training right. If I'm, for example, outside taking photographs of cats and dogs, hoping to develop a predictor that can tell the difference between them, well, during the day the sun might go down, the clouds might come over, it might become quite overcast; the conditions can change quite significantly. And so if I simply take the last 20% of the images, then I could end up with a test set which is all cloudy images, and a training set which is all sunny images. And that will both upset my training, in that it could well be that my predictor will not learn how to distinguish cats and dogs under cloudy conditions, but only under sunny conditions; and it might upset my test, in that all I will be doing is testing the predictor in cloudy conditions, and I won't measure the performance in sunny conditions. So the fact that you make this split random helps to avoid these kinds of biases and errors, which are a result of a particular ordering of the data in the dataset. Now, what matters is the performance on the test set. The training set performance doesn't really matter at all. We expect it to be good, and in fact, we usually expect the training performance to be better than the test performance. Usually the test performance is only a little bit worse than the training performance. Sometimes the test performance is still okay, but actually quite a lot worse than the training performance. That's fine. If your test performance is okay, that's what matters. You can be in a situation where the training performance is perfect. For example, the one-nearest-neighbor predictor. If you give it an element of the training set, one of the x_i's, it predicts the y_i corresponding to that x_i, because that is the closest x to x_i. And so it gets zero error on the training set.
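The one-nearest-neighbor behavior just described can be checked directly: on any point that is itself in the training set, it returns that point's own label, so the training error is zero by construction. This toy one-dimensional version, with the `one_nn` name and the made-up labels, is an illustrative sketch.

```python
# 1-nearest neighbor: predict the label of the closest training point.
def one_nn(x, train_x, train_y):
    i = min(range(len(train_x)), key=lambda j: abs(train_x[j] - x))
    return train_y[i]

train_x = [0.0, 1.0, 2.0, 3.0]
train_y = ["a", "b", "a", "b"]

train_errors = sum(one_nn(x, train_x, train_y) != y
                   for x, y in zip(train_x, train_y))
print(train_errors)  # 0: perfect on the training set, by construction
```

Zero training error here says nothing about how the predictor will do on points it has never seen, which is exactly why the test-set number is the one that matters.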
But it still makes useful predictions even for, uh, data that's not in the training set. But the training error is no indicator of how well it will do on the test set. So let's look at how we interpret validation results. We have two measurements. The first is the performance on the test set, that's really what matters. And the second is the performance on the training set, that doesn't matter. And so we'll get four numbers. We'll plot these in a table here. Here in the first column, we have a small training error. And in the second column, we have a large training error. And in the first row, we have small test error, and in the second one, we have a large test error. So this top left entry here is really good news. It means we got a small test error and small training error. So it performs well when we did our training, and then when we tried it out on data we haven't seen before, it did well there too. And so it generalizes. It is possible to be over here, in the case where we have small test error but large training error. That is luck, or perhaps we've cheated and there's some kind of fraud involved. Um, it doesn't happen very often, because when we do the training and see a large training error, typically we don't even bother to try it against unseen data. If we can't get it to work on data that we can see, we're probably not going to get it to work on data that we can't see. Um, so this doesn't happen very often. Uh, if we have a large test error, the bottom row, well, then there's two possible explanations. One is where, uh, we have small training error but large test error. Um, this is the failure to generalize. We thought we'd do well with a good predictor, it seemed to do well when we were training it. Um, but then we tested it on data we've never seen before and it didn't do well. Um, we would say such a predictor is over-fit.
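The one-nearest-neighbor behavior mentioned above (zero training error, yet nonzero test error) can be demonstrated with a small sketch. This is illustrative code with made-up data, not the lecture's implementation:

```python
import numpy as np

def knn_predict(X_train, y_train, X_query, k=1):
    """k-nearest-neighbor regression: average the y's of the k closest x's."""
    preds = []
    for xq in X_query:
        d = np.abs(X_train - xq)            # distances (scalar features)
        nearest = np.argsort(d)[:k]
        preds.append(y_train[nearest].mean())
    return np.array(preds)

rng = np.random.default_rng(1)
X_train = rng.uniform(0, 1, 20)
y_train = np.sin(2 * np.pi * X_train) + 0.1 * rng.normal(size=20)
X_test = rng.uniform(0, 1, 10)
y_test = np.sin(2 * np.pi * X_test) + 0.1 * rng.normal(size=10)

rms = lambda e: np.sqrt(np.mean(e ** 2))
train_err = rms(knn_predict(X_train, y_train, X_train, k=1) - y_train)
test_err = rms(knn_predict(X_train, y_train, X_test, k=1) - y_test)
print(train_err)  # 0.0: each training point is its own nearest neighbor
print(test_err)   # nonzero on unseen data
```

The zero training error here tells us nothing about the test error, which is exactly the point made above.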
And then the worst case of all, we have large training error and large test error. It generalizes okay. You know, it does the same thing on the test data that it does on the training data. But it doesn't do very well on the training data. So how do we choose between different candidate predictors? Suppose two people come along, and they're both experts in machine learning, and they come to us with their predictors, but they've developed them using completely different methods. We don't even get to see their code. We can do validation. We take their predictors, and we try their predictors on data that they've never seen before. And typically, what we're gonna do is we're gonna choose the predictor amongst all of those candidates which has the smallest test error. That's not always what we do, because sometimes we're willing to back off a little bit on that requirement. We might accept a little bit larger test error if that gives us a particularly simple predictor. Uh, there are good reasons for this, and we will have more to say on this later. And let's look at a particular example. Here is simply a one-dimensional example. Um, we have, uh, a dataset with 30 data points, um, 20 of which we'll use for training, and 10 of which we'll use as the test set. Here on this plot, we see 20 blue points, those are the training set, and 10 red points. So here's the performance of two of our favorite classes of predictors on that dataset. Here on the top row we have the k-nearest neighbor predictors when k is 1, 2, and 3. And on the bottom row we have the affine predictor, the quadratic predictor, and the cubic predictor. We can see right here that in the top left plot the k-nearest neighbor predictor when k is 1 does perfectly on the training data. Let me just highlight that. It passes perfectly through the blue data points. If I look at the k is equal to 2 plot, well, that doesn't pass perfectly through all the data points.
But it's a bit smoother than the k is equal to 1. And k is equal to 3 also doesn't pass through all the data points, but it's a little bit smoother than k is equal to 2. Now the affine predictor here, that's the best straight line fit. Here's the quadratic predictor, and here's the cubic predictor. That's just looking at these. Now, let's look at the RMS performance error. So we're using the RMS error here as the performance metric. And we can say, well, which is the best prediction model? Some things to highlight. Well, first of all, that number right there is 0: the k-nearest neighbors predictor when k is equal to 1 does perfectly on the training data. When we increase k to 2 and to 3, the performance on the training data actually gets worse. Um, and on the test data it changes a little bit. It goes from 0.1 when k is 1 to 0.08 when k is 2, back up to 0.1 when k is 3. When we look at the polynomial predictors, we see something which is quite interesting. With an affine predictor on the training data, we have 0.08, and then when we go to the quadratic predictor, the error decreases. This has to happen, because we're optimizing over the best possible quadratic predictor. And of course, an affine predictor is a quadratic predictor, it's just a very special one. And so the quadratic predictor is comparing all possible quadratics, including all the affines, and so it has to do at least as well as the best affine predictor. So when we go from affine to quadratic, the performance has to get better. And the same when we go from quadratic to cubic. Every quadratic predictor is a cubic predictor, and so the best cubic predictor must do at least as well as the best quadratic predictor. And so the training error has to decrease as we go from an affine to quadratic to cubic. Are those statements true for the test data? We do see that as we go from affine to quadratic to cubic, the performance on the test data does get better.
But there's no guarantee that that would happen, and it doesn't have to happen. So I have all of these predictors, which one has the best test error? It's this one right here. That's noticeably better. That's the cubic predictor. Its test error is 0.025. The next comparable ones are the quadratic predictor and the two-nearest-neighbor predictor, but those are substantially worse at 0.08. So we might decide to, uh, go for the cubic predictor, which does well in test and it does well in train. And so it both performs well and generalizes well. We have the k-nearest neighbor with k equal to 1, which performs well on the train, but it doesn't generalize so well. Let's look more generally at polynomial fits. So suppose we have a scalar u and scalar v as our raw data. And then we use a feature mapping where we construct x's from u's by x is equal to Phi of u. And for each u, we construct a d-dimensional vector consisting of the powers of u: 1, u, u squared, up to u to the power of d minus 1. And then we're gonna construct a linear predictor, g of x equals theta transpose x, a linear combination of those d monomials, a polynomial of degree d minus 1. And we're gonna choose the Theta by least squares. And here's the kind of thing that we see. Here, we have 60 data points and we are looking at the predictors with d is equal to 6 on the left, in the middle d is equal to 12, and on the right, d is equal to 14. So these are degree 5, degree 11, and degree 13 polynomial fits of the data. Now, if we look at the RMS error, the one that has the smallest RMS error is the degree 13 fit. And that's not surprising at all, for the same reason the cubic did better than the quadratic in the previous example. The degree 13 has to do better than the degree 11, which has to do better than the degree 5 against the training data. Now, we'll take those same 60 data points and we'll split the data points into 48 training points and 12 test points.
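The feature mapping Phi(u) = (1, u, ..., u^(d-1)), the least squares fit of Theta, and the 48/12 degree sweep can all be sketched as follows. This is hypothetical code on synthetic data, not the lecture's own example:

```python
import numpy as np

def phi(u, d):
    """Feature map: x = (1, u, u^2, ..., u^(d-1)) for each scalar u."""
    return np.vander(np.asarray(u), N=d, increasing=True)

def fit_poly(u, v, d):
    """Choose theta by least squares over the degree d-1 polynomials."""
    theta, *_ = np.linalg.lstsq(phi(u, d), v, rcond=None)
    return theta

def rms_poly(theta, u, v):
    return np.sqrt(np.mean((phi(u, len(theta)) @ theta - v) ** 2))

# 60 synthetic points, split into 48 training and 12 test points.
rng = np.random.default_rng(0)
u = rng.uniform(-1, 1, 60)
v = np.sin(3 * u) + 0.2 * rng.normal(size=60)
u_tr, v_tr, u_te, v_te = u[:48], v[:48], u[48:], v[48:]

train_errs, test_errs = [], []
for d in range(1, 15):                    # polynomial degrees 0 through 13
    theta = fit_poly(u_tr, v_tr, d)
    train_errs.append(rms_poly(theta, u_tr, v_tr))
    test_errs.append(rms_poly(theta, u_te, v_te))

# Training error can only decrease as d grows (nested model classes);
# the test error has no such guarantee.
print(np.round(train_errs, 3))
print(np.round(test_errs, 3))
```

Running this reproduces the qualitative picture described next: monotonically shrinking training error, and a test error that eventually turns around as the fit starts chasing noise.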
Now, for each degree d (here we've got degrees between 0 and 14), what we're going to do is we're going to train a predictor on the 48 data points. And then we'll compute two numbers: its RMS error on the training set and its RMS error on the test set. So for degree 2, that's this number and this number. And then we do it again, say degree 4, and we end up with this number and this number. We expect that the training error is going to be less than the test error. And it is, at every different degree that we see here. But it doesn't have to be that way. These two plots can certainly cross. Um, it usually happens that we do better in training than we do in test, but not always. Now, what happens on the training set as you increase the degree is that the training error has to decrease. Increasing the class of functions, increasing the set of functions over which we're optimizing, means we're going to get better fits. And so you can see that every time we increase the degree, the training error decreases. The test error certainly doesn't have to, and it doesn't. Now, if you just looked at the training error, you might conclude we should use a degree as high as possible. So we should use d is equal to, uh, 15 here. And, uh, however, if you look at the test error, that suggests quite a different story. That suggests that we should use degree 5. What's happening out here to the right is we're seeing overfitting. We're seeing predictors that are doing really well on the training data, but much less well on the test data. Let's look at our plots again. So here we can see it. You can just about see that really this data is kind of noisy and there's points scattered everywhere. But here, we've seen these little wiggles showing up. Let's look at our polynomial fit again to see if we can see evidence of overfitting. So there's two things that show up on this plot that are interesting.
One is that there seems to be quite a lot of noise in the data. What do I mean by that? If we look at just a particular region, just look at this region. If I look at x's within this region here, then we can see that vertically, the value of y still tends to have some variability. It's not a smooth curve like the predictor would have us believe, but instead, the points jiggle up and down a lot. And that suggests that the variation in the vertical positions of those points comes about due to noise. Now what's happening with regard to our predictor? Our predictor is doing something interesting. Our predictor in this degree 5 plot is just sort of averaging out the noise. It's doing something rather gentle. It's just moving gently through the center of the cloud of points. But this predictor over here at degree 13, we can see that it's starting to wiggle. And the reason it's starting to wiggle is that it's starting to believe in the noise. It's starting to try and fit the noise. And this is what we mean by overfitting. The learning algorithm is fitting a predictor to features of the training data which are due to noise or variation, which isn't present in the test set. And that's what we see: as we move down here, we're getting a better and better fit as we fit more and more closely the features in the training data. But the reality is that with test data, we're gonna move up here and we're gonna find that those features don't actually exist in the test data. And they are going to give us greater error in the test data than we are led to believe by the performance on the training data. Now we can do several slightly more sophisticated things than simply split the data into a test set and a training set. One extension is to split the data more ways. So we split the data into k different subsets. We'll call them k folds of the data. And this is called k-fold cross-validation.
And so we have k different buckets of data, k different subsets of the data. And what we're gonna do is we're gonna go through the sets of data. So let's say suppose k is 5, and then we'll have five different subsets of the data. There's one, there's two, there's three, there's four, and there's five. The first time we fit, we will fit using those four and we'll test using that one. The second time we fit, we will fit using those four and test using that one. The third time, we'll fit using those four and test using that one, and so on. So each time we do a run, we will train using four out of our five buckets of data and we're going to test using the other bucket of the data. As a result, rather than having just one test error and one training error, we have five different training errors and five different corresponding test errors. These five different test errors we can look at. We can work out the mean test error, the standard deviation of the test errors, and just get a sense of their variation. If they are all small, then we can feel rather confident that actually this predictor does well. Conversely, if there is a whole lot of variability, then we start to feel a little less confident. And of course, we haven't really got one predictor. We're going to have, if k is 5, five different predictors. And so we can look at the different Thetas, the different predictor parameters. If they're all very similar, that's another reason to feel confident that our methods are giving us a certain amount of uniformity. We're getting the same predictor for each one of our different subsets of the data. On the other hand, if the predictor parameters vary wildly, then we're less confident that we've got a sensible choice of predictor coming out of our method. Here's an example. This is just randomly generated data. We can see here on the left, we have fitted a straight line through the data.
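The k-fold procedure just described might be sketched like this; the helper names and the straight-line fit are stand-ins (numpy assumed), not the lecture's code:

```python
import numpy as np

def kfold_indices(n, k, seed=0):
    """Randomly partition the indices 0..n-1 into k folds."""
    rng = np.random.default_rng(seed)
    return np.array_split(rng.permutation(n), k)

def cross_validate(X, y, fit, rms_error, k=5):
    """Train k times; each run holds out one fold as the test set."""
    folds = kfold_indices(len(y), k)
    train_errs, test_errs = [], []
    for i in range(k):
        test_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        model = fit(X[train_idx], y[train_idx])
        train_errs.append(rms_error(model, X[train_idx], y[train_idx]))
        test_errs.append(rms_error(model, X[test_idx], y[test_idx]))
    return np.array(train_errs), np.array(test_errs)

# Usage with a straight-line least squares fit on toy data:
A = lambda X: np.column_stack([np.ones(len(X)), X])
fit = lambda X, y: np.linalg.lstsq(A(X), y, rcond=None)[0]
err = lambda th, X, y: np.sqrt(np.mean((A(X) @ th - y) ** 2))
X = np.linspace(0, 1, 50)
y = 1 + 2 * X + 0.01 * np.random.default_rng(0).normal(size=50)
tr, te = cross_validate(X, y, fit, err, k=5)
print(te.mean(), te.std())   # mean and spread of the five test errors
```

The five test errors (and the five fitted parameter vectors) are exactly the quantities whose variation the lecture suggests inspecting.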
So this is determined by two parameters: Theta 1 plus Theta 2 times x is the predictor. And then we split the data five ways and do five-fold cross-validation. And that gives us five different results, each of which has a training loss and a test loss. And we can see that typically the test loss is not that different from the training loss. That's a good sign right there. And, ah, there's some variability in the test loss. The smallest test loss we see here is 0.0027, the largest test loss is 0.0071. Either way, it's still very small. The Theta parameters, we will also see some variability there, in particular in Theta 1, but it's rather a small number again. It goes from 0.003 up to minus 0.012. And Theta 2 is about 1. You can see that, of course, when we plot the different predictors, there are really five different plots here on this curve, five lines on top of each other. But they're all so close to each other that we can't really tell the difference. Of course, when you're in high dimensions, you can't necessarily plot your predictors, but you can look at the different Theta parameters, and you can look at the training loss and the test loss. We might want to be even more confident than simply having five different results. So here's how you might gain a little bit more confidence. You take the data and split it into a training and test split, say 80:20, but completely randomly. You train the predictor using the training data and then evaluate the predictor on the test data. And then you repeat that a thousand times. And then, you'll have a thousand different test errors.
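The repeated-random-split idea can be sketched directly; the data here is made up (numpy assumed), so the numbers will not match the lecture's:

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.uniform(0, 1, 50)
v = 1 + 2 * u + 0.05 * rng.normal(size=50)

def fit_and_test(u, v, rng):
    """One random 80-20 split: fit a line on 80%, return RMS error on the 20%."""
    perm = rng.permutation(len(v))
    te, tr = perm[:10], perm[10:]
    A = np.column_stack([np.ones(len(tr)), u[tr]])
    theta, *_ = np.linalg.lstsq(A, v[tr], rcond=None)
    pred = theta[0] + theta[1] * u[te]
    return np.sqrt(np.mean((pred - v[te]) ** 2))

# Repeat a thousand times to build up a histogram of test errors.
test_errs = np.array([fit_and_test(u, v, rng) for _ in range(1000)])
print(test_errs.mean(), test_errs.std())  # center and spread of the histogram
```

The mean of this collection estimates the error we would expect on genuinely new data, and the spread shows how much that estimate varies with the split.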
You can plot the histogram of those test errors and see how much variation there is. And we can see right here that the mean test error in this example is about 0.05. And if we look back to our previous data, that's not inconsistent: there we saw errors between 0.027 and 0.071, and there's a couple here around 0.05. And so really we can be confident that when we try this on new test data, we would expect to see something around 0.05, at least if our entire dataset is representative of unseen data. And we can also see that there's gonna be some variation: even with a dataset the size that we have, sometimes we're gonna see some significant test errors of 0.015, 0.02. Okay, one last topic in this section, and that is what do you do once you've chosen a predictor? Well, one thing you have to be careful of is revisiting this test set too many times. Even if you kept one test set in escrow, you trained based on the training set, and then you evaluated your predictor on the test set and you decided that the predictor wasn't very good. But suddenly, you're taking information from that test set and using it in your training procedure. And when you do it again and again and again, information is leaking from the test set into your training procedure. You're learning based in part on the test set. And so it's no longer really an honest simulation of how well the model would do on data you've never seen. Of course, there's a trick to avoid this. And the trick to avoid this is to split the original data instead of into two datasets, into three datasets: the training set, with which we fit multiple candidate models; a validation dataset, on which we evaluate the performance of our models; and then a test dataset which is pristine and untouched. We keep it to one side and we never look at it, or we look at it once. That tells us how well we've done.
But then we don't go back and change our predictor once we've looked at it. Um, this of course is a little bit more honest. Um, uh, and some people would say that it's taking it to extremes. Um, and this really depends on how much leakage of information there is from your test set into your training set, from your validation set into your training set. Some practitioners do this, others do not. Also, the names of test and validation are not really well settled. Some people reverse the terminology and refer to the pristine set as the validation set and the test set as the thing that's used to evaluate the performance of models. In this course, we won't go to this extreme; we'll simply use one test set and one validation set. And we'll use out-of-sample validation and five-fold cross-validation. One more thing you can do, once you are satisfied: at the end of the day, you've done training, validation, and test, and you've got a predictor that you're pretty happy with. Now, you could just stop there and say that's our predictor. We're going to use it and we're happy with it. Another thing you can do is you can say, well, what we've already validated is not the predictor, but the procedure, the learning algorithm, which we're using to develop the predictor. And so why not just take that learning algorithm, which we're happy with, and now apply it to the whole dataset, so that it can learn from all of the data we have, not just the piece that we used for training. And that is a very common practice, um, and, uh, there's nothing wrong with it. It works very well, uh, many people do this. Um, so for example, you might train, uh, k-nearest neighbor predictors for various values of k. Uh, validation suggests that k equal to 6 is a good choice. And now, the final predictor you supply is a k-nearest neighbors predictor with k equal to 6, but it uses all of the data. Okay, in this section, we've talked about evaluating predictors.
In the previous section, we talked about different predictors that we could have. Um, we've yet to talk about how you might make predictors and how you might learn. And that's, of course, coming in this class. But right now, you're in a position where if somebody gives you a predictor, you can decide whether it's good or not. But also, you're in a position where the stage for this course has been set. We know what we're trying to do now in this class. We're trying to develop predictors. We know, once we've got predictors, how we're gonna evaluate them, how we're going to tell good predictors from bad predictors. And what's next is to, uh, go ahead and make some.
Stanford EE104, Introduction to Machine Learning (2020), Lecture 14: Boolean classification

Hello and welcome to the section on Boolean classification. So we've already seen the core idea. We have a target variable which is categorical, which we embed as representatives in Euclidean space, and in the Boolean classification case, we will embed them in a one-dimensional Euclidean space, the reals, as plus or minus 1, so true might embed to plus 1 and false to minus 1. And then we use regularized empirical risk minimization to fit with various loss functions and regularizers. And then we use the Neyman-Pearson metric with a choice of kappa to express our preference for false negatives and false positives, and we can validate using that as our performance metric, and of course when kappa is 1 this reduces to the sum of the false-negative rate and the false-positive rate, which is the error rate. Now I want to look in more detail at possible loss functions that one uses for Boolean classification. Now, here y can only take values minus 1 or 1. And so really there are only two scalar loss functions. So normally, when we think about a loss function, we have a function of y and y hat, and the loss function is going to be small or 0 when y is equal to y hat, so along this line, and it's going to be increasing as we move away from that line. But in our case, y only takes two values, and so there's y equal to 1 and y equal to minus 1. So I've got a function which in theory is a function of the entire plane, the loss function l of y hat, y. But really y just takes two possible values, and so I really only evaluate the function along this line and along that line. And along that line the function is l of y hat, 1, and along this line, it's l of y hat, minus 1. Now we might call them something else.
We might call this equal to l_a of y hat, and this l_b of y hat, just to be explicit. Even though we write them with two arguments, they really are both functions on one-dimensional space, the space of y hat. And when we look at what this function l_a of y hat is, it tells us how much y hat irritates us when y is 1, and the other one, l_b, tells us how much y hat irritates us when y is minus 1, and we would look for this function here to be small at y hat is 1, and increasing away. Similarly this function should be small at y hat is minus 1 and increasing away, and that sounds reasonable based on our intuition for loss functions in two variables. But actually that's not quite what we want for Boolean classification, and the reason is that if y is 1, then we would like the y hat that un-embeds to give us 1 also. And that means that any y hat which is positive should give us a low loss. Similarly, when y is minus 1, we'd like a y hat that unembeds to minus 1, and so any y hat that's negative should give us a small loss. And so what we really want is we want this l_a function over here to be small when y hat is positive; we want it to be small here and growing in this direction. And we would like this function to be small here and growing in that direction. And so we've got to look at two functions that behave in the opposite way: l of y hat, 1, we'd like to be large when y hat is negative and small when y hat is positive; l of y hat, minus 1, we'd like to be small when y hat is negative and large when y hat is positive. Now the way we're gonna get this is we're going to define a penalty function p, and we'll have l of y hat, minus 1 be just p of y hat, and we'll have l of y hat, 1 be p of minus y hat. Now in order to get the property that false positives and false negatives irritate us by different amounts, we're gonna scale one of those. So in fact, we'll have l of y hat, minus 1 be p of y hat, and l of y hat, 1 be kappa times p of minus y hat.
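The penalty-function construction just described (l(yhat, -1) = p(yhat) and l(yhat, 1) = kappa p(-yhat)) can be written down directly. This is an illustrative sketch (numpy assumed), with the logistic penalty chosen just as an example:

```python
import numpy as np

def make_losses(p, kappa=1.0):
    """Build the two scalar losses from one penalty function p:
       l(yhat, -1) = p(yhat)            (penalize positive yhat when y = -1)
       l(yhat, +1) = kappa * p(-yhat)   (penalize negative yhat when y = +1)"""
    l_neg = lambda yhat: p(yhat)
    l_pos = lambda yhat: kappa * p(-yhat)
    return l_pos, l_neg

# Example penalty: logistic, p(z) = log(1 + exp(z)).
p = lambda z: np.log1p(np.exp(z))
l_pos, l_neg = make_losses(p, kappa=2.0)

# When y = +1, a negative yhat (a false negative) is penalized heavily...
print(l_pos(-3.0) > l_pos(3.0))   # True
# ...and, by construction, kappa times as much as the mirrored y = -1 case.
print(l_pos(-3.0) == 2.0 * l_neg(3.0))  # True
```

Any of the penalties discussed next (sigmoid, logistic, hinge) can be dropped into `make_losses` in place of `p`.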
Let's look at our favorite loss to date, the square loss function. Now if we had a loss function which was 1 whenever we chose the wrong answer and 0 whenever we chose the right answer, well, that would be this loss function right here, and the square loss function is this purple-blue curve here. That's for the case when y is minus 1. Here, I've scaled everything on the right-hand side by kappa. So here kappa is 2, and so I've got an ideal loss function which is 2 when y hat is negative and 0 when y hat is positive, and I've got the square loss function here. And it does satisfy some properties that we like. In particular, when y is minus 1, the loss is large when y hat is positive, over here. And similarly, when y is 1, the loss is large when y hat is negative, which is what it should be. And then we have the part where the square loss lets us down. The square loss also becomes large over here, where it should in fact be small, where we'd like the loss to be small, because any y hat which is negative should have a small loss when the true y is minus 1. Because any y hat that's negative is gonna unembed to a value of minus 1, which is exactly y. And the same over here when y is 1: the square loss grows again, and that's not the property that we want. On the plus side, of course, the resulting ERM problem is a least squares problem and so it's easy to solve. Now, there's another loss function that we might consider, and that's the Neyman-Pearson loss. And here it is. This is our ideal loss function. So this is the loss function which is 1 when y hat is going to unembed to the wrong value, and 0 when y hat is going to unembed to the right value. And we scale that by kappa when y is 1. And as a result, the Neyman-Pearson loss is the loss such that when we evaluate the empirical risk, well, what's gonna happen?
We're gonna score one point for every time y is minus 1 and y hat is 1, and we're gonna score kappa points every time y is 1 and y hat is minus 1. And so it will be equal to kappa times the false-negative rate plus the false-positive rate, which is exactly the Neyman-Pearson performance metric that we defined before. And so this is the ideal loss function for any particular kappa. Use this as a loss function and do empirical risk minimization with this loss function, and the empirical risk will be exactly our desired performance metric. That's a good thing, but the unfortunate thing is that it's actually very hard to minimize the empirical risk with the Neyman-Pearson loss. And the reason for that is twofold. One is that it has discontinuities at the origin, and those discontinuities make many algorithms, uh, for solving empirical risk minimization not work well. The other is that the derivative is 0 everywhere else. And that's one of the features that makes empirical risk minimization algorithms not work well. Now, if the derivative is 0, then if we've got a particular predictor and we change its parameters slightly, then what's going to happen is that the value of the empirical risk is probably not gonna change, because we're gonna move our resulting y hats, say from here to here, and in both cases we are gonna have exactly the same empirical risk. And so it's hard to have an algorithm which is based upon small changes to Theta and searching over small changes of Theta. It's hard to have such an algorithm work when one has a loss function which has zero derivative everywhere. Um, so instead we don't use the Neyman-Pearson loss. And in fact, we get better performance if we use different loss functions. And those different loss functions not only give us better performance, but they're actually also easier to minimize.
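As a performance metric (rather than a training loss), the Neyman-Pearson quantity is easy to compute. A sketch, with rates measured as fractions of the whole dataset so that kappa = 1 recovers the error rate, as in the lecture (the data here is made up):

```python
import numpy as np

def np_metric(y, yhat, kappa=1.0):
    """Neyman-Pearson metric: kappa * (false-negative rate) + (false-positive rate)."""
    y, yhat = np.asarray(y), np.asarray(yhat)
    fn = np.mean((y == 1) & (yhat == -1))   # y = +1 predicted as -1
    fp = np.mean((y == -1) & (yhat == 1))   # y = -1 predicted as +1
    return kappa * fn + fp

y    = np.array([ 1,  1, -1, -1, -1])
yhat = np.array([ 1, -1, -1,  1, -1])       # one false negative, one false positive
print(round(np_metric(y, yhat, kappa=1.0), 3))  # 0.4 (the error rate: 2 of 5 wrong)
print(round(np_metric(y, yhat, kappa=2.0), 3))  # 0.6 (the false negative counts double)
```

What is hard, as explained above, is not computing this quantity but minimizing it directly over predictor parameters.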
If the loss functions are convex, well, we've already discussed how convex optimization problems are simpler to solve than problems where the objective function isn't convex. Um, of course, we need the regularizer to be convex too. Well, then there's a very efficient solution. Uh, but even if the, uh, loss function isn't convex, it's still very useful in practice and very commonly used. So here's a loss. This is a function called the sigmoid function. This is 1 over 1 plus e to the minus y hat. That's, uh, this function right here. And this function is of course the flip of it. So we've replaced y hat by minus y hat and scaled it by kappa. And you can see that this is a nice smooth approximation of the Neyman-Pearson loss. We've smoothed it out, and that's dealt with both of our problems. It's removed the discontinuity, and it's removed the zero-derivative problem. So this is a nice differentiable approximation of the Neyman-Pearson loss. It's not convex, but nonetheless, it can work well. But here is one of the most commonly used loss functions for Boolean classification. This is the logistic loss function. This is the log of 1 plus e to the y hat for the case when y is minus 1. And in the case when y is 1, we flip it and we scale by kappa. And you can see that, well, it's not a perfect approximation to Neyman-Pearson, but it does grow in the right direction. It's small on the left-hand side, and it grows as we move to the right. And, uh, the thing it doesn't do is it doesn't flatten out. All right? Instead, it keeps on growing. Whereas there's no reason in terms of a performance metric to penalize a y hat which is very large over one that's not very large. As long as it's greater than 0, it would unembed to 1 no matter what. Um, but this is convex as a result of it not, um, flattening out, and so it's both differentiable and convex. Um, and this is very commonly used and works very well in practice.
The other very commonly used loss function for Boolean classification is the hinge loss. And this is the positive part of 1 plus y hat when y is minus 1, and again the flip and scale when y is 1. And so it's a function which is 0 when y hat is less than minus 1, and then increases linearly. It is convex. It's differentiable almost everywhere, except at this one point right here, um, and it is very, very commonly used as well. This is the other most common choice for Boolean classification. And we might make up our own loss function. So here's one. Here's one we made up, which we've called the Hubristic loss, which is a cross between the Huber and the logistic. So it behaves, uh, like this: it is constant at 0 when y hat is less than minus 1, it grows like a quadratic here to, uh, approximate that, uh, change in value from 0 to 1, and then it continues growing linearly at the same rate in order to have a smooth derivative at the origin. Now, several of these classifiers are extremely popular. Um, if we're using the square loss and a square regularizer, that will be called the least squares classifier. If we use the logistic loss and any regularizer, then that's called logistic regression. And so that would be called, for example, logistic regression with an L_1 regularizer. Um, if we use the hinge loss with the square regularizer, that has the rather bizarre name of the support vector machine. And what that means is it's regularized empirical risk minimization with that particular loss function and that particular regularizer. All right. Here's an example. Uh, on the left here we have a classifier that's generated using the logistic loss. There's no test set here, and there's no regularization here. We've got a lot of data points and there's no real benefit in either of those. Um, and this is just, of course, a toy example. And you can see the logistic loss, um, gives us, uh, an error of 16% on this particular data set.
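The penalties discussed above can be collected in one place, each given by its penalty p(z), so that l(yhat, -1) = p(yhat) and l(yhat, +1) = kappa p(-yhat). A sketch (numpy assumed; the value of the ideal penalty exactly at 0 is a boundary convention):

```python
import numpy as np

kappa = 2.0  # how much more a false negative irritates us than a false positive

penalties = {
    "neyman-pearson": lambda z: np.where(np.asarray(z) >= 0, 1.0, 0.0),  # ideal 0-1 penalty
    "sigmoid":        lambda z: 1.0 / (1.0 + np.exp(-np.asarray(z))),    # smooth, not convex
    "logistic":       lambda z: np.log1p(np.exp(np.asarray(z))),         # smooth and convex
    "hinge":          lambda z: np.maximum(0.0, 1.0 + np.asarray(z)),    # convex, kink at -1
}

yhat = np.linspace(-3, 3, 7)
for name, p in penalties.items():
    loss_when_neg = p(yhat)            # l(yhat, y = -1)
    loss_when_pos = kappa * p(-yhat)   # l(yhat, y = +1)
    print(name, np.round(loss_when_neg, 2))
```

Plotting `loss_when_neg` and `loss_when_pos` against `yhat` reproduces the curves described in this section: all four grow in the right direction, the logistic and hinge are convex, and only the ideal penalty flattens out completely.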
On the right here we see exactly the same data set, only the classifier has been computed with the squared loss, and we can see very similar performance. In fact, the only difference is that this has a slightly smaller error rate, 15% as opposed to 16%. But that's in no way a general conclusion, just a particular feature of this data set. In general, one expects quite similar performance between the different losses that one might use for Boolean classification. Sometimes the logistic loss or the hinge loss do better than squared loss, but as usual, the only real way to determine that is through validation. This is the support vector machine: the hinge loss combined with the square regularizer. And it has a peculiar feature that is worth observing. It only happens in this somewhat artificial case, so it never really happens in practice, but it is worth at least discussing here. And that is: suppose you have a data set where the two classes are perfectly separable. Here we have a data set where all of the blue points are up here and all of the red points are down here, and so our classifier, our predictor, is doing perfectly; it's getting zero training risk. But it has an additional feature, and that is this line and this line. Those are the lines in x space with theta transpose x equal to plus or minus 1. So theta transpose x is 1, maybe here, and theta transpose x is minus 1, maybe here, or they may be the other way round, depending on which points are 1 and which points are minus 1. Now, why should that be the case? The reason is as follows. The line where theta transpose x is 1 is perpendicular to the direction theta points. So theta here is a vector and it points in this direction, in which case that's theta transpose x equals 1 right there. Now, what happens if I make theta larger?
Well, that means that the line where theta transpose x is 1 is going to move in a bit, and similarly the line where theta transpose x is minus 1 is also going to move in a bit towards the origin. Now, if I evaluate the predictor at my data points, the predictor at the data points gives me y hat, and y hat is theta transpose x. So all of these points have a theta transpose x value, a y hat value, which is greater than 1, and all of these points have a y hat value which is less than minus 1. If I look at this loss function here, all of the blue points, which have a y hat value greater than 1, are going to give me 0 in my loss; and if we look at this plot here for the loss for the red points, all of my red data points have a y hat value less than minus 1, and so they're also going to give me 0 loss. So as long as theta is sufficiently large and pointing vaguely in this direction, I will end up with exactly zero empirical risk. Now, what's happening is that here we've got the square regularizer, and that's a penalty on the size of theta. By minimizing the empirical risk plus lambda times the norm of theta, we're going to try to make theta small. Depending on the value of lambda, for some values of lambda we will find that the right thing to do is to make theta as small as we can, while still keeping the total loss 0. And that's exactly what's happened here: when we make theta the smallest we can and yet have a loss of 0, the line where theta transpose x is 1 has moved up until it comes up against this one data point right there, a blue data point, and the corresponding line where theta transpose x is minus 1 has moved up against these data points which are right here on the boundary.
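The scaling argument can be checked numerically. Here is a tiny sketch with toy separable data of my own (not the lecture's): writing the hinge loss in the combined form max(0, 1 - y * y hat), which reduces to the positive part of 1 + y hat when y is minus 1, scaling theta up until every point has |y hat| at least 1 drives the empirical risk to exactly zero, while the square regularizer keeps growing with the norm of theta:

```python
import numpy as np

def hinge_risk(theta, X, y):
    # Empirical hinge risk: max(0, 1 - y * theta^T x), averaged over the data.
    y_hat = X @ theta
    return np.mean(np.maximum(0.0, 1.0 - y * y_hat))

# Toy separable data: blue points (y = +1) above, red points (y = -1) below.
X = np.array([[0.0, 2.0], [1.0, 3.0], [0.0, -2.0], [-1.0, -3.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])

theta = np.array([0.0, 0.25])          # points vaguely in the right direction
risk_small = hinge_risk(theta, X, y)   # positive: margins |y_hat| are below 1
risk_big = hinge_risk(4.0 * theta, X, y)  # scaling theta up drives the risk to 0
```

With this data, `risk_small` is 0.375 and `risk_big` is exactly 0, so the regularizer alone decides how small theta can be while keeping zero loss.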
And so the theta vector is a normal vector to the boundary between the two classes, and its magnitude is equal to 1 over half the margin. So that's the margin half-width, and that distance is equal to 1 over the norm of theta. Again, this is a very peculiar special case that you never see in practice, because you don't have interesting problems where the classes are perfectly separable. Let's look now at a more practical example. This example is the Australian weather, where what we have is data collected at many different weather stations across Australia. At each of those weather stations, we've collected attributes of the weather, such as temperature, pressure, humidity, wind, and so on. This data was collected daily from 2007 through 2017 and consists of 142,000 or so records. The objective we have is that, given this data, we predict whether or not it will rain tomorrow. Included in this data is rainfall, and the target variable is whether or not it rained the next day. Now, if we look at all of these data records, it turns out there were quite a few missing entries in the data. So what we've done is gone through each of the records, and any record that has any field missing at all has been removed. Once one does that, one ends up with only 112,000 or so records, which is still a substantial amount of data. We will see later in the class how to fill in missing values in a data set, but for now we've just removed them. This data, of course, we didn't collect ourselves; it's from Kaggle, there's a link here, and you can go there, get the data yourself, and play with it. The fields in the data in each record: there's a bunch of numeric fields, listed here. Now, the way this data was collected is that it's daily data.
At each of the different sites, and there are 44 different sites, there's a record collected each day, and that measures attributes of the day's weather at that site. So for example the minimum temperature and the maximum temperature over that day, where the day starts at 9:00 AM and ends at 9:00 AM. Similarly, we have rainfall, which is the total rainfall over the day; the wind gust speed; the wind speed at 9:00 AM and the wind speed at 3:00 PM; the humidity at 9:00 AM and at 3:00 PM; the pressure at 9:00 AM and at 3:00 PM; and the temperature at 9:00 AM and at 3:00 PM. These are all numeric fields; they're all in different units, and we will embed them and standardize them. We also have categorical fields. We have the location; as I said, there are 44 possible locations. Each of the wind speeds has an associated wind direction, and those are 16 compass points, things like east-southeast. We also have a field which is whether or not it rained today at that location, which is either yes or no. And then there's one final field in the data, and that is the date on which the record was taken. So let's take a look at some of this data. Because there are over 100,000 records, we're only looking at a small 2% random sample of the data, just to get a feel for it, and just looking at a couple of the features. Here we have the minimum temperature plotted on this axis and the maximum temperature plotted on this axis, and we've plotted in blue those points for which there was rain the next day, and in red those points for which there was no rain the next day. You can see that there is some indication here that over here we're getting a much higher chance of rain the next day than over here, and so the data does indicate something, although by no means is there a clear separation. Over here we look at the 3:00 PM temperature and the 3:00 PM pressure.
We can see again that when both of those are small, there's a higher chance of rain tomorrow; when both of those are large, there's much less chance of rain tomorrow. The way we embedded the data: well, we have 12 numerical fields, and those are just embedded as they are, by the identity map. Each one of the wind speeds is associated with a wind direction, which we embed as one-hot. Whether there's rain today is simply a Boolean categorical, so we embed it as minus 1 or plus 1. It turns out that the date and the location fields actually did not improve the performance after validation, so we simply removed those. We standardize, and we add the constant feature. How does this work out? Well, there are 12 numerical fields; there are 3 times 16 one-hots, and so that's 48 plus 12, which is 60. Then there's rain today, so that's 61, and then there's the constant feature, and that gives us a 62-dimensional x. The target variable is y, which is simply a Boolean categorical, whether it rains tomorrow or not, and we simply embed it as minus 1 or plus 1. Now, here we're going to use the logistic loss, which we've seen before: when y is minus 1, the loss of y hat is log of 1 plus e to the y hat, and similarly, when y is 1, it's Kappa times log of 1 plus e to the minus y hat. We use a linear predictor and square regularization. We randomly split into 80/20 training and test sets. It turns out when you do this that the training error and the test error, that is, the training empirical risk and the test empirical risk, are extremely close, and as a result one only needs a very small amount of regularization. There's no point sweeping over regularization because there's no dip in the test loss to be gained, so we just fixed the regularization to a very small fixed amount, something like 10 to the minus 5 for lambda.
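To sanity-check the dimension count, here's a sketch of how the embedding might be assembled for one record. The helper names and the exact field ordering are my own illustration, not the course code:

```python
import numpy as np

def one_hot(value, categories):
    # One-hot embedding of a categorical field.
    x = np.zeros(len(categories))
    x[categories.index(value)] = 1.0
    return x

COMPASS = ["N", "NNE", "NE", "ENE", "E", "ESE", "SE", "SSE",
           "S", "SSW", "SW", "WSW", "W", "WNW", "NW", "NNW"]

def embed_record(numeric, wind_dirs, rain_today):
    # numeric: the 12 numerical fields (identity map; standardized separately)
    # wind_dirs: 3 wind-direction fields, each one-hot over 16 compass points
    # rain_today: Boolean categorical embedded as +/-1
    parts = [np.asarray(numeric, dtype=float)]
    parts += [one_hot(d, COMPASS) for d in wind_dirs]
    parts.append(np.array([1.0 if rain_today else -1.0]))
    parts.append(np.array([1.0]))  # constant feature
    return np.concatenate(parts)

x = embed_record(np.zeros(12), ["E", "ESE", "N"], rain_today=True)
# x.size is 12 + 3*16 + 1 + 1 = 62, matching the count in the lecture
```

Each one-hot sub-vector sums to 1, which is exactly the property used later when discussing why the 16 WindDir coefficients act like a constant term.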
So here we've plotted the operating characteristic, the ROC curve, and you can see that there's a red curve and a blue curve, and both of them are pretty much on top of each other. The minimum-probability-of-error point on the ROC curve is right about here; that's the minimum probability of error classifier. It achieves a false negative rate of about 0.08 and a false positive rate of about 0.08, so a total probability of error of about 16%, which is pretty good. It's worth comparing also with this point down here: that's a classifier which achieves an error rate of 22%, and the way it does that is by always predicting that it will not rain tomorrow. That's because the data itself contains about 22% days with rainfall, and the remaining 78% of days have no rain. So this is a very simple predictor, which is doing reasonably well, with an error rate of 22%. Of course, this is very trivial, but that's really the baseline which we have to improve upon. We're achieving 16%, which is a significant improvement over 22%. It would be nice to do better than that, but with the linear predictor that we're using, this seems to be the best achievable. Now, if we look at the minimum-error predictor and its parameter vector theta, we can plot here the components of theta, theta i against i; the plot shows the absolute value of theta i. What we see is some large values and some small values. These large values here don't matter, because there are 16 of them, and those 16 entries of x are the one-hot embedded values for WindDir at 9:00 A.M. Because those columns are one-hot embedded, they sum up to 1, and so this is just equivalent to a constant term in the predictor; that's just the way the training worked out.
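The 22% baseline comes straight from the class proportions: a constant predictor that always says "no rain" is wrong exactly on the rainy days. A one-line check with the lecture's numbers; note I am assuming the quoted 0.08 false negative and false positive figures are counted as fractions of all test examples (so they add to the total error), which is what makes the 16% figure consistent:

```python
# Quoted error fractions, interpreted as fractions of all test examples.
fn_frac, fp_frac = 0.08, 0.08
model_error = fn_frac + fp_frac    # total error of the trained classifier: 0.16

# Baseline: always predict "no rain"; it errs exactly on the rainy days.
p_rain = 0.22
baseline_error = p_rain            # 0.22
```

So the trained classifier's 16% error is a genuine improvement over the 22% baseline, which is the comparison being made on the ROC plot.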
We could simply subtract off this term and add it to the theta 1 term, and it would give us an equivalent predictor. So let's look at the entries that actually are significant. Some interesting ones are this and this: this is the pressure at 3:00 P.M. and this is the pressure at 9:00 A.M. So we've got a predictor that depends very strongly on the difference between the pressure at 9:00 A.M. and the pressure at 3:00 P.M. If the pressure is falling, so the 3:00 P.M. pressure is much less than the 9:00 A.M. pressure, then that is a strong predictor of rainfall. And of course, this is a very well known phenomenon in the weather: rapidly falling pressure indicates that a storm is coming. The other features here that are significant are: this is, of course, just the constant term, so it's not really a feature; this one right here is the WindGustSpeed; this one here is humidity at 3:00 P.M.; and these two are MinTemp and MaxTemp. Since those are the significant features, we can simply say: let's retrain our predictor using only those six features, the MinTemp, the MaxTemp, the WindGustSpeed, the humidity at 3:00 P.M., the pressure at 9:00 A.M., and the pressure at 3:00 P.M. If you do that, then you end up with a classifier that achieves a probability of error on the test set of about 16.5%, which is basically the same as the classifier we had with all of the features. So the rest of the features are irrelevant for predicting rainfall, at least with the current linear predictor. It's possible that one could do better, maybe by using a more complicated predictor such as a neural network, maybe by using some more sophisticated feature engineering, combining features. This is simply, of course, a first foray into designing a predictor to predict rainfall.
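The "keep only the features with large |theta i|" step above can be sketched generically. The coefficient values here are made up for illustration; only the feature names come from the lecture:

```python
import numpy as np

def select_features(theta, names, threshold):
    # Keep the features whose trained coefficient is large in magnitude;
    # one would then retrain the classifier on just these columns.
    keep = np.abs(theta) >= threshold
    return [n for n, k in zip(names, keep) if k]

# Hypothetical coefficients for a few of the named features:
names = ["MinTemp", "MaxTemp", "WindGustSpeed", "Humidity3pm",
         "Pressure9am", "Pressure3pm", "WindSpeed9am"]
theta = np.array([0.6, -0.7, 0.9, 1.1, 2.4, -2.5, 0.05])

kept = select_features(theta, names, threshold=0.5)
# With these made-up values, all features survive except WindSpeed9am
```

After the selection, retraining on just the surviving columns is what produces the six-feature classifier discussed above.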
Okay. In the next section we will talk about classification where there are more than two classes, multi-class classification, and we will see the appropriate form of losses to use in that case.
Stanford EE104: Introduction to Machine Learning (2020). Lecture 5: Features.
some photos of inside the house a description which is a piece of text a categorization of it as either a house or an apartment the size of a lot the number of bedrooms and so on the list goes on um our goal with all of these things is that we're given all of this data and we're trying to predict say the price how much did the house sell for should the house sell for and notice that each of these fields of the description of a house has a type so address well that might be text or we might be a little bit more uh precise about what an address is it might be a have a street number and a street name and a city and a county we might have gps coordinates for the location of the house the description is is text but even there we might be able to be a little bit more refined about that we might have description of various parts of the house we might have standard forms for the description of the house and so how we take this data and embed it in vector space is going to affect a lot our ability to make useful predictions using it so abstractly the uh the functions that we use to do this embedding are called the feature maps and we have two of them one of them takes u and embeds it in d-dimensional vector space and gives us x one of them the other one takes v and embeds that in m-dimensional vector space and gives us y and learning algorithms are applied to x y pairs which are just vectors and the functions are called phi and psi so phi takes a u and gives us an x and psi takes a v and gives us a y and these are the feature maps they transform data records into vectors usually feature maps work on the separate fields so we'll have a feature map that maps the photograph into a vector space we have a feeder feature map that maps the address into a vector space and so we've really got a different feature map for each of the different fields of a particular record and um and then we'll construct one feature map which takes uh the entire record and maps that into a vector 
space and the way it does it is by mapping each of the fields of the record into each of their component vector spaces and then joining up all the vectors into one long vector so phi subscript i is an embedding it's if it's uh it's a map which takes field i into a vector and the overall function phi is the feature map that combines all these embeddings and takes a record into a vector the the reason we do this is so that we can put all the different field types on an equal footing they're all vectors and some some of these embeddings are very very simple so if the raw data is actually just a real number such as the uh the lot size of the house then we might just embed it by using the identity map we'll just embed it according to x it's equal to u um uh sometimes uh uh a field may be boolean um and so for example uh uh it might be uh uh for a house it might be has a pool and the answer could be yes or no and uh in which case we would map that to a real number in a vector space by saying 1 if the answer is yes and minus 1 if the answer is no or we might map it according to 1 if yes and 0 if no uh color um we might map to uh three numbers uh the amount of red the amount of green uh the amount of blue and uh that's a very typical encoding for color which we use a lot for images but it's by no means the only one and uh there are quite a few different standard color encodings um there are also much more sophisticated encodings which we will have a chance to talk about later so one is this one has a text document and one wants to embed that as a vector then there's a map called the tfidf map that stands for term frequency inverse document frequency and we'll see exactly what that is and why it's useful for learning properties of text building predictors for text another one for text is called word to back we'll see that in just a second that maps individual words into vectors rather than mapping hold documents into vectors um another one is that instead of thinking about 
an image as a collection of pixels we use a particular feature map to map the image into vectors and these are pre-trained neural networks and we will see uh an example of this for shortly one of the things we want out of an embedding is that it should be faithful and what that means is that it should somehow preserve similarity so if you and utilde are two different records but somehow they're similar they correspond to two similar houses then when we embed them we get a phi of u and we get a phi of utilda and those two vectors should be close to each other in the vector space and conversely if phi of u is not near 5u tilde well then u and utilda should be dissimilar houses and so the the geometry and the distances within our vector space should somehow correspond to our intuitive notions of similarity for our underlying raw data and this particular notion of similarity well it depends on the application that we have in mind it depends on the field type that we're looking at for houses we have some idea of what that means they should be similar size they should be similar quality of construction they should be similar types of houses similar numbers of bedrooms we have some idea of what that means and for many other things it won't be so obvious as to what it means in any numerical sense but it might be obvious in an intuitive sense we know that for example two images that are both images of cats are similar to each other but they're distinct from an image of a dog and that's something that we're going to have trouble quantifying we'll see there are texts there are ways of um embedding in uh in such a way as to have a faithful representation of such things and we will get to those so there are lots of interesting examples one might think about where it's not so obvious how to construct a faithful embedding we can think about names of people professions companies countries languages zip codes cities songs movies and we're going to see several of these and examples 
and we'll also see some generic general techniques for constructing embeddings here's one suppose you have uh location data you have the position of something on the world um so you might describe that by two numbers the latitude and longitude that would give us a two-dimensional vector in r2 or you might say really this should be embedded in r3 we think about the the world as a globe and the globe is has a natural embedding in three-dimensional space and so we can embed our position in on the surface of the globe as a position on the sphere in our sv maybe that makes more sense if your data points are spread over the planet here's another example the day of the week um so we have monday tuesday wednesday thursday friday saturday sunday we could just enable label them one through seven and that would be a an embedding we should embed them uh by the day of the week into a one-dimensional vector space also on this slide we see a two-dimensional vector space this is interesting this is useful because for many types of data the similarity between days is that a day is similar to the day before and a day after and um if we were to embed monday as one and tuesday wednesday thursday friday saturday sunday is seven then one and seven would be quite far apart and so monday and sunday would be quite far apart if we embed them like this so here we've embedded them as seven points in our r2 well then we can see that the distance between monday and tuesday and the distance between monday and sunday those two distances are the same and indeed the distance between any two consecutive days is uh is the same and so this is an embedding that more faithfully captures that idea and we can do that for lots of different notions of time we can do that for um months of the year which you might embed as 12 points we can do that for hours of the day which we might embed around the circle that's 12 points around the circle now here's a more sophisticated example this is a word to beg and 
that's a mapping on words so you give the here the field type is a word a word in the english language and um words to vec for each word in the english language you map it will map it to a vector and that vector is a 300 dimensional vector um and so where to beg actually will work on words and short phrases it doesn't just include dictionary words but it also includes names of people and names of places and many other things and it's developed from a data set which comes from google news which contains 100 billion words and so this is not a hand constructed embedding or hand constructed feature map this is an automatically constructed embedding and we will say more about how that is done later in the class the principal idea here is that words that frequently appear near each other should get nearby vectors and um ah so we might look at that now of course it's hard to look at the specific vectors that correspond to where to vect because um they're 300 dimensional so we can't plot them but of course we can use them for all sorts of predictor construction however here we can we can just plot some of their components so here are two of their components x1 and x2 of a bunch of different words let's zoom in a bit here and so you can see even when you look at a two-dimensional projection of this 300 dimensional vector space you can see that words that sort of mean similar things are kind of close together uh let's look here we've got uh optimistic let's see if i can make that a bit smaller we've got optimistic proud enthusiastic happy satisfied eager interested amazed these are all kind of next to each other down here uh we've also got uh melancholy sorrow hostile bitter awkward guilty anguished hopeless vengeful lonely what kind of negative words sitting at the top there we've got some others we've got some uh irritated exasperated angry embarrassed frustrated scared all of these kind of similar words over there and so here are these are just a selection of words these 
are the emotion words of course there are uh three million words in the database um and this is just a two-dimensional projection but nonetheless one can see the what it does in terms of grouping there are also some interesting properties of word to vec some nice other types of faithfulness of the data that comes out for example if i take the vector for king and subtract the vector for man and then i add the vector for woman then the closest vector that i get to the result is the vector for queen similarly if i take rome and subtract italy and add france then the closest vector i get is paris and so these kinds of things are somewhat surprising properties of the embedding that are quite magical but illustrate the point that somehow the embedding is capturing the structure of the relationship between words um which is uh what's going to enable it to be useful when we actually use it for learning let's turn now to a sophisticated method of embedding image data and uh this is uh called vgg16 so imagenet is an open database of images it contains about 14 million labeled images in about a thousand different classes and they range over all sorts of different things they include many images of animals of vehicles of everyday objects shoes that's a very broad collection of images but they've all been labeled and so we know exactly what is in each of those images so vgg16 is an embedding which maps images um to a vector in rd and here d is 4096. 
So we get a 4096-long vector. The images that we start with are all cropped down to 224 by 224 pixels; they're color, so they have RGB components, three numbers per pixel, and that is some 150,000-odd numbers per image. So we're going from what we might think of as a vector in roughly 150,000 dimensions, and we're mapping it down to a vector in 4096 dimensions. VGG16 is a neural network, and it was originally developed to classify the images: it learned from the ImageNet database a way of mapping images to labels. But it's also been repurposed as a general method of constructing a feature map, a general method of embedding images in vector space. This particular neural network has 16 layers; its input is an image and its output is the vector in R^4096. Let's look at what it looks like. Here are six different images, each 224 by 224. Image 1 is simply the Ubuntu mug; images 2 and 3 are some guy; image 4 is Professor Stephen Boyd; image 5 is a car, which happens to be a Mini Traveller from the 1960s; and image 6 is a London bus. For those of you who haven't yet figured it out from my accent, I am English, and this particular car is actually exactly the same type of car my family drove about in when I was 10 years old, and this is exactly the kind of London bus that I would have ridden on back in those days, although the car that we had was blue, not green. So let's take these images and embed them under the VGG16 map; for each one of them we're going to get a 4096-long vector. What we've done here is looked at those images in pairs, and for each pair of images computed the distance, in terms of the Euclidean norm, between the two corresponding embedded vectors. So let's look at this embedding. The matrix here is a matrix of the pairwise distances between the embedded vectors of each of the images. For example, this number right here is the (2, 3) entry of the matrix,
and so that's the distance between image 2 and image 3, both of which are images of me, and so we expect them to be close if this is a faithful embedding. If we compare that with the distance between image 2 and image 5, suddenly we're looking at a number of 109 compared to 63, which is substantially different. We can see that the two images of me are rather close to each other, whereas image 2 is quite far from the image of the Mini; the same goes for the distance between an image of me and the image of the bus, which are also quite far apart. The same is true for the other image of me: image 3 to image 5 and image 3 to image 6 are also quite far apart. If we compare Stephen to the bus and the car, he's also quite distinct from a bus and a car. However, if we look at the distance between image 2 and image 4, both people, both head shots, we can see that those numbers are quite a lot smaller than the corresponding distances between people and vehicles. If we compare images 3 and 4, the other image of me and Stephen, they're also quite close. There's one more interesting pair: comparing the vehicles, that's this number right here, the distance between image 5 and image 6, and we can see that the two vehicles are considered much closer, at 86, than a vehicle and any person, which are all about a hundred. So this embedding is faithful. We can also look at the image of the mug; you can see that the image of the mug is quite far from all the other images in terms of Euclidean norm distance. So we can say this is a reasonably faithful embedding: it's surprisingly effective at figuring out that three faces are kind of similar to each other, two vehicles are kind of similar to each other, but faces are really different from vehicles, mugs are different from faces, and mugs are different from vehicles. Notice here
that we didn't give it data that it had been developed for: I picked these images from some other source, and none of these images are, as far as I know, on ImageNet. So here we're repurposing a trained set of features, and we're using effectively different images. That sort of idea, where one takes a model that has been developed using a particular data set and then uses it for a different learning task, is called transfer learning. Now, we usually assume that an embedding is standardized. That's not always the case, but it is very often the case, and certainly when we construct embeddings by hand we go out of our way to make sure they're standardized. What that means is that they're centered around zero and they have an RMS value around one. For a particular data set, we take all of the records u in the data set and embed each of them under phi, so we get a whole bunch of phi(u)'s, which is a bunch of vectors; that set of vectors should be centered around zero and should have an RMS value around one. That means, roughly, that the entries of phi(u) range between plus and minus 1. With standardized embeddings, if I've embedded each of the features phi_1 up to phi_r so that they all have this property, centered around 0 with RMS value around 1, that means they're all comparable: a large value of x1, which is phi_1 of u_1, and a large value of x2, which is phi_2 of u_2, will both be around 1 or 2, and a small value will both be around minus 1 or minus 2. It means we can compare them; we don't need to worry about what units the original data u_1 and u_2 was in, or what magnitude it naturally has. We've normalized. We might measure the distance between two embedded vectors using the RMS: we take the RMS of the vector phi(u) minus phi(u tilde), and that's a reasonable measure of how close the records u and u tilde are. That's again a
consequence of the faithfulness of the embedding and so the way we ensure that all of our data has this property that is centered at zero and has uh values that range between plus and minus one you do what's called standardization it's also called z scoring um so suppose we've got real numbers in the the field type and then we've got u1 through un which are real numbers uh then what we might do is we might define u bar to be the average of the uis and stood u to be the standard deviation of the use and then in order to construct uh a vector or in this case a scalar x which is an embedding of u which is normalized we do what's called the z-score transformation where we take u subtract it from subtract from it u bar and then divide by the standard deviation and that ensures that first of all that the average of the x's is zero and the standard deviation of the x's is one so these are very easy to interpret all of a sudden if i've got uh z scored features and i look at x when i see an x which is 1.3 well that means that the corresponding u is one point three standard deviations above the mean now another transformation we might do be uh before z-scoring is taking the log now this is a very standard rule of thumb if you've got a field u which is a positive number but it ranges over a very wide scale then you embed it as phi of u is equal to log of u sometimes you'll use log of one plus u if u is allowed to be zero and then your standardized so for example if u is the amount of traffic that a particular website receives well then you've got a bunch of different websites and each one has a different amount of traffic and there is a huge difference between the amount of traffic that the big websites like amazon and google receive and then rather smaller websites such as my own website receives and these are many many orders of magnitude different and so uh with this kind of thing it's much better to instead of trying to uh look at numbers over such a wide range of scales 
to take the logs and then we might conclude that well a thousand and eleven hundred are very similar 20 and 22 are also very similar um but 20 and 120 are not similar if we were to measure those in absolute terms we might say well 20 and 120 are a hundred apart and a thousand and eleven hundred are a hundred apart and so we should consider them both equally similar and 20 and 22 are only two apart so they're very similar but actually if we look at it in a log scale we'll find that 20 and 22 and a thousand and 1100 are both equally different or equally similar and 20 and 120 are much further apart and because we're taking logs what we end up measuring is the relative difference between the values rather than the absolute difference between the values another way to say it is that 22 is only 10 more than 20. 1100 is only 10 more than a thousand and here's an example for house price prediction we want to predict uh the house selling price v we've got a record which in this case just has two fields u1 and u2 u1 is the area in square feet u2 is the number of bedrooms we care about relative errors in price so we're much more we can say if two house prices differ by ten percent we will consider them as similar um and uh that allows us to consider uh a million dollars and one point one million dollars is just as similar as a hundred thousand dollars and a hundred and ten thousand dollars um and so we'll embed the price v as using psi of v as log v and then we'll standardize we also standardize the records u in particular we stand aside standardize the fields u1 and u2 and we do that by subtracting the corresponding means and dividing by the standard deviations and the reason we do this is easy to see if we consider having two different records so we suppose i have a record u and a record u-tilde and then i'd like to say well how close are you in utila well let's just write that down well then u minus u tilde the 2 norm squared that's going to be equal to u 1 minus u 2 1 
squared plus u 2 minus u tilde 2 squared and so the first term in this is the difference in the number of bedrooms of house you and house utilde and the second is the difference in the price of house you and house utilita and both of those quantities are squared and added up of course the trouble is is that even a very small change in price is going to swamp a change in number of bedrooms and yet a change in the number of bedrooms from three to four could be hugely significant much more so than a change in the price from 200 000 to 250 000. so standardizing puts these two different fields on equal footing within our embedded vector space so then we're going to predict why y is log v the log of the price from the standardized area in the standardized number of bedrooms we'll have a linear predictor which will be y hat is theta 1 plus theta 2 x 1 plus theta 2 times x 2 plus theta 3 times x 2. i'm sorry and so in terms of the original data we're going to uh so how does this work we start off with a a bunch of data which we use in order to determine theta 1 theta 2 and theta 3. and then we get given a new house that we've never seen before a new record and that gives us a u1 and a u2 a number of bedrooms and a square footage we take that number of bedrooms and number of square footage and we standardize them to get an x one and next two we then apply our predictor y hat is theta one plus theta two x one plus theta three x two to get a y hat which is a prediction of the log of the price and so our predicted price is constructed by inverting that embedding y is log v by to using v hat is the exponential of y so v hat is going to be x of theta 1 plus theta 2 times u 1 minus mu 1 on sigma 1 plus theta 3 times u 2 minus mu 2 and sigma 2. 
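The embed, fit, predict, and invert steps just described can be sketched in a few lines. This is a minimal illustration, not the lecture's actual example: the house data below is made up, and the fit is plain least squares.

```python
import numpy as np

# Hypothetical training data: each row of U is (area in sq ft, bedrooms);
# v holds the corresponding selling prices.
U = np.array([[1500.0, 3], [2100.0, 4], [900.0, 2], [3000.0, 5], [1800.0, 3]])
v = np.array([300_000.0, 450_000.0, 180_000.0, 700_000.0, 350_000.0])

# Embed: z-score each input field, and take the log of the target.
mu, sigma = U.mean(axis=0), U.std(axis=0)
X = (U - mu) / sigma
y = np.log(v)

# Fit the linear predictor y_hat = theta_1 + theta_2 x_1 + theta_3 x_2 by least squares.
A = np.column_stack([np.ones(len(X)), X])
theta, *_ = np.linalg.lstsq(A, y, rcond=None)

# Predict the price of a new house: standardize, predict the log-price, invert the log.
u_new = np.array([2000.0, 3])
x_new = (u_new - mu) / sigma
v_hat = np.exp(theta[0] + theta[1] * x_new[0] + theta[2] * x_new[1])
```

Because the target was embedded as log(v), exponentiating the prediction recovers a price, and errors in y-hat translate into relative (percentage) errors in v-hat.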
And this is very interpretable. For example, if theta_2 here is 0.7, what does that mean? It means that every time u_1 increases by one standard deviation, the predicted price of the house gets multiplied by exp(0.7), which is about 2; so every change in the area u_1 by one standard deviation doubles the predicted house price.

Let's look at some more complicated embeddings, where we take an individual field and embed it as a vector. We can embed a field into a vector x = phi(u), where x now lives in R^k; k is the dimension in which that particular field is embedded. It's useful to have a k which is more than one, even when the raw quantity u that you're embedding is just a real scalar. We've already seen that with the polynomial embedding, where phi(u) = (1, u, u^2, ..., u^d). That's useful because it allows us to use a linear predictor to construct a predictor which is a nonlinear function of u, even though the predictor is a linear function of x. And there are other things you can do; you can similarly use other nonlinear functions. You might embed u as (1, u_+, u_-). It's worth drawing what these functions are, so let me try to do that. This is u, and I'll plot the function u there, and I will also plot the function 0, which just sits right there. Then u_- is the minimum of these two functions, so there is u_-, and u_+ is the maximum of these two functions, so this here is u_+. So I've got two functions; we think of u_+ as the positive part of u and u_- as the negative part of u. Now, what does that mean if I'm going to construct a linear predictor with these two features? Let's write that down: y-hat = theta_1 + theta_2 u_+ + theta_3 u_-. The first term is a constant; the second is a linear function of u when u is greater than zero and a constant when u is negative; and the third is a linear function of u when u is negative and a constant when u is positive. What that means is that my function y-hat, as a function of u, will look something like this: it has a particular slope here, a particular slope here, and a particular intercept there, and those three quantities are determined by theta_1, theta_2, and theta_3. In particular, the intercept is theta_1, the slope of this part of the line is theta_2, and the slope of this part of the line is theta_3. So I have a piecewise linear function.

It's also worth being aware that when one talks about piecewise linear functions, often one means a function composed of linear segments, where the joins between the segments may be anywhere and there may be any number of segments. Here we're choosing predictors which are piecewise linear functions, but they're very specific piecewise linear functions: they have only two linear pieces, and the join has to be at the origin. That particular point, the origin in this case, is called a knot of the piecewise linear function. You might have more complicated piecewise linear functions, which you could construct using a different embedding. For example, if my embedding was (1, u_+, u_-, (u - 1)_+), then there'd be an additional knot out here at 1, and I'd be able to pick a different slope there; the slopes would be determined by these additional parameters, and I'd have four parameters, theta_1 through theta_4. It's worth noticing also that the slope of this last segment will not just be the coefficient in front of (u - 1)_+; it will in fact be the sum of the coefficient of (u - 1)_+ and the coefficient of u_+.

Now, a particular type of data field is called categorical, and what that means is that it takes only a finite number of possible values; in other words, the set script U is a finite set. We'll call the entries in that set alpha_1 up to alpha_K; those are called the category labels. Sometimes we'll use category labels 1 through K and refer to category i; sometimes we'll be explicit about alpha_1 to alpha_K and refer to category alpha_i, depending on what works best at that particular notational point. Lots of things are categorical variables. The most common is the Boolean, true or false; there are only two possible values. We might be trying to identify fruit, in which case we might have apple, orange, and banana. We might have a field which is the day of the week, in which case there are seven possible values, Monday through Sunday. We might have zip codes; that's roughly 40,000 possible values. You could just embed a zip code as a real number, but then we'd ask whether that's faithful, and we might conclude that there's no particular reason why two zip codes that are numerically close should be physically close; sometimes they are and sometimes they are not. Countries, of course, are not numerical at all, and there are maybe 180 or so different countries that we might list as a bunch of different categories. Languages: there are several thousand different languages, at least those spoken by a large number of people.

Now, there's a particular way of embedding categoricals which is very common and very useful, called the one-hot embedding. Let's refer to our categories as 1 through K, so that script U consists of the numbers 1, 2, ..., K; the value of the field is either 1 or 2 or up to K. The one-hot embedding says: if you see u = i, then embed that as e_i. Remember what e_i is: e_1 is the vector (1, 0, ..., 0), e_2 is the vector (0, 1, 0, ..., 0), e_3 is the vector (0, 0, 1, 0, ...), and so on; e_i is called the i-th canonical basis vector. In a K-dimensional vector space there are K different canonical basis vectors, so we can embed the categories 1 through K as the K different basis vectors in R^K. In particular, we might embed apple as (1, 0, 0), orange as (0, 1, 0), and banana as (0, 0, 1). We might embed true as (1, 0) and false as (0, 1); this is a different embedding of the Booleans into R^2, as opposed to the (minus 1, 1) embedding we saw earlier. We might embed languages with a bunch of different canonical basis vectors, all the way up to 185 of them. And if we standardize these features, that allows us to handle unbalanced data, because when we standardize them they won't remain at (1, 0, 0), (0, 1, 0), (0, 0, 1); they will get shifted by the mean and scaled by the standard deviation, and in particular the mean and standard deviation will be affected by the proportions of the different categories that show up in the data set. That's a nice way of handling unbalanced data sets, and we will see some of the problems that show up with unbalanced data sets later in the class.

There's also another way of doing one-hot embedding, called the reduced one-hot embedding. Here the idea is that instead of mapping the categories 1 through K into K-dimensional space, we map them into (K - 1)-dimensional space. The way we do this is we choose one of the categories, say the last one, category K, as the default or nominal value, and we embed it at the origin; all of the other categories, 1 through K - 1, we embed as the first K - 1 canonical basis vectors e_1 to e_{K-1}, as before. If we do that for the Booleans, then we would embed true as 1 and false as 0, because here K is two, there are only two categories, and so R^{K-1} is simply the real numbers; we're embedding Booleans as 1 and 0, and this is a very common embedding of the Booleans.

There's another specific type of categorical data, and that's categorical data that's ordered. Apple, pear, banana are very much not ordered quantities; there's no natural order to put the fruit in. But very often you do have orderings. For example, when you give something a rating on Amazon, you're going to give it one, two, three, four, or five stars; yes, there are only five possible categories, but those definitely have an order. Such a scale, where there's an order to the categories and it's used to indicate a preference, is called a Likert scale (L-I-K-E-R-T). For example, the original Likert scale was: strongly disagree, disagree, neutral, agree, or strongly agree. Likert was actually a person; it's not just a scale that measures how much you like something. Likert was an American psychologist who lived in Michigan, for most of the 20th century. The Likert scale can be embedded into the real numbers with values minus 2, minus 1, 0, 1, or 2, and it's a faithful embedding in the sense that it preserves the ordering of the different categories. It's also quite reasonable to treat the Likert scale as a categorical with a one-hot embedding into R^5; that way is perhaps a little less faithful, but both ways are very common. You can do the same thing with other categoricals as well. For example, the number of bedrooms in a house: you could treat it as a real number, or you could treat it as an ordinal with values between one and six, and both would be completely reasonable.

So I want to talk now about feature engineering. This is the next stage after embedding. In embedding, we start with raw data and put that data into a vector space so that we can use mathematics to develop machine learning algorithms. The next stage is feature engineering, and the basic idea is that we start with some features and then process or transform them to make new, engineered features; instead of having a predictor which depends on the original embedding, we have a predictor which depends on these new features. We can do this in lots of different ways, and we've seen some already. One example is the polynomial embedding of real raw data: we embedded it as 1, u, u^2, u^3, and so on. You could think of that either as an embedding or as feature engineering, where we've chosen features in a particular way in the hope of improving our predictor; in that case, what it does is allow our linear predictors to become polynomial predictors.

Fundamentally, the question you have to ask about any feature engineering is: does it improve the predictor? And there's an answer to this question that is worth repeating to yourself throughout this class, and that is that the way you tell whether one predictor is better than another is by validating the performance. If you want to know whether the new features you're producing by feature engineering are better than the original features, then develop a predictor using the original features, develop a predictor using the new features, and in both cases validate the performance against unseen data. If the performance with the new features is better, then your feature engineering is actually helpful; if it isn't, then you don't need that particular piece of feature engineering and can look at something else. The point is that there is no a priori answer to how one should embed a particular type of data for a particular type of problem; the answer comes about through validation, through measuring the performance against a data set and determining whether the performance is better with that sort of features. After the fact, sometimes you can come up with a post-hoc explanation for why certain features were better; you can say things like, obviously for the number of bedrooms one should use a one-hot embedding rather than an ordinal embedding into the reals. But there's no truth to such post-hoc explanations; those are just justifications for the outcome that happened to turn out that way. The only test, the only way of knowing whether one set of features is better than another, is to validate.

There are several different types of feature transforms. One is to modify individual features: you've got a feature x_i and you replace it with a transformed feature x_i-new; you might standardize, for example. Another is to take one feature and create multiple features from it; that's the powers example, where we replace x_i with x_i, x_i^2, x_i^3, and so on. Another is to take more than one feature and combine them to create new features; for example, you can take two features, say x_1 and x_2, and create a new feature which is the product x_1 x_2. Here's a transform that's very commonly used, the gamma transform: you take x and replace it with sign(x) multiplied by the absolute value of x to the power of some constant gamma, and that produces the curves shown in this plot right here; if gamma is a half, for example, it produces a curve that looks like that. That's a case where we might believe that a feature has diminishing importance as its magnitude grows, and so we might replace x by sign(x) |x|^(1/2). Again, that's a justification, and it doesn't necessarily mean that if a feature has diminishing importance then you should use the gamma transform; it's something reasonable to try, particularly if you're using a linear predictor, but the only way to know whether it's better is to validate.

Another thing you can do is clip; this is also called saturation or Winsorizing. There's a nice little formula for it: the new x is equal to x if x is between l and u, equal to u when x is greater than u, and equal to l when x is less than l. That saturates like this: values larger than (in this case) one are treated as if they're one, and values less than minus one are treated as if they're minus one. We might believe, for example, that our data contains anomalies, very large data values that are really an artifact of the way the data was measured, and we know that the physical quantity being measured can never exceed 1 or fall below minus 1; in that case Winsorizing would be a reasonable thing to do. Here are powers, which we've seen already: u, u^2, and u^3. And here's one we've seen already as well: u_+ and u_-, splitting x into positive and negative parts. There's an additional one here which is worth looking at: the saturation function sat(x), where sat(x) is the minimum of 1 and the maximum of x and minus 1.
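The scalar transforms just discussed are one-liners in code. This is a sketch; the function names are mine, not the lecture's.

```python
import numpy as np

def pos(u):
    """Positive part: u_+ = max(u, 0)."""
    return np.maximum(u, 0.0)

def neg(u):
    """Negative part: u_- = min(u, 0)."""
    return np.minimum(u, 0.0)

def gamma_transform(u, gamma):
    """Gamma transform: sign(u) * |u|^gamma."""
    return np.sign(u) * np.abs(u) ** gamma

def sat(u):
    """Winsorize to [-1, 1]: min(1, max(u, -1))."""
    return np.clip(u, -1.0, 1.0)

u = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(pos(u))   # positive parts: 0, 0, 0, 0.5, 2
print(neg(u))   # negative parts: -2, -0.5, 0, 0, 0
print(sat(u))   # clipped values: -1, -0.5, 0, 0.5, 1
```

Note that pos(u) + neg(u) recovers u exactly, which is why the (1, u_+, u_-) embedding can represent any two-piece linear function with a knot at the origin.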
We can plot that; let's try to do that. The maximum of x and minus 1: this is the minus 1 function and this is the x function, so max(x, minus 1) is that function right there. Then we take the minimum of that function and 1; the 1 function is right here, and the minimum of those two functions, let me draw it right here, is the sat function. It's the Winsorization, another way of expressing the saturation function we saw over here. Another thing one can do is create new features from multiple features, so that we can model interactions among features. For example, you might create all products x_i x_j, or you might create the maxima max(x_i, x_j). You can also create all the monomials of degree 3 or less: x_1, x_2, x_1^2, x_1 x_2, and so on, and that would give us an arbitrary polynomial of degree 3. Products can be thought of as a nice way of modeling interactions, and so can maxima. For example, suppose the x_i are Boolean, and we've embedded them to take values 0 or 1.
They might represent, say, patient symptoms. Then we create interaction features which are the products x_i x_j, and of course we only need to consider the terms with i less than j, because x_i x_j equals x_j x_i; so if we have d original features, we're going to add d(d - 1)/2 additional features. Then if we do linear regression, say when d is 3, we're going to get an expression which looks like theta_1 x_1 + theta_2 x_2 + theta_3 x_3 + theta_12 x_1 x_2 + theta_13 x_1 x_3 + theta_23 x_2 x_3; that would be the form of a linear predictor with those new features. Now, these coefficients are very nicely interpretable: theta_1 is the amount the prediction goes up when x_1 is one, theta_3 is the amount the prediction goes up when x_3 is one, and theta_13 is the amount the prediction goes up when both x_1 and x_3 are one, in addition to theta_1 and theta_3; you get this additional boost in your prediction. So this is saying that when two symptoms both occur, we believe our predicted value, of whatever it is we're predicting, should have an additional amount theta_13 added to it. Another way to say it is that if theta_13 is large, then when both symptoms are simultaneously present, we increase our estimate a lot.

Another thing one can do in feature engineering is quantization. We specify bin boundaries b_1 to b_k, and we partition the real numbers into buckets: the interval from b_1 to b_2, from b_2 to b_3, and so on. Then we replace x as follows: we embed it using e_1 through e_{k+1}, where the embedding returns e_1 if x is less than or equal to b_1, returns e_2 if x is in the next bin, between b_1 and b_2, and so on; x maps to e_i if x is in bin i. This is a one-hot embedding of the ordinal that results from figuring out which bin x is in. You might say, why would you ever do that? And the answer to that kind of question is always the same: this is the kind of thing you try, and then you see whether it works; the only way to know whether or not it's better is to try it and validate. Sometimes you may believe that the cases where x falls in different bins really are different situations, and so the predictor should treat them differently; introducing these kinds of features is one way to develop a predictor that treats those cases differently. That would depend on what type of predictor you're using, so it's an intuitive motivation rather than a rigorous one; ultimately, one validates to determine whether or not such a piece of feature engineering is a good idea.

Now, you can do these many times; you don't have to do embedding and then one round of feature engineering. You can compose them with each other. You start by embedding the original record u into a feature vector, which we'll call x^0, and then you transform x^0 using a feature engineering transform to get x^1; that might be standardization. Then you repeat: you may do a polynomial embedding, or binning, or Winsorization. You repeat this m times, and you've got a composition of different feature maps, which gives you the final embedding. This is called the feature engineering pipeline.

There's one more topic I want to mention in this section, and that's automatic feature generation. The features we've seen so far are generally constructed by hand, using experience, and that's hard to do. There are things that one knows are generally a good idea: standardization, logarithms, one-hot embeddings. There are things that one has experience of in certain domains; certain embeddings work well in certain domains, and products for symptoms, for example, is a very good thing to do. And then there is the rest of it, which is largely trial and error: one can try a bunch of different feature engineering tricks and hope that they improve the validation results. Because it's trial and error, it would be very nice to be able to develop feature mappings automatically, directly from the data, and this is certainly possible. We've already seen two examples: one is word2vec, where words were mapped to vectors; another is VGG16, where images were mapped to vectors. Those were learned, developed automatically from very large data sets, and they are very important methods that have made a huge difference to our ability to do effective machine learning. We will see those methods later in some detail, and we'll talk about some specific methods for constructing these things: PCA, which stands for principal component analysis, is one example, and neural networks are another.

So, to summarize: all features are mapped to vectors; that's embedding, the feature map. We subsequently process those vectors in our predictor. We may do extensive feature engineering to construct more and more complex ways of mapping features to vectors. Only validation can choose between different candidate feature maps. And we're going to see later how feature mappings can be derived from data, as opposed to by hand.
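The feature engineering pipeline described above, a composition of feature maps applied one after another, can be sketched as follows. The particular stages and the toy data here are my own choices for illustration, not a recipe from the lecture.

```python
import numpy as np

def standardize(X, mu, sigma):
    # z-score each column using precomputed means and standard deviations
    return (X - mu) / sigma

def winsorize(X, lo=-1.0, hi=1.0):
    # clip every entry into [lo, hi]
    return np.clip(X, lo, hi)

def add_interactions(X):
    # append all pairwise products x_i * x_j with i < j
    n, d = X.shape
    prods = [X[:, i] * X[:, j] for i in range(d) for j in range(i + 1, d)]
    return np.column_stack([X] + prods)

# Toy data: 3 records, 3 raw features.
X0 = np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]])
mu, sigma = X0.mean(axis=0), X0.std(axis=0)

# x^0 -> x^1 -> x^2 -> x^3: each stage consumes the previous stage's output.
X1 = standardize(X0, mu, sigma)
X2 = winsorize(X1)
X3 = add_interactions(X2)  # d = 3 gives 3 + 3*(3-1)/2 = 6 features
```

Each stage is itself a feature map, so the whole pipeline is just the composition of the three functions applied in order.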
Stanford_EE104_Introduction_to_Machine_Learning_Full_Course | Stanford_EE104_Introduction_to_Machine_Learning_2020_Lecture_13_erm_for_classifiers.txt

Welcome to the section on fitting classifiers by empirical risk minimization. This section is very much parallel to our earlier section on regression by empirical risk minimization. We have to discuss two things in order to do classification. The first is: how do you embed the categorical variable v? We have a set script V, a finite set of possible values, or classes, that our target variable v can take. We want to take the target variable v and embed it into a vector y in R_m, and we let y be Psi of v. Of course, Psi of v takes only K values, because v takes only K values. So we have K vectors, Psi_1 through Psi_K, each of which is the image under the map Psi of one of the classes v_1 to v_K. We call the vector Psi_i the representative of v_i, and the set of all representatives Psi_1 through Psi_K is called the constellation of representatives. Here are some examples. We might embed true to 1 and false to minus 1, or true to 1 and false to 0; those are mappings of a two-element set of categories into the real numbers. We might map the three-element set yes, maybe, no to the real numbers 1, 0, and minus 1. Another thing we might do is a one-hot embedding, where we have, say, three categories, apple, orange, and banana, and we map apple to e_1, orange to e_2, and banana to e_3, where e_1, e_2, and e_3 are the canonical unit vectors in R_3. We might also take three categories and map them directly to the reals: if we bought, say, three horses, Horse 3, Horse 1, and Horse 2, we might map them just to 3, 1, and 2.
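The one-hot embedding of the apple/orange/banana example above is easy to write down. This is a minimal sketch; the function name is mine.

```python
import numpy as np

classes = ["apple", "orange", "banana"]

def psi(v):
    """Map class v to its representative: the canonical unit vector e_i in R^3."""
    e = np.zeros(len(classes))
    e[classes.index(v)] = 1.0
    return e

print(psi("apple"))   # the representative e_1 = (1, 0, 0)
print(psi("banana"))  # the representative e_3 = (0, 0, 1)
```

The three representatives psi("apple"), psi("orange"), psi("banana") form the constellation for this classifier.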
We might want to do really complicated things like Word2vec, which maps a large number of words to vectors in R_300. The one-hot embedding takes K classes and maps them into R_K, so Psi of v_i is just e_i. Applied to the Booleans, the first representative maps to (1, 0) and the second representative maps to (0, 1). There's another version of the one-hot embedding called the reduced one-hot embedding, which maps one of the classes to 0 and the other K minus 1 classes to the unit vectors in R_{K minus 1}. So for the Booleans, where the one-hot embedding maps them to (1, 0) and (0, 1), the reduced one-hot embedding would map true to 1 and false to 0: a one-dimensional embedding rather than a two-dimensional embedding. We might also have yes, maybe, no, with the default being maybe; that's three categories which we would map to R_2 by the reduced one-hot embedding, with maybe as the default: maybe maps to the origin, yes might map to (1, 0), and no might map to (0, 1). Now, once we've embedded, we are going to have to do something we didn't have to do in regression, and that is unembed; in particular, we've got to unembed points which are not mapped to by our embedding map, and we will see how to do that. Let's look at how this works from the overall predictor point of view. We start off by embedding our raw input u into a feature vector x by the map Phi; the feature vector x is in R_d. The raw output in our data set is a v, which is a categorical; we map it to y = Psi of v, which lives in R_m. So we convert the n data pairs (u_i, v_i) into n pairs (x_i, y_i), and we use those to create a predictor g, which maps R_d to R_m.
And for any x, it will give us a y hat, which lives in R_m. Now, what are we going to do with y hat? What we hope is that if y hat is g of x, then y hat is somehow close to Psi of v, where v is one of the categories. Close here usually means in norm: the norm of y hat minus Psi of v is small for one of the v's. To get the prediction, we've got to un-embed: we've got to pick a v hat corresponding to the y hat. We define a map Psi dagger, which takes vectors in R_m and maps them back into the target set script V; that's the unembedding function. Once we've got a Psi dagger, we can use it to construct our final classifier. Our final classifier has to take a u and give us back a v hat, and the way it works is this: we take u, pump it through our embedding Phi to give us an x, pump that through g to give us a prediction y hat, and then pump that through the unembedding Psi dagger to give us a v hat. We call that map from u to v hat capital G; G is the composition of Psi dagger with g with Phi. In other words: we embed, we predict, we unembed. The usual un-embedding that people choose is the nearest-neighbor un-embedding. What it does is this: when you've got a y hat, pick out, among all the categories in script V, the one such that Psi of v is closest to y hat; that is, minimize over all v in script V the norm of y hat minus Psi of v, and the minimizer is Psi dagger of y hat. If it so happens that two classes have embedded points equally close to y hat, then you can break ties any way you want to. Another way to say it: we're choosing the target category associated with the nearest representative to y hat. Here's the simplest possible example.
If we've embedded the Booleans true and false to 1 and minus 1, those are our representatives Psi_1 and Psi_2. Then we would unembed by mapping any positive number y hat to true, and any negative number y hat to false: with representatives minus 1 and 1, if y hat is positive the closest representative is 1, and if y hat is negative the closest representative is minus 1. Now suppose we've got one-hot; let's look at the R_2 case. Here we have y_1 and y_2, the two components of y, and the points (1, 0) and (0, 1); these are Psi_1 and Psi_2, the embeddings of our two categories in the Boolean case, so one is Psi of true and the other is Psi of false. Then we develop a predictor, and that predictor gives us back a y hat, which can be anywhere in the two-dimensional plane. And then what do we do? If y hat is on one side of the plane, the corresponding v hat will be the category for Psi_1; if it's on the other side, the corresponding v hat will be the category for Psi_2. To be precise, the outputs are not Psi_1 and Psi_2 but v_1 and v_2, the corresponding targets true and false. And here it's clear what you do: you look at the two components of y hat, y hat_1 and y hat_2. If y hat_1 is larger, you're on the Psi_1 side; if y hat_2 is larger, you're on the other side of the boundary which separates the region that belongs to Psi_1 from the region that belongs to Psi_2. Let's put it another way: if y hat_1 is greater than y hat_2, then the arg min over v of the norm of Psi of v minus y hat is v_1. Otherwise, it's v_2.
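A minimal sketch of nearest-neighbor unembedding covering both cases above (the plus/minus-1 embedding and the one-hot embedding in R_2); the function name and example points are illustrative, not from the course.

```python
# Nearest-neighbor un-embedding: return the category whose representative
# is closest to y_hat. Ties are broken by dictionary order (any rule is fine).
import math

def psi_dagger(y_hat, representatives):
    """representatives: dict mapping category -> embedded point (list)."""
    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
    return min(representatives, key=lambda v: dist(y_hat, representatives[v]))

# Booleans embedded as +1 / -1 (one-dimensional case):
reps_1d = {True: [1.0], False: [-1.0]}
print(psi_dagger([0.3], reps_1d))    # True  -- any positive y_hat
print(psi_dagger([-2.0], reps_1d))   # False -- any negative y_hat

# Booleans one-hot embedded in R^2:
reps_2d = {True: [1.0, 0.0], False: [0.0, 1.0]}
print(psi_dagger([0.7, 0.2], reps_2d))  # True, since y_hat_1 > y_hat_2
```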
This is true more generally. If we want to know what Psi dagger is in general when we've used the one-hot embedding, it turns out to be exactly what we'd guess: we look at the largest component of the vector y hat and return the corresponding target. So if y hat_3 is the largest component of y hat, the unembedding Psi dagger returns v_3. The reason we can work out algebraically: the two-norm squared of y hat minus e_i is the norm of y hat squared, plus 1, minus 2 y hat transpose e_i, and y hat transpose e_i is just y hat_i, the ith component of y hat. So picking i to minimize that quantity is precisely the same as picking i to maximize y hat_i. Now suppose we've used the reduced one-hot embedding to embed yes, maybe, no as (1, 0), (0, 0), and (0, 1). Then we unembed in exactly the same way: for any point y hat in the R_2 plane, we pick the closest representative, and that tells us which target we're predicting. If y hat is in the region nearest (1, 0), we return yes; yes is Psi dagger of any point in that region. No is Psi dagger of any point in the region nearest (0, 1), and maybe is Psi dagger of any point in the remaining region. And again, you can choose any value you like on the boundaries. More generally, we might have an embedding of categories into R_m, which gives us a whole bunch of different representatives, here shown as blue dots. For any given point y hat in R_m, we want to find the closest representative, and the way we do that is by the Voronoi diagram that we saw earlier.
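The identity just derived, the two-norm squared of y hat minus e_i equals the norm of y hat squared plus 1 minus 2 y hat_i, can be checked numerically; this sketch uses a made-up y hat and verifies that minimizing distance to the unit vectors agrees with taking the largest component.

```python
# Check: with the one-hot embedding, argmin_i ||y_hat - e_i||^2 = argmax_i y_hat_i,
# because ||y_hat - e_i||^2 = ||y_hat||^2 + 1 - 2*y_hat_i.

y_hat = [0.2, 1.3, 0.9, -0.4]   # illustrative prediction in R^4
K = len(y_hat)

def sq_dist_to_e(i):
    return sum((y_hat[j] - (1.0 if j == i else 0.0)) ** 2 for j in range(K))

norm_sq = sum(t * t for t in y_hat)
for i in range(K):
    # term-by-term identity from the lecture
    assert abs(sq_dist_to_e(i) - (norm_sq + 1 - 2 * y_hat[i])) < 1e-12

nearest = min(range(K), key=sq_dist_to_e)
largest = max(range(K), key=lambda i: y_hat[i])
print(nearest, largest)  # both are index 1
```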
In particular, one region is the set of points for which the closest blue dot is the one in the middle, and that means that if y hat is in that region, I return the category corresponding to that blue dot. As we saw when we looked at the nearest neighbor classifier, the regions corresponding to each representative are polyhedra. Now suppose we're using a parameterized predictor, so g is a function of a parameter Theta; g might be Theta transpose times x if it's a linear predictor, or something more complicated. Theta is the parameter we choose. Once we've got a predictor g_Theta, we compose it with the embedding Phi and the unembedding Psi dagger: the classifier that gives us v hat is Psi dagger of g_Theta of Phi of u. So what do we do? We choose Theta using ERM on the training data set, and we validate the predictor by looking at the performance metric, which might be the error rate on a test data set. Of course, the performance metric might be the Neyman-Pearson error measure as well. When we're fitting parameterized predictors for classification, we might use a tree-based predictor; that has a name, it's very commonly used, and it's called a classification tree. Theta encodes the tree: it tells us where to split at each node, the thresholds, and the leaf values, and each leaf has a corresponding value of y hat, which leads immediately to the corresponding target, because we can unembed the y hat to a v hat. Or we might use a neural network, in which case Theta gives us the biases, or offsets, and the weights in the different layers, and y hat is the output of the last layer. And again, we unembed.
We might use a linear predictor: Theta is a d by m matrix, and y hat is just Theta transpose times x. Now, the other major piece we need for classification is a loss function. It turns out that the loss functions we use for classification are slightly different from the typical loss functions for regression, and in the next section we'll see several loss functions for classifiers. For the moment, I want to consider the simplest one, for example the square loss. The way it works is just like in regression: we have a loss function that maps R_m cross R_m to R; it's a function of y hat and y, and it tells us how much the prediction y hat bothers us when the true observed value is y. And y is, of course, going to be one of the representatives; those are the only possible values for y, because the only possible values for v are v_1 through v_K, and when we embed those we get Psi_1 through Psi_K. As a result, we can think about loss functions slightly differently. Rather than thinking of a loss function as taking a y hat and a y, we can say: really, we have K different loss functions. The first tells us how much loss y hat causes us when the target is v_1; that's l of y hat and Psi_1. When the target is v_2 we get a different loss, and we'll think of that as a totally different loss function. We might even use the notation l_j of y hat, for j from 1 up to capital K. This loss function only depends on y hat, because the dependence on y is taken care of by j.
What l_j is, is how much we dislike predicting y hat in the case where the target is v_j, or correspondingly, when y is Psi_j. Typically you want the loss function l of y hat and Psi_j to be a nonnegative number that is small when y hat is close to Psi_j. Very commonly we might use the square loss, the two-norm squared of y hat minus Psi_j, and we'll see other loss functions that work much better for classification in the next section. So how do we picture the two loss functions when we're using the square loss in the Boolean case? They're translated versions of the quadratic function. On the right is l of y hat and y when y is Psi_2, which is just 1; on the left is l of y hat and y when y is Psi_1, which is just minus 1. The one on the right is a quadratic centered at 1, and the one on the left is a quadratic centered at minus 1. Now we're given a training data set, (x_i, y_i) pairs for i from 1 up to n, and we parameterize the predictor g_Theta. Empirical risk minimization simply says: minimize over Theta the average loss, where the loss is evaluated at y hat_i and y_i, and y hat_i is just our prediction evaluated at x_i. So ERM chooses Theta to minimize the average loss. Regularized ERM chooses Theta to minimize the empirical risk plus Lambda times the regularization function, where Lambda is a positive regularization hyper-parameter. In most cases we're going to need numerical optimization to find Theta; there won't be an explicit formula. But the least squares case is quite interesting. Let's consider what that is in the Boolean case. Let u be one-dimensional, just to make it easy for me to draw, and let v be either true or false, and we'll embed true to 1 and false to minus 1; these are Psi of true and Psi of false.
Each of our data points gives us a value of u and a corresponding value of v, which we embed to give us a 1 or a minus 1. So we might have a bunch of data points at plus 1 and a bunch at minus 1, and then we fit that with least squares. We use a linear predictor, y hat is Theta transpose x, and that fits a straight line. Now, because we're unembedding using Psi dagger — here, the unembedding map corresponding to embedding true and false as plus and minus 1 — it says: whenever y hat is positive, predict true, and whenever y hat is negative, predict false. If the plot of our predictor Theta transpose x crosses zero somewhere, that means whenever u is to the right of the crossing I predict true, and whenever u is to the left I predict false. Looking at our data, my simple sketch didn't seem to do too badly, although it's an astonishing idea that we fit a function taking only the values 1 and minus 1 with a straight line. This is called a least squares classifier. We're using a square loss, we might be using a square regularizer, and we just solve the regularized ERM problem; this is a case where we can solve it exactly using least squares. As an example in two dimensions, we can see it kind of works. Here u is in R_2, embedded in R_3 as (1, u_1, u_2) — remember, we've put in the constant feature — and we've embedded the targets directly as y is minus 1 or y is 1, our two representatives Psi_1 is minus 1 and Psi_2 is 1. We use a square loss and a square regularizer, and this is our predictor: the shading shows which points (u_1, u_2) map to 1 and which map to minus 1.
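The one-dimensional least squares classifier described above can be sketched in a few lines. This is a minimal illustration with made-up data: fit y hat = Theta_1 + Theta_2 u to labels embedded as plus/minus 1 by solving the 2-by-2 normal equations, then unembed by sign.

```python
# Least squares classifier in 1-D: labels embedded as +/-1, prediction by sign.
# Toy data (illustrative): negatives on the left, positives on the right.
us = [-3.0, -2.5, -2.0, -1.0, 1.0, 1.5, 2.0, 3.0]
ys = [-1.0, -1.0, -1.0, -1.0, 1.0, 1.0, 1.0, 1.0]  # embedded targets

n = len(us)
# Normal equations for (theta1, theta2) minimizing sum (y - theta1 - theta2*u)^2
su, suu = sum(us), sum(u * u for u in us)
sy, suy = sum(ys), sum(u * y for u, y in zip(us, ys))
det = n * suu - su * su
theta1 = (sy * suu - su * suy) / det
theta2 = (n * suy - su * sy) / det

def classify(u):
    y_hat = theta1 + theta2 * u          # real-valued prediction on the line
    return y_hat > 0                     # nearest representative: sign of y_hat

print([classify(u) for u in us])
# [False, False, False, False, True, True, True, True] -- all points correct
```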
Of course, what's really going on is that the boundary is where y hat is 0, and y hat is a linear function of u_1 and u_2. We know what y hat must look like: y hat equals Theta_1 plus Theta_2 u_1 plus Theta_3 u_2, and the boundary line is the set of (u_1, u_2) which result in y hat being 0. On one side y hat is positive, on the other y hat is negative, and our unembedding maps those to 1 and minus 1 to give us a classifier. Now suppose we have the Neyman-Pearson metric. Remember what it is: we consider a weighted sum of the different error rates, where E_j is the rate of mistaking v_j for some other class and Kappa_j is the weight. So Kappa_j is how much we care about getting the prediction wrong when the true target is v_j. What we do is scale the losses by Kappa. Suppose we have unweighted loss functions l tilde; we've already seen the square loss, so l tilde of y hat and Psi_j might be the square of y hat minus Psi_j, or the norm of y hat minus Psi_j squared. Then we construct our loss function from these unweighted losses by taking a weighted combination: we use the loss function l of y hat and Psi_j equal to Kappa_j times l tilde of y hat and Psi_j. Let's look at an example. Suppose Psi_1 is minus 1 and Psi_2 is 1, and we care about the Neyman-Pearson metric. Because this is Boolean, there are two types of errors we can make, false negatives and false positives, and our Neyman-Pearson metric is going to be Kappa times the false negative rate plus the false positive rate, where Kappa is some number greater than 0.
Notice that here we've only got one scalar Kappa, whereas on the previous slide the number of Kappas equaled the number of categories, so really this should be Kappa_1 times E_fn plus Kappa_2 times E_fp. Here Kappa is simply Kappa_1 divided by Kappa_2, if you like: I can set one of the Kappas to 1, and all that matters for which prediction I get is the ratio of the two. Now, in our loss function we use the square loss, but scaled: we use (y hat minus y) squared when y is minus 1, and Kappa times (y hat minus y) squared when y is 1. That gives a weight which is greater when the true y is 1, so we penalize false negatives more than false positives. Here's an example: square loss, sum of squares regularizer. On the left are the ROC curves, which correspond to minimizing the weighted sum Kappa E_fn plus E_fp, with the training losses in blue and the test losses in red. On the right-hand plot we see the minimum error classifier, given by Kappa equal to 1. If we pick Kappa equal to 0.4, we're very concerned not to mistake any red points for blue points, and we end up with a classifier whose boundary has moved in one direction. Conversely, if we pick Kappa equal to 4, we're very concerned not to mistake any blue points for red points, and the boundary moves the other way. By varying Kappa, we vary our preference for which type of errors we're willing to tolerate. So let's summarize. A classifier is a predictor whose raw output is categorical, with categories v_1 through v_K. When K is 2 it's called a Boolean classifier. When K is greater than 2 it's called a multi-class classifier.
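The Kappa-scaled square loss just described can be sketched directly; this is a minimal illustration (the function name is made up) showing that the same wrong prediction costs Kappa times more when the true class is plus 1.

```python
# Weighted square loss for the Boolean case (psi_1 = -1, psi_2 = +1).
# kappa > 1 penalizes false negatives (errors when the truth is +1) more.

def weighted_square_loss(y_hat, y, kappa):
    if y == 1.0:                      # true class is +1
        return kappa * (y_hat - y) ** 2
    return (y_hat - y) ** 2           # true class is -1

# The same-magnitude mistake hurts kappa times more when the truth is +1:
print(weighted_square_loss(-1.0, 1.0, kappa=4.0))  # 16.0
print(weighted_square_loss(1.0, -1.0, kappa=4.0))  # 4.0
```

Minimizing the empirical risk built from this loss pushes the decision boundary away from the class we weight more heavily, which is exactly the boundary shift seen as Kappa is varied above.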
We might have various error rates, summarized in a confusion matrix. When we're fitting a classifier to training data by ERM or regularized ERM, we embed the raw output v into R_m using Psi; the vectors Psi_1 through Psi_K are the embeddings of our targets v_1 through v_K. We then use ERM to build a predictor for y. Our predictor gives us a y hat, which we un-embed to get a class prediction v hat; we do this by nearest neighbor un-embedding. Next section we will talk about special loss functions for classifiers. |
Stanford_EE104_Introduction_to_Machine_Learning_Full_Course | Stanford_EE104_Introduction_to_Machine_Learning_2020_Lecture_19_principal_components_analysis.txt | Welcome to the lecture on Principal Component Analysis. Principal component analysis is a type of unsupervised learning where we use a particular data model, and in order to do that, we need the following idea: the distance to a subspace. Here we're going to have a set of d-dimensional vectors theta_1 to theta_r, and when we take all possible linear combinations of those vectors, we get a subspace. In other words, I'm looking at combinations of the form x equals a_1 theta_1 plus a_2 theta_2, all the way up to a_r theta_r, and I'm allowed to pick the a's any way I like. If I look at all such vectors x I can construct, that's a subspace: an r-dimensional subspace of R_d. We might write that as theta a, where theta is a matrix whose columns are the individual vectors theta_i, and a is the vector (a_1, a_2, up to a_r) of coefficients. The matrix theta times the vector a is the linear combination of the columns of theta with the coefficients a_i. So we have a d by r matrix theta — I labeled the slide incorrectly; that should be theta_r, not theta_d — and that is the matrix that defines our subspace. We can pick any point in the subspace by picking the vector a. Now suppose I've got another vector x, and I'd like to figure out how far it is from the subspace; let's draw a little picture for that. Here is my subspace, here is my vector x, and this distance between them is what I'd like to know. Then, well, what is that?
Well, every point in my subspace S has the form theta times a, so out of all such points I want to find the one that's closest to x, and that's this optimization problem: minimize over a the norm of x minus theta a. The norm of x minus theta a is the distance between x and a particular point theta a within the subspace. Now if I square the objective function — minimizing the distance and minimizing the distance squared are the same problem — this becomes a least-squares problem, and we know the solution to least-squares problems: the optimal a is theta dagger times x, where theta dagger is (theta transpose theta) inverse theta transpose. This works if theta has linearly independent columns, in which case theta transpose theta is invertible. So that tells us what a is, and once we know a, theta a is the closest point in the subspace to our given point x. We might call this point x-hat: the closest point in S to x, which we call the projection of x onto S. x-hat is theta times a, and substituting in for a — the slide has a misplaced transpose; it should read — x-hat equals theta (theta transpose theta) inverse theta transpose x. So if I want to know the distance between x and the subspace, that's the norm of x minus x-hat, and substituting x-hat in gives a rather unpleasant expression. But if you look at that unpleasant expression, it's really just — let's call this matrix W.
It's the norm of W times x, where W is the matrix identity minus theta (theta transpose theta) inverse theta transpose. Here's the picture I just drew, in the case when r is 1, when we've got a one-dimensional subspace: the subspace S is a line, and theta is a matrix with only one column, which you might as well think of as a vector. It all still makes sense: (theta transpose theta) inverse theta transpose applied to x gives us an a, which is a scalar, and multiplying by theta gives x-hat. In this case W will be a 2 by 2 matrix, because theta is a 2 by 1 vector. So what's our data model, now that we've got the idea of distance? The data model is that x should be near to a linear combination of the vectors theta_1 to theta_r in R_d; in other words, it's near to a subspace. The parameter that defines the subspace is the matrix theta, a d by r matrix. So we believe that there is a subspace, and all of the data points x are close to it. The quantity r, the dimension of the subspace, is also called the rank of the model, and the vectors theta_1 through theta_r are called the principal components or the archetypes. This is called principal component analysis, a PCA model, though some people also call it a low rank model. Our loss function, or implausibility, is simply the distance between x and S squared. It will be convenient to assume that theta has orthonormal columns; orthonormal columns means theta transpose theta is the identity. Any subspace I want to describe, I can describe by a set of vectors, and I can choose the vectors describing the subspace to be orthonormal, so there's no loss of generality in making this assumption.
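The projection and distance formulas above can be sketched numerically. This is a minimal illustration with a made-up theta and x; it uses the pseudo-inverse theta dagger and checks that the residual x minus x-hat is orthogonal to the subspace.

```python
# Distance to a subspace: a_opt = theta^dagger x, x_hat = theta a_opt,
# distance = ||x - x_hat||. Toy example with d = 3, r = 2.
import numpy as np

theta = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [1.0, 1.0]])        # linearly independent columns
x = np.array([1.0, 2.0, 0.0])

a_opt = np.linalg.pinv(theta) @ x      # theta^dagger x = (theta^T theta)^-1 theta^T x
x_hat = theta @ a_opt                  # projection of x onto S
dist = np.linalg.norm(x - x_hat)       # distance from x to S

# The residual is orthogonal to the subspace (to each column of theta):
print(theta.T @ (x - x_hat))           # approximately [0, 0]
print(dist)                            # sqrt(3) for this toy example
```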
If you have a Theta that doesn't have orthonormal columns, you can compute an orthonormal set of columns defining the same subspace by the QR factorization of the matrix Theta: take the QR factorization, discard Theta, and replace it with Q. The set of linear combinations of the columns of Q is exactly the same subspace as for the original Theta, but the columns of Q are orthonormal. So when Theta transpose Theta is the identity, the loss function is the distance between x and S squared, which has a nice matrix expression: it's the norm of Wx squared, where W is identity minus Theta (Theta transpose Theta) inverse Theta transpose, and because Theta transpose Theta is the identity, that's just identity minus Theta Theta transpose. Computing the norm of Wx squared: it equals the norm of x minus Theta Theta transpose x squared, which is the norm of x squared, plus the norm of Theta Theta transpose x squared, minus twice the cross term. We can simplify that. In particular, the norm of Theta times any vector v, squared, is v transpose Theta transpose Theta v, which is just the norm of v squared. That tells us the middle term equals the norm of Theta transpose x squared, because it's Theta times Theta transpose x, and as we've just seen, the norm of Theta v is the norm of v. The cross term is minus 2 times the norm of Theta transpose x squared, because it's x transpose Theta Theta transpose x. Those add up to give minus the norm of Theta transpose x squared, so we get the very convenient expression for the loss function: the norm of x squared minus the norm of Theta transpose x squared. Now the empirical risk, the average of the loss function on the dataset, is simply 1 over n times the sum over i from 1 to n of the distance from x_i to S squared.
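The identity just derived — for orthonormal Theta, the squared distance ||x minus Theta Theta transpose x||² equals ||x||² minus ||Theta transpose x||² — can be checked numerically; this sketch builds an orthonormal Theta via QR, as suggested above, on made-up random data.

```python
# Numerical check: with Theta^T Theta = I,
#   ||x - Theta Theta^T x||^2 == ||x||^2 - ||Theta^T x||^2.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 2))
theta, _ = np.linalg.qr(A)            # orthonormal columns via QR factorization
x = rng.standard_normal(5)

lhs = np.linalg.norm(x - theta @ (theta.T @ x)) ** 2
rhs = np.linalg.norm(x) ** 2 - np.linalg.norm(theta.T @ x) ** 2
print(lhs, rhs)                       # equal up to roundoff
```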
The PCA data model says we should choose orthonormal Theta_1 through Theta_r to minimize this empirical risk. So we get a bunch of data, pick the Theta that minimizes the empirical risk, and then we can use that model, for example, to do imputation. The case when r is 1 has a nice geometric interpretation: we have data points shown in green, we're choosing a one-dimensional subspace, which is just a line, and the loss function is the distance between each point and the line, the red distances. The sum of the squares of those distances is the empirical risk scaled by n. So we're trying to find the subspace that best fits the data, measuring quality of fit by the normal distance between a point and the subspace, in this case a point and a line. We can look at this in matrix notation. As before, we construct the data matrix from the x's, the same data matrix we used in regression: an n by d matrix, each row of which is a data element, so the ith row of the matrix X is little x_i transpose. The empirical PCA loss is then expressible in terms of the matrix by noticing that the sum of the norms of x_i squared is the Frobenius norm of X squared, and that if I want to compute Theta transpose x_i for every i, I can compute the matrix X times Theta, whose ith row is precisely Theta transpose x_i, transposed. Note there should be a factor of 1 over n in front. Now, in order to fit the PCA model, we start off with n data points x_1 through x_n, and we want to minimize the empirical risk.
We've got the additional constraint that the Theta minimizing the empirical risk should also satisfy Theta transpose Theta equals the identity; in other words, the columns of Theta should be orthonormal. Because the empirical risk is the Frobenius norm of X squared minus the Frobenius norm of X Theta squared, minimizing this quantity over Theta is the same as simply maximizing the Frobenius norm of X Theta squared — the minus sign changes the minimum to a maximum. And it turns out there's an exact algorithm for doing this; it's not a heuristic. These algorithms are the singular value decomposition and the eigenvalue decomposition. In this class we're not going to go into the details of how they work, but it's worth knowing that they exist, that their complexity is of the order of n d squared, where d is the dimension of x and n is the number of data points, and that there are more efficient methods when r is much smaller than d. What do we do when we've got such a data model? One thing we can do is imputation, and the idea is straightforward. We fitted a subspace; here it is. We've got a data vector with some missing entries; suppose for example we only know x_2. Then we know that the true x lies somewhere on a line of candidates, and we pick the point where that line meets the subspace to fill in the remaining x_1 entry. That corresponds exactly to what we did in our previous session on imputation: we minimize the loss function over the unknown components of x, with the known components fixed. Now, in order to find that intersection point, we have to find the a — remember, a parameterizes the subspace — corresponding to it, and that corresponds to minimizing this objective function.
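As an aside, both pieces just mentioned — the exact fit via the singular value decomposition, and imputation of a missing entry — can be sketched together. This is a minimal illustration on made-up data lying near a one-dimensional subspace of R_2; the variable names are not from the course.

```python
# Fit a rank-1 PCA subspace with the SVD, then impute a missing first
# coordinate given the second. Toy data near the line spanned by (2, 1).
import numpy as np

rng = np.random.default_rng(1)
a_true = rng.standard_normal(50)
X = np.outer(a_true, [2.0, 1.0]) + 0.01 * rng.standard_normal((50, 2))

r = 1
# The top r right singular vectors give the optimal orthonormal Theta:
_, _, Vt = np.linalg.svd(X, full_matrices=False)
theta = Vt[:r].T                       # d x r, theta^T theta = I

# Impute x_1 when only x_2 = 1.0 is known:
# minimize (x_2 - (theta a)_2)^2 over the scalar coefficient a.
x2 = 1.0
a = x2 / theta[1, 0]                   # one unknown coefficient, solved exactly
x_hat = theta[:, 0] * a
print(x_hat)                           # x_hat[0] is close to 2.0, since x_1 ~ 2 x_2
```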
So here, we're looking for the a that minimizes the norm of the difference between x and theta a — the point in the subspace as close as possible to x. However, we don't use the entire norm, because some components of x are unknown; we use only those entries of x which are known, which gives the sum over all i in the known set of (x_i minus (theta a)_i) squared, and that finds exactly the intersection point. If we then look at the corresponding x-hat, which is theta a, its components give us the missing entries of x-hat. Now, when we're fitting our model, we start off with data points x_1 to x_n, and we'd like to choose the best theta so that we're minimizing the empirical risk. The way we do that is we look at the distance between each x and S, where we've parameterized the subspace S by theta. In order to compute that distance, in turn, we have to compute the optimal a. The optimal a tells us which point in the subspace is closest to x, and there's a different a for each of the different x's. The computation that goes into that is this: we find the a that minimizes the norm of x_i minus theta a, and that we know is a_i equals (theta transpose theta) inverse theta transpose x_i; if we restrict ourselves to theta with orthonormal columns, that's just theta transpose x_i, because theta transpose theta is the identity. We can write this in a convenient way: form a matrix A whose ith row is the corresponding a_i transposed, so that A is an n by r matrix, and then A equals X theta. That simply says, equivalently, that A transpose equals theta transpose X transpose, and therefore that the ith row of A, a_i, equals theta transpose x_i.
It's simply writing this as a matrix equation. Now, once we've got the a's, we can correspondingly work out the closest points: there's our subspace, there's our point x_i, and the closest point is x-tilde_i, which is what a_i tells us: x-tilde_i is just Theta times a_i. We can write that as a matrix equation as well: let X-tilde have rows x-tilde_1 transpose up to x-tilde_n transpose, and then this equation corresponds exactly to the matrix equation X-tilde equals A Theta transpose. So our empirical risk is the average of the squared distances between x_i and x-tilde_i. If we forget about the factor of 1 over n, that becomes the Frobenius norm of X minus X-tilde squared — each row of X being the corresponding x_i, and each row of X-tilde the corresponding x-tilde_i, so the Frobenius norm gives us the sum over rows of the squared norms of the differences. We could write this as follows: since X-tilde is A Theta transpose and A is X Theta, substituting those in gives X-tilde equals X Theta Theta transpose. Another thing we could do is simply say X-tilde is A Theta transpose, and look at this expression directly. That says we start off with X, and our job is to find both an A and a Theta. Of course, once you know Theta, A is given to you: A is X Theta. Conversely, once you know A, then X-tilde is given to you. But here we're saying: let's try to find both A and Theta transpose simultaneously. The idea is that this is a matrix factorization: we want to find a matrix A, which has dimensions n by r, and the matrix Theta transpose, which has dimensions r by d.
So X is approximately A Theta transpose, and the dimensions look like this: X is n by d, A is n by r, and Theta transpose is r by d. Now in general, if r is small, one will not be able to find exactly a pair of matrices A and Theta transpose such that X equals A Theta transpose, so this is an approximate matrix factorization problem, and what PCA is doing is finding the closest matrix to X that is a product of an n by r and an r by d matrix. Now, the mapping from x to a equals Theta transpose x can be thought of as an embedding. It takes an x of dimension d and gives a vector a of dimension r, which would normally be much smaller than d. So this is a dimension reduction: we can think of a as a compressed feature vector, where x is the original feature vector. When we've done feature engineering in the past, we've done things like constructing products and applying nonlinear maps to x to construct features. But here the embedding is based on the data set; the embedding is being learned. So this is a learned linear embedding from the d-dimensional space of x's to the r-dimensional space of a's. Now, one of the nice things about this embedding is that it approximately preserves distances: points that are far apart in our original d-dimensional space are also far apart in our r-dimensional space, and points that are close stay close. But because r is less than d, it cannot do that exactly, and so it does it as well as it can. Let's have a look at this.
So this property, that the distances are almost preserved, is called the approximate isometry property of PCA. An isometry, say a map F from R^p to R^q, is a map that preserves distances, which means that if I've got two vectors, x and x-tilde, and I map them both under F, I get F of x and F of x-tilde, and the distance between F of x and F of x-tilde is the same as the distance between x and x-tilde. The most well-known simple example of this is F of x equals Q times x, where Q is a matrix with orthonormal columns, so Q transpose Q is the identity. Then we can see that the norm of Qx minus Qx-tilde squared, well, that's equal to the norm of Q times x minus x-tilde squared, which is equal to x minus x-tilde transpose, times Q transpose Q, times x minus x-tilde. And since Q transpose Q is the identity, this is equal to the norm of x minus x-tilde squared. Now, in PCA, our loss function is the norm of x squared minus the norm of Theta transpose x squared. And Theta transpose x, well, that's precisely a. So this is the norm of x squared minus the norm of a squared. Once you know x, then you know a. And so what we're doing is trying to make the norm of x squared approximately equal to the norm of a squared. In other words, the embedding is going to be a nice approximate isometry: the smaller we can make the loss function, the closer to an isometry it will be. In particular, we'll see that we often choose r to be 2 or 3 simply so that we can visualize the data effectively. By picking r as 2, we can plot all of our data points in the plane. Each x_i gets mapped to an a_i, and these a_i are two-dimensional vectors. And often, by doing that, the PCA embedding picks for us and shows us a map of our data in two dimensions, where points that are close are similar and points that are far apart are dissimilar.
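The orthonormal-Q example above can be checked numerically. This is a sketch with a randomly generated Q (orthonormalized via QR, my choice for the illustration) and random points:

```python
import numpy as np

# Sketch: F(x) = Q x is an exact isometry when Q has orthonormal
# columns (Q^T Q = I), so it preserves distances exactly.
rng = np.random.default_rng(1)
d, p = 3, 6
Q, _ = np.linalg.qr(rng.normal(size=(p, d)))  # p x d, with Q.T @ Q = I

x = rng.normal(size=d)
x_tilde = rng.normal(size=d)

dist_before = np.linalg.norm(x - x_tilde)        # distance in R^d
dist_after = np.linalg.norm(Q @ x - Q @ x_tilde)  # distance after mapping
```

PCA's Theta plays the role of Q, but since r < d the map Theta transpose can only preserve distances approximately.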
We'll see an example of that. So the example we're going to look at is called latent semantic indexing. The idea here is that we have a corpus, a body of documents, a collection of documents. Each of our records u_i will be a document. In that corpus of documents, we can look at all the words and count up the number of different words in there, and we'll call that number d, the number of unique words across all the documents. Now, we can number all of the words that show up anywhere in our corpus of documents from 1 through d, and that gives us a very natural way of embedding documents. We can look at a document and set x_1 to be the number of times word 1 occurs in that document, x_2 to be the number of times word 2 occurs in the document, and so on. So we'll have a histogram of the word occurrences for each document. And that's a very reasonable embedding, which maps a document to a d-dimensional vector. In fact, we don't tend to use that particular embedding, but we use a very similar embedding with the same idea, where the jth component of x is related to the number of times the jth unique word occurs in that document, and is larger the more often the word occurs, but it's not quite the raw count. Let's see why not. In order to construct this particular embedding, we're going to use two quantities. The first is called the term frequency of word j. What that is, is that if you give me a particular document, I count up the number of occurrences of word j in that document and divide by the total number of words in that document. So that tells me what fraction of the words in the document are word j. That's called the term frequency of word j in document u. Now I can also look at a different quantity, the document frequency of word j.
And that is, if I look at the entire corpus of documents, the number of documents in which that word occurs, divided by the total number of documents I have. Now, why is this a good idea? If we just kept a histogram of the term frequencies, that would certainly make a vector with large entries for words that show up a lot in that document, and small entries for words that don't. The trouble is that some of the large entries would be kind of meaningless. They'd be words like "the", "if", "and", "but", words that show up a lot in all documents. And so, as a result, we would like to scale in some way, to de-emphasize words which are popular in all the documents. And that's what we do with the document frequency. So this is called a TFIDF embedding, the term frequency-inverse document frequency embedding. It's not quite the ratio of the term frequency to the document frequency; it's the term frequency multiplied by the log of one over the document frequency. And this does the following: if a word j occurs very often in a document, and it doesn't occur very often across all documents, then the TFIDF entry will be large. And here we have a way of discounting the occurrence of very common words such as "the". So let's look at a specific example. Here we're going to have two texts, The Critique of Pure Reason by Immanuel Kant and The Problems of Philosophy by Bertrand Russell. These are both very famous philosophical works, both in a specific area of philosophy, and they're famous in part because they take opposing views. The way we've analyzed these is we've taken 50 excerpts from each of these books. Each excerpt has about 3,000 characters.
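Before continuing with the Kant/Russell example, the TFIDF embedding just defined can be sketched on a tiny hypothetical corpus (the three documents below are mine, chosen only so the common word "the" gets visibly discounted):

```python
import numpy as np

# Sketch: TF-IDF embedding x_j = (term frequency of word j in the doc)
#                                * log(1 / (document frequency of word j)).
docs = [
    "the cat sat on the mat",
    "the dog sat on the log",
    "philosophy of pure reason",
]
vocab = sorted({w for doc in docs for w in doc.split()})
N = len(docs)

# document frequency: fraction of documents containing word j
df = np.array([sum(w in doc.split() for doc in docs) / N for w in vocab])

def tfidf(doc):
    words = doc.split()
    tf = np.array([words.count(w) / len(words) for w in vocab])
    return tf * np.log(1.0 / df)

X = np.vstack([tfidf(doc) for doc in docs])  # n x d data matrix
```

"the" appears in two of the three documents, so its inverse-document-frequency weight log(1/df) is smaller than that of a rarer word like "cat", which appears in only one.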
For each excerpt, we split it into words and removed any punctuation and capitalization, and that gives us, over the whole corpus, a total of 3,566 unique words. We embed these using the TFIDF embedding, we standardize, and we apply PCA. So here is an example of 1,000 characters of Kant. You can read this and get a sense of what it sounds like. And 1,000 characters of Russell. Now, just based on 1,000 characters, of course, you can't glean too much of the meaning. And the question is, can we learn something that would enable us to distinguish these two documents? So here, this is the embedding that PCA comes up with. We've picked r as 2. We've got a data matrix X, which is 100 by 2,262. This is 100 different excerpts, 50 from Russell, 50 from Kant. Now, each data point, each vector x_i, gets mapped to an a_i. That a_i, of course, is just Theta transpose times x_i. And because we picked r as 2, each document corresponds to a vector in the plane, and so we can plot all our documents. And here they are, colored: the blue ones corresponding to Kant, the red ones corresponding to Russell. And you can see they're quite well separated. And that's one of the things PCA does for us: it shows us the structure. It's given us an embedding where it has tried to spread out the features as much as possible. Now, we have a Theta, which is a matrix, which is d by 2, with two columns, Theta_1 and Theta_2. The way we should think about that is that these are our basis vectors for our subspace S. Each entry of Theta_1 corresponds to a particular word, and the same for each entry of Theta_2. And so here we can take all of our words and plot, for each word i, the corresponding entries of Theta_1 and Theta_2. Now, what does this mean? For example, let's pick a word.
Let's pick this word down here, which is "representation". Now, the word "representation" here has a Theta_2 entry which is negative, and a Theta_1 entry which is quite small, close to 0. It means that if we have a document in which the word "representation" shows up a significant number of times, then the corresponding embedded vector a will have an a_2 which is shifted negative by those words, and an a_1 that isn't really changed. Similarly, if I look at a word over here, this word is "about": that's got a positive Theta_2 entry and a negative Theta_1 entry. And as a result, if the word "about" occurs significantly in a particular document, then the corresponding embedded a will be moved, so here's the origin, will be moved in this direction. So now we can look back at our texts. And we can say that because Kant's documents are down here and Russell's documents are up here, that suggests that these words over here are words that Kant tends to use, and these words over here are words that Russell tends to use. Let's look at some of these. I happen to know that there's "transcendental" over here and "conception" over here. If we look at Kant there, we can see, in the middle, "transcendental conception", right there. And so the reason why these documents split up like this, and we can distinguish Russell from Kant, is that they tend to use different words with different frequencies. And that shows up and is detected by PCA. Now, of course, it doesn't always happen that a two-dimensional embedding will split your documents so nicely like this. But nonetheless, this gives us a great choice for an embedding which will enable methods such as classification or regression, which we learned earlier in the class, to do a good job. And we're seeing that here. We may want or need to use an r of higher dimension, which means we won't be able to visualize it, but it will still make our classification and regression methods work well.
Now, I think that brings us to the end of the class. Officially, we have one more lecture scheduled for next week, but I think there is no need to try to fit one more topic into one lecture, and so we're going to stop here. I know it's been a very challenging quarter, and I appreciate you all working hard on this class during such difficult times. Despite the continuing challenges we are facing in spring 2020, I hope you've had a productive quarter, I hope this class has gone well for you, and I wish you all the best for the summer.
Stanford EE104 Introduction to Machine Learning, 2020, Lecture 18: Unsupervised Learning

Hello, and welcome to this section on unsupervised learning. This is really a sudden shift in topic; we're moving on to a new section within the class. So far, everything we've talked about in the class has been supervised learning, and now we're going to start talking about unsupervised learning. The idea in supervised learning is that we have pairs of records, u and v, and we want to learn a model which predicts v given u. It's called supervised learning because the v_i's are giving us information about what the right answer is in particular cases corresponding to the u_i's, and this supervises the learning of the model. In unsupervised learning, it's different. We only have records u, and our goal is to build a model of the u's. We'd like to be able to do things such as reveal the structure of the set of possible u's. We'd like to be able to deal with missing entries in the u's and figure out what they are; that's called imputation. We'd like to be able to detect anomalies, unusual cases which we've not seen before. The ideas of revealing structure or detecting anomalies are kind of vague at this point, and we'll make them a little more precise. And imputing missing entries, well, we'll see exactly how to do that. So, just as before, we work with embedded data: we take our data u and embed it into a feature vector x. x is Phi of u, and x is some vector that lives in R^d. Then we build our data model for the vectors x. When we need to, we unembed to go back to the raw record u. So from now on in this section, we're going to work with the feature vector x. We'll have an embedded dataset x_1 through x_n, where each of these is a vector in R^d. Now, the way we construct a model for this dataset is via a loss function.
We'd like to have something which characterizes the elements of the dataset, which tells us what elements of the dataset look like or should look like. And we do this with a loss function, which we could also call, in this case, an implausibility function. It's a real-valued function on R^d, our space of possible x's, and it tells us how implausible x is as a data point. So if l of x is small, then x looks like our data; it's typical. And if l of x is large, then x does not look like the data. Now, the model might be probabilistic. So x might correspond to a probability distribution or a probability density, p of x, and then we would take l of x to be the negative log of p of x, the negative log probability density. We might think of that as the negative log-likelihood of x. Other names for l of x: we might talk about surprise or perplexity. l is often parameterized by a vector Theta, and it might be a matrix Theta, and so we'll put a subscript Theta on l: l sub Theta of x. Let's look at the simplest example. Suppose our data model is that x is near a fixed vector Theta. So the model here is parameterized by the vector Theta, and we're going to try to learn that from the data. You might have various implausibility functions associated with this. For example, you might have the square loss, the squared 2-norm of x minus Theta, which is the sum of the squares of the components of x minus Theta. Or the 1-norm of x minus Theta, which is the sum of the absolute values of x_i minus Theta_i. A different data model is the K-means data model. Here, the idea is that rather than having just one ideal point, one representative, we have K of them, Theta_1 to Theta_K, all vectors in R^d. And we believe that our data is close to one of these representatives. Now we measure how implausible a data point x is by asking: what's the distance between x and the closest representative?
Which is just the minimum over i from 1 to K of the norm of x minus Theta_i squared, if the squared distance is what we're using as our distance measure; we might equally well use a 1-norm or a different norm, but K-means specifically means the squared 2-norm distance. The model is parameterized by these K d-dimensional vectors, which we could equally well view as a d by K matrix, which we'll call Theta, whose columns are Theta_1 to Theta_K. Now, it's worth looking at the role of the loss function in supervised versus unsupervised learning. In supervised learning, the loss function is used to choose a particular predictor from a family of predictors parameterized by Theta. Once we've chosen the predictor, we no longer really care about the loss function; the predictor itself is our model of how x and y are related. In unsupervised learning, the loss function plays a different role, because it characterizes what the data looks like: the loss function actually is the data model. Getting the loss function, that's the primary goal of unsupervised learning. Using this loss function enables us to build, for example, an anomaly detector. That's a particular way of using a data model to identify anomalies: suspicious feature vectors, feature vectors that aren't consistent with the other feature vectors we've seen. A common application is, say, network traffic monitoring. We might have feature vectors that include statistics of the sizes and distribution of packets. And what we do is we fit our data model; we have a loss function l parameterized by Theta.
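Before going on with the network-monitoring example, the two implausibility functions introduced above can be sketched directly (a minimal illustration of my own; the particular theta, Theta, and test points are made up):

```python
import numpy as np

# Sketch: implausibility functions for the constant data model
# ("x is near theta") and the K-means data model.
def const_sq_loss(x, theta):
    return np.sum((x - theta) ** 2)     # ||x - theta||_2^2

def const_abs_loss(x, theta):
    return np.sum(np.abs(x - theta))    # ||x - theta||_1

def kmeans_loss(x, Theta):
    # Theta is d x K; squared distance to the closest column (archetype)
    return np.sum((Theta - x[:, None]) ** 2, axis=0).min()

theta = np.array([1.0, 2.0])            # constant model parameter
Theta = np.array([[0.0, 5.0],
                  [0.0, 5.0]])          # K = 2 archetypes in R^2

typical = np.array([1.1, 2.1])          # near theta, so small loss
```

A point sitting exactly on an archetype has K-means loss zero, however far it is from the other archetypes.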
We fit that by choosing the Theta, and then we say: if we look at all of our data, x_1 through x_n, we can look at the corresponding distribution of the loss function, and we can find a percentile value t, say the 99th percentile value of that loss function, so that 99% of the observed values of l of x are less than or equal to t. Then, when new data comes in as we're watching the network, every time we get an x, we evaluate the loss function on it, and we flag it as anomalous if the loss function is greater than our threshold t, that 99th percentile value. And this is an anomaly detector. We can also use our data model to impute missing entries. So suppose x has some entries missing. We might label them with question marks or NA or NaN, for "not available" or "not a number", within the dataset. We'd like to fill in those missing entries. For any given vector x, we've got a subset of the numbers 1 through d, which is the set of known entries; we'll call that set script K for that data element x. And we're going to replace x with x-hat. x-hat is going to be such that its values on the known entries match: if i is one of the known entries, then we'll have x-hat_i equal to x_i. If i is not one of the known entries, then we're going to have to fill it in; x-hat_i replaces a question mark. So on the slide here, we have an example where x has dimension 4 and K is {1, 3}, which means we know the first and the third entries of x. Our job is going to be to fill in the unknown entries; here, we've filled them in with minus 1.5 and 3.4. The quantity on the right is x-hat, and the first and third entries of x-hat are forced to be equal to the first and third entries of x, because we know those.
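The percentile-threshold anomaly detector described above can be sketched as follows. This is a toy version with a constant data model and synthetic Gaussian training data (both my own assumptions for the illustration):

```python
import numpy as np

# Sketch: flag x as anomalous if its loss exceeds the 99th-percentile
# loss value observed on the training data.
rng = np.random.default_rng(2)
theta = np.array([0.0, 0.0])               # fitted constant-model parameter
X = rng.normal(size=(1000, 2))             # training feature vectors

losses = np.sum((X - theta) ** 2, axis=1)  # l(x_i) for every training point
t = np.percentile(losses, 99)              # threshold: 99th percentile

def is_anomalous(x):
    return np.sum((x - theta) ** 2) > t
```

By construction, about 1% of the training points themselves would be flagged, which sets the detector's false-alarm rate.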
But our job is to fill in the others. And of course the set K might be different for each of our data records; for each i, we might have a different K_i. An example of this is, say, a recommendation system. The features here are different movies, and the examples are customer ratings. So we have d different movies and n different customers, and each customer has filled in ratings for some of the movies. d could be very large; the Netflix catalog has many, many thousands of different movies, and so x might be a vector in R^10,000. But each particular customer may have only watched a few of the movies, and so most of the entries will be question marks, and some of the entries will be ratings, maybe on a Likert scale from 1 to 5. Then our job is to impute the missing entries. That tells us what rating the customer would give if they rated that particular movie. And then we're going to use this information to actually make a recommendation: for each customer, we look at the imputed values and find those which are large, movies that they would have rated highly if only they'd rated them, and we send them a recommendation saying, "You might like this movie." And this is exactly the setting of the Netflix challenge, where, a few years ago now, there was a prize offered by Netflix for people to build a recommendation system which could impute ratings in this way. Another application would be to fill in missing features for supervised learning. You're trying to do supervised learning, classification or regression, and your x's have some missing features. So far, what we've done is remove the records that have missing features; all of the methods we've seen so far have required every data record i to have an x_i which is complete.
It can't have any question marks or NAs in it. And sometimes you do lose a substantial fraction of the data this way. For example, with some of the datasets we looked at earlier, such as an Australian weather dataset, we lost a substantial fraction of the data by eliminating records that were missing just one element. An alternative approach is to use imputation to fill in the missing feature entries, and then use the filled-in dataset to do supervised learning. Another thing you can use imputation for is to detect anomalous entries. Here we're not trying to detect anomalous records, but anomalous entries: particular components of a particular x. So what do we do? Well, for each i, we pretend that x_i, the ith component of x, is a question mark, is unknown, and we impute to find x-hat_i, based on all the other entries of x. If x_i and x-hat_i are very different, then we flag x_i as anomalous. In the case of our movies example, we've identified movies which, based on all the other movies the person has rated, we would expect them not to have liked, but they actually gave a high rating, or vice versa. Now, the distinction between supervised learning and unsupervised learning is not so great. In particular, one can view supervised learning as a special case of imputation; at least, one can formulate supervised learning as a special case of imputation. Suppose we want to predict y based on x, and we have training data x_1 through x_n and y_1 through y_n. We construct a new set of records consisting of d plus m dimensional vectors x-tilde, where each x-tilde is simply x stacked on top of y. Now, we build a data model for x-tilde using this training data, and then we impute the last m entries of x-tilde. And it so happens that every record is missing the last m entries of x-tilde.
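The anomalous-entry idea above can be sketched as a leave-one-entry-out loop, jumping ahead slightly to the K-means imputation rule that is derived later in this section. The archetypes Theta, the example vector, and the tolerance are all hypothetical, chosen for illustration:

```python
import numpy as np

# Sketch: flag anomalous *entries* (not whole records) by imputing each
# entry from the others under a K-means data model, then comparing.
def anomalous_entries(x, Theta, tol):
    flags = []
    for i in range(len(x)):
        masked = x.copy()
        masked[i] = np.nan                    # pretend entry i is unknown
        known = ~np.isnan(masked)
        # nearest archetype, measured on the known entries only
        d2 = np.sum((Theta[known] - masked[known, None]) ** 2, axis=0)
        j = d2.argmin()
        x_hat_i = Theta[i, j]                 # imputed value for entry i
        if abs(x[i] - x_hat_i) > tol:
            flags.append(i)
    return flags

Theta = np.array([[0.0, 10.0],
                  [0.0, 10.0],
                  [0.0, 10.0]])   # two archetypes in R^3
x = np.array([10.1, 0.5, 9.8])    # entry 1 is inconsistent with the rest
```

Entries 0 and 2 suggest the second archetype, so the imputed value for entry 1 is near 10, far from the observed 0.5, and only that entry gets flagged.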
Now, in order to use a data model to perform imputation, we look at our vector x and we say: well, we don't know some of the entries, but we know the others. So the entries we don't know, we're going to choose by picking them such that the loss function l is minimized. Specifically: I've got a loss function, a function of x. I'm going to fix the components that I know, allow myself to vary the components that I don't know, and minimize the implausibility, which is just the loss function. This is a very natural thing to do, and it turns out that it works very well. So here's an example with a constant data model. We have a bunch of data points, shown here in red, and we have our Theta, which is the parameter that specifies the model. Our model says that we expect all of the data to be close to Theta; in other words, we have a loss function which is the norm of x minus Theta squared. The first thing we do is pick the Theta given the data, and that gives us that Theta is the mean of the data points. The second thing we do is say: well, we've got a Theta, and now we're going to solve an imputation problem. In our imputation problem, we are given an x with a missing entry. x has two components; the first one is unknown, and the second one is 2.8. That means the true x is somewhere along this line, the line of vectors x whose second component equals 2.8. We get to pick x_1, and the way we do that is by minimizing the loss function over x_1, subject to the constraint that x_2 is 2.8, and that gives us this point right here. We get to move along this line, to get as close as possible to Theta, Theta being right there. And that turns out to be x-hat_1 equals 0.79. If we have a K-means data model, well, here we have an x with some unknown entries.
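The constant-model imputation just described has a closed form: minimizing the norm of x-hat minus theta over the unknown entries simply copies theta into the unknown slots. Here is a sketch on a small made-up dataset (not the lecture's plotted data, so the numbers differ from its 0.79 example):

```python
import numpy as np

# Sketch: imputation with the constant data model. Fix the known entries
# and choose the unknown ones to minimize ||x_hat - theta||^2; the
# minimizer copies theta into the unknown slots.
X_train = np.array([[1.0, 2.0],
                    [3.0, 4.0],
                    [2.0, 3.0]])
theta = X_train.mean(axis=0)        # fitted model: the mean of the data

x = np.array([np.nan, 2.8])         # first entry missing
known = ~np.isnan(x)

x_hat = np.where(known, x, theta)   # unknown entries gravitate to theta
```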
Now the loss function is the minimum over j of the norm of x minus Theta_j squared. In other words, you look at your K different Theta vectors, your K different archetypes, pick the one that's closest to x, and take the squared distance between x and that closest archetype; that gives you the loss. Now, we've got an x that has some missing entries. Some of the entries are fixed, and we can't do anything about those; the other entries, we get to minimize the loss function with respect to. So the first thing we do is say: let's find the nearest representative Theta_j to x, but we can only use the known entries. That means that instead of looking at the full 2-norm, we look at the sum over the known entries, the i's in K, of x_i minus the ith component of Theta_j, squared. That is the loss function, evaluated where we've allowed the components that we don't know to be free. And when we allow the components that we don't know to be free, well, they gravitate to be such that x-hat_i is equal to the ith component of Theta_j, because that minimizes the loss. And so we end up with a loss that has only these terms left in it. Just to be explicit about that: the minimum over j of the norm of x minus Theta_j squared, that's our l of x. What we're trying to do is minimize l of x-hat subject to the constraint that x-hat_i is equal to x_i for i in the known set. If I take that minimum, I end up minimizing, over the unknown components of x-hat, the minimum over j of the norm of x-hat minus Theta_j squared.
And if we minimize this quantity with respect to the unknown components of x-hat, well, one thing I can do is swap the order of the minimizations, and that becomes the minimum over j of the minimum over the unknowns of the norm of x-hat minus Theta_j squared. And if I'm allowed to choose the unknown components of x-hat to minimize that quantity, what am I going to do? I'm going to make the unknown components equal to the corresponding components of Theta_j, and that leaves me with only the known-entry part of the cost function. So that tells us how to pick the representative corresponding to a particular x, it tells us how to pick j, and then we want to pick the unknown components. Well, we already know what they are: they have to be such that x-hat_i is equal to the ith component of Theta_j. So for the unknown entries, what we do is we guess the entries of the closest representative. So we have a data model, and just like in supervised learning, we need to be able to validate that our data model and our imputation method are actually good. We do this in the following way. We split the data into a training set and a test set; we use the training set to build the data model; and then we look at the test set and mask out some of its entries, pretending they're unknown. We impute those entries, and then look at the average error of the imputed values, the RMSE, for example. And that validates that our imputation method is working correctly. We would typically pick the training/test split randomly, or by K-fold validation, and then, to do the masking, we would typically mask random entries within the test set. So how do we fit a data model? We have x_1 to x_n.
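The K-means imputation rule derived above, pick the archetype closest on the known entries, then copy its values into the unknown slots, can be sketched as follows (archetypes and the example vector are hypothetical):

```python
import numpy as np

# Sketch: imputation with a K-means data model.
def kmeans_impute(x, Theta):
    known = ~np.isnan(x)
    # squared distance to each column of Theta, using known entries only
    d2 = np.sum((Theta[known] - x[known, None]) ** 2, axis=0)
    j = d2.argmin()                      # closest archetype on known entries
    return np.where(known, x, Theta[:, j])

Theta = np.array([[0.0, 10.0],
                  [0.0, 10.0],
                  [0.0, 10.0]])          # two archetypes in R^3
x = np.array([9.5, np.nan, 10.2])        # clearly near the second archetype
x_hat = kmeans_impute(x, Theta)
```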
Let's suppose we have no missing entries and we have a parameterized implausibility function l_Theta of x. How do we choose the parameter Theta? Well, we minimize the empirical risk, the average implausibility: 1 over n times the sum from i equals 1 to n of l_Theta of x_i. We choose Theta to minimize this, and we may have some constraints on Theta; sometimes the allowed Thetas are limited in some way. So we choose the parameter Theta so that the observed data is the least implausible. Let's look at the simplest case, the sum-of-squares loss. The loss function is l_Theta of x equals the norm of x minus Theta squared, and the empirical loss is the average of that quantity. We've seen this before, when we were looking at the constant predictor problem with the square loss: the best choice of Theta, the minimizing choice, is the mean of the data vectors. If we have the absolute loss, the sum of absolute values, well again, this is exactly like the constant predictor case. Here, in the unsupervised case, we have a loss function which is the 1-norm of x minus Theta, the sum of the absolute values of the components of x minus Theta. We look at the empirical loss, and the optimal Theta is the median of the data vectors x_1 through x_n, meaning the element-wise median. The K-means model goes like this: the implausibility function is the squared distance between x and the closest archetype Theta_j, and so our parameter is a d by K matrix, or equivalently, K d-dimensional vectors Theta_1 through Theta_K. The empirical loss is the average of that over all of the data points: 1 over n times the sum from i equals 1 to n, where for each i we find the closest archetype Theta_j and take the squared distance between x_i and Theta_j. And of course, which archetype is closest depends on which data point you have.
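The mean and median fits above can be sketched directly; the tiny dataset below is mine, with one outlier to show why the two answers differ:

```python
import numpy as np

# Sketch: fitting the constant model by empirical risk minimization.
# Square loss -> element-wise mean; 1-norm loss -> element-wise median.
X = np.array([[1.0, 0.0],
              [2.0, 0.0],
              [9.0, 0.0]])   # one outlier in the first coordinate

theta_mean = X.mean(axis=0)          # minimizes avg ||x_i - theta||_2^2
theta_median = np.median(X, axis=0)  # minimizes avg ||x_i - theta||_1
```

The outlier drags the mean toward it while the median barely moves, which is the usual reason to prefer the absolute loss when the data has outliers.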
And this is called the K-means objective function, and we use an algorithm called the K-means algorithm to minimize it. This is one of the cases where we have a specific problem, the K-means optimization problem we've discussed, and the algorithm has the same name, which is a little confusing, because one could use different algorithms to solve the same problem. But the most commonly used algorithm is called the K-means algorithm, and it works like this. For any given choice of the K vectors Theta_1 through Theta_K, I've got these K vectors and a bunch of data points, and each data point's contribution to the loss is the distance to its closest archetype. So we can assign to each data point a corresponding archetype, the archetype which is closest to that data point. We label that assignment with a vector c in R^n, where c_i is a number between 1 and K which tells us which of the Thetas has been assigned to data point i; in other words, which of the Thetas is closest to that data point. Then the loss function, the empirical risk, is 1 over n times the sum from i equals 1 to n of the squared distance between x_i and its closest archetype. That's explicitly 1 over n times the sum from i equals 1 to n of the norm of x_i minus Theta_{c_i} squared, because c_i is the assignment. Now, we want to choose both c and the Thetas; we have to choose the assignment and choose the corresponding Thetas. Once we've chosen the Thetas, the assignment is easy: we assign to the ith data point the closest archetype. We find, over j, the minimum of the norm of x_i minus Theta_j squared, and c_i is the j that minimizes that quantity. How do we minimize over the Thetas? Well, we can minimize this quantity once we've fixed the c_i's.
So first we do the assignment, finding the best c_i's. Then minimizing over the Thetas is straightforward, because this quantity splits up into K different terms: one for the data points assigned to Theta_1, another for the data points assigned to Theta_2, and so on. It becomes 1 over n times the sum over all i such that c_i equals 1 of the norm of x_i minus Theta_1 squared, plus 1 over n times the sum over all i such that c_i equals 2 of the norm of x_i minus Theta_2 squared, and so on up to K. We'd like to minimize that sum over Theta_1 through Theta_K. To find the minimum over Theta_1, we minimize the first term over Theta_1, and minimizing that simply tells us to set Theta_1 to be the mean of the corresponding x_i's. To minimize over Theta_2, we set Theta_2 to be the mean of its corresponding x_i's, in other words, all of the x_i's assigned to category 2, and so on. Once we do that, we've got new Thetas, and that means the assignment might change. So we reassign: we go through and once again pick the c_i's that assign each x_i to the closest Theta_j. That's a new assignment, and it means the Thetas are going to change, so we go through and update all the Thetas to be the means of their assigned data points. And we alternate between these two steps: assign data points to archetypes, then adjust the archetypes to be the means of their assigned data points, then assign data points to archetypes, and so on. This is a heuristic for approximately minimizing the empirical risk. Here's an example. We start out with some guesses for the Thetas, and we might initialize those completely randomly; that would be very common. So here there are three guesses: say Theta_1, Theta_2, and Theta_3.
Um, and now, given those guesses for the Thetas, we can make assignments. All of the points which are closer to Theta_1 than to any of the other Thetas have been colored in red. The points that are closest to Theta_2 are colored blue, and the points that are closest to Theta_3 are colored green. Remember, we don't have any colors associated with our data. Our data is just points. But once we've picked Thetas, we can label the data points. And those labels are the c_is. So the red points are those points for which c_i is 1, the blue points are points for which c_i is 2, and the green points are points for which c_i is 3. So now we've got an assignment, right? We've picked the c's by painting the points in their appropriate colors. Now we can see that, well, these Thetas don't minimize the empirical risk with those assignments, because I can make the empirical risk smaller by moving this Theta to the mean of all of these data points, which is somewhere here. And I can move this Theta to the mean of its assigned data points there. And this one doesn't move very much. Maybe it moves a little bit in, uh, this direction, say. So I move them and this is where they end up. And now that I've moved them, I can reassign each data point to its closest archetype, and that changes their colors. So now I've got the same data points, but I've recolored them according to their closest archetype. So now these are the red ones here, these are the blue ones, these are the green ones. And we can see that some data points have changed from red to blue and some have changed from blue to red. None of the green ones have changed. And so now, I've reassigned data points. My archetypes are no longer in the best possible place. I adjust my archetypes to be the mean of their corresponding data points. So this one's going to move up a bit, this one's going to move down a bit. And, uh, I keep going until I converge, alternating between assigning colors and taking the means.
And here's the convergence; you can see this has converged very quickly. After four iterations, we've completely converged. And notice that this converges perfectly in the sense that once we're not changing the assignments c anymore, well, then the Thetas don't change either. And so this reaches a point where the Thetas are stuck and no longer move at all. Here, we can look at both the training loss and the test loss and the imputation error. And so you can see here in blue, there's the training loss. Um, in green, you can see the test loss follows pretty closely. And then the red is the imputation error. And this is plotted as a function of k, the parameter which is the number of archetypes we're choosing. And so how do we choose this? Well, this suggests that maybe we should pick a k somewhere around 4 or 5 or maybe even 3. Um, so I guess this is 1, this is 2, 3. So we should pick a k that's 3, which is quite reasonable if we look at our dataset. Our dataset really does split naturally into three clusters, and our algorithm has found that. And then we can validate by removing either u_1 or u_2 from each record in the test set and computing the RMSE. |
Stanford_EE104_Introduction_to_Machine_Learning_Full_Course | Stanford_EE104_Introduction_to_Machine_Learning_2020_Lecture_1_course_information.txt | Hello and welcome to EE104 or CME107. This is the course, uh, Introduction to Machine Learning at Stanford University, Spring 2020. My name is Sanjay Lall. I am the instructor of this class. This class was written by myself and Professor Stephen Boyd of Stanford, over the past three years. Uh, we are, as you know, in the middle of a coronavirus outbreak. And, um, as a result, much of the university is closed and many of you are working at home. Um, the course is therefore going to be entirely online. All of the lectures will be pre-recorded and we will not have any live lectures, unfortunately. Uh, we are aware that many of you are watching these lectures from a variety of different time zones, and we will do our best to, uh, accommodate that. Uh, we will post new lectures every Tuesday and Thursday morning, and we will hold office hours, uh, via Zoom. All right, due to the current situation, there will be no exams in this class. Uh, all classes in the School of Engineering are going to be credit or no credit only, and this class is no exception. Um, but there will be homework. Uh, it will be given out weekly. The homework will be about half programming and the other half will be conceptual questions, maybe a little bit of mathematics. Um, the programming questions, we'll be using Julia for those. Uh, Julia is a modern language which is developed primarily for numerical computing and machine learning in particular. It's very fast. It's got a very nice syntax which makes it, uh, very simple to write sophisticated programs. You do not need to have a very sophisticated understanding of the Julia programming language. Uh, most of the coding that we will do in this class is very short scripts.
We will be using the Flux machine learning package, that is a package that sits on top of Julia. There are some prerequisites in this class, in particular linear algebra. We will be using a lot of the notation of linear algebra, and we will need you to, uh, be familiar with things such as least squares and a basic understanding of eigenvalues and eigenvectors later in the class. We don't need anything more sophisticated than that, so any first class in linear algebra will cover enough. Similarly for programming, we do need you to be able to write code, uh, but very simple code will suffice, since, as I said, we're mostly going to be writing short scripts. Uh, but you do need to be familiar with how to bring data in, how to generate plots, how to understand what data types are. You don't need to have any understanding of complex data structures or object-oriented programming or anything like that. Uh, probability is a co-requisite, not a prerequisite. So, uh, any basic class in probability will do. Uh, if you haven't taken it up until now, you can take it concurrently with the class and that's fine. We won't be using probability for the first few weeks, um, but after that we'll start to use both the language and some of the ideas and methods of probability as well. And that brings me to the end of the mechanics section of, uh, the class. Uh, I hope you enjoy EE104. |
Stanford_EE104_Introduction_to_Machine_Learning_Full_Course | Stanford_EE104_Introduction_to_Machine_Learning_2020_Lecture_9_house_prices_example.txt | Welcome to Introduction to Machine Learning. This lecture is an example, and the idea is that we will go through a very simple example using all of the topics that we've discussed so far and, uh, take a little bit of a look at the sort of results one can expect, and also the, uh, code that's necessary to get it to work. So this is, uh, a data set that comes from Kaggle. Kaggle is, uh, a Google-owned company that, uh, organizes machine learning competitions. You can go to Kaggle and download, uh, data for a range of different, uh, domains. In this case, this is, uh, house prices. Uh, Kaggle also keeps, uh, data sets in escrow so that one can compare the results of your own machine learning algorithm against, uh, a validation set which you've not seen before and other people haven't seen before. And it also keeps track of people's performance on particular data sets. So it's a worthwhile place to go to get experience with, uh, trying machine learning in a variety of domains. Uh, this is a data set that consists of, um, prices and features for 1,456 homes in Ames, Iowa, and those were homes that were sold between 2006 and 2010. And, uh, here, our goal is going to be to try to use the features of the houses in order to predict the price. And we're gonna focus on predicting the log of the price, because relative price is, uh, much more important than absolute price, and house prices typically vary over a significant range. Uh, so our performance metric is going to be the RMS error on the test set of the log of the house price. And in particular, if you have, uh, an RMS error of say, 0.1, then it means that you can predict house prices within a factor of e^0.1, which is about 10.5%. Here's the sort of thing one sees in this data.
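To make that last point concrete: an RMS log-price error of r corresponds to predictions typically off by a multiplicative factor of e^r. A quick check in plain Python (just illustrative arithmetic, not from the lecture):

```python
import math

r = 0.1                               # RMS error in log price
factor = math.exp(r)                  # typical multiplicative error in price
print(round((factor - 1) * 100, 1))   # about 10.5 (percent)
```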
Here we have a plot of the target variable, which is the log of the price, against one of the, uh, independent variables, the, uh, living area of the house in this case. And you can see, well, first of all, there's quite a lot of variation. Just knowing the living area doesn't narrow down the price very much. Um, here we've got, uh, house prices varying between e^10.5 and e^13 or e^14, um, which must be less than 100,000 to more than 500,000. And, uh, so we're seeing quite a variation here. Now, the data set actually contains some 80 features. We're gonna use, uh, maybe, uh, the first, uh, 20 features and, uh, focus on, uh, those. So for embedding the, uh, target variable, we're going to let v be the price and y be the log of v, that is, the log of the price. And then for the independent variables x, we're going to, uh, embed them as follows. Some of the fields of a house record are numerical. And those we can just embed unchanged. So in particular, we have the year the house was built, the area of the living space, the area of the first floor, the area of the second floor, the area of the garage, the area of the wooden deck, the area of the basement, the year of the last remodel, and the area of the lot. So all of the areas are in square feet, they're just numbers, and the years are simply years, they're just integers. And we will just embed those as numbers as they are. There are also ordinal fields in our features, and, uh, we will embed those as integers. So we have, uh, number of bedrooms, number of kitchens, number of fireplaces, number of half bathrooms, number of rooms, condition. Condition is a number that's scored between 1 and 10 that's assigned by an expert, uh, presumably an appraiser or a realtor. We have the quality of the materials and the finish, again, assigned by an expert, with a score between 1 and 10, and the number of cars that the garage can hold.
So these are all small integers, typically between 0 and 10, and we just embed them as they are. The kitchen quality is, uh, a field which is stored on a Likert scale. The kitchen is rated excellent, good, typical, fair, or poor, though I don't think there are actually any entries in the data set that receive the poor rating. And this is encoded as an integer between 1 and 5 after we embed it. Uh, the building type is a categorical field. This is embedded one-hot, and so it's embedded as a five-dimensional vector, one of the canonical unit vectors with a 1 in one position and 0 in all the other positions. Uh, the five different categories are single-family, townhouse end unit, two-family conversion, townhouse inside unit, and duplex. Um, there's also a neighborhood field. There were 25 different neighborhoods. So it's a categorical data field, and this is also one-hot embedded. As a result, we have, looking back at our, uh, fields here, we have 17 numerical fields. And we have the kitchen quality, which is the 18th, and then we have, uh, 30 components which are one-hot. So the total dimension of our x data variable is 48. And we're gonna add one more, which will be the constant. Now, when we do the standardization and the data splitting, we do this in a particular way. So we split the data randomly, 80/20, 80 percent for training and 20 percent for test. And that gives us, uh, an X_0 training set and a corresponding Y_training, and an X_0 test set and a corresponding Y_test. Uh, now, the way we do standardization is we use the training set to compute the means and the standard deviations of each of the features. So that means we'll get 48 numbers corresponding to the means of each column of X_0_train and 48 numbers corresponding to the standard deviations of each column of X_0_train. And now we can use the means and standard deviations to standardize X_0_train, um, simply by, uh, subtracting off the means and dividing by the standard deviations.
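As a quick sanity check on the dimension bookkeeping above (just arithmetic, mirroring the counts quoted in the lecture):

```python
numerical = 9     # years and areas: year built, living area, floors, garage, deck, basement, remodel year, lot
ordinal = 8       # bedrooms, kitchens, fireplaces, half baths, rooms, condition, quality, garage cars
likert = 1        # kitchen quality, embedded as an integer 1 to 5
one_hot = 5 + 25  # building type categories plus neighborhoods
x_dim = numerical + ordinal + likert + one_hot
print(x_dim)      # 48
print(x_dim + 1)  # 49 once the constant feature is appended
```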
And we use the same means and standard deviations to standardize the test set. Now, in particular, here we don't want to use the test set to compute the means and the standard deviations, because that would be including information from our test set into our predictor. Uh, there's also a particular caveat that you should be aware of, and that is, because many of our variables are categorical, it often happens that particular columns of X_train are actually all zeros. There's no data record, for example, that corresponds to a house in a particular neighborhood. Um, and that will give you a standard deviation of zero. And so if you try to apply the straightforward standardization, uh, then of course, that will fail, because we'll be trying to divide by 0. But that's simple to handle: we just leave that as a column of zeros in the data set. So after we've standardized both the training and the test set using these means and standard deviations, we can then append a constant feature to both of them, and then we'll have X_train and X_test, and X_train will be 1165 by 49 and X_test will be 291 by 49. So now we're going to do ridge regression. So we're going to, uh, use the- remember the RMS log price as our performance metric. And so our loss will be the quadratic loss, we'll be minimizing the empirical mean square error in the log price. And for regularization, we'll use ridge regression, so we will use the quadratic regularizer. So remember how we do regularized empirical risk minimization? We choose a range of lambda values logarithmically spaced. Here we choose them between 10 to the minus 3 and 10 to the 3. And for each one of those lambdas, we solve the regularized empirical risk minimization problem to find the theta. We find the theta that minimizes this quadratic function of the vector theta.
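The standardization scheme just described, training statistics applied to both sets with the zero-standard-deviation caveat, might look like this in outline. This is a Python sketch with made-up data, not the course's Julia code:

```python
from statistics import mean, pstdev

def standardize(train_cols, test_cols):
    """Standardize column-by-column using TRAINING statistics only.
    A column whose training standard deviation is zero (e.g. an unused
    one-hot category) is just mean-centered, to avoid dividing by zero."""
    tr_out, te_out = [], []
    for tr, te in zip(train_cols, test_cols):
        mu, sd = mean(tr), pstdev(tr)
        scale = sd if sd > 0 else 1.0   # the zero-std caveat
        tr_out.append([(x - mu) / scale for x in tr])
        te_out.append([(x - mu) / scale for x in te])
    return tr_out, te_out

tr, te = standardize([[1.0, 3.0], [0.0, 0.0]], [[2.0], [0.0]])
print(tr)   # [[-1.0, 1.0], [0.0, 0.0]]
print(te)   # [[0.0], [0.0]]
```

Note that the test columns are transformed with the training mean and standard deviation, never their own, exactly as the lecture insists.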
Notice that we've used a capital Y here, even though capital Y here is actually just a vector, not a matrix, because m is 1; we've got only a single target variable. Um, and notice also that we're not regularizing the constant term in theta. Once we've got these, uh, theta values, I think we have 50 different lambda values and so we get 50 corresponding theta values. Then, uh, for each one of those theta values, we can compute the training error simply by computing the RMS of X_train times theta minus Y_train. That's just a vector. So, uh, we take, uh, 1 on n times the sum of the squares of that vector and then square root that quantity. And similarly, the test error, X_test theta minus Y_test. And for this dataset, this is what we see. Here on the left, we have, uh, two curves, uh, plotted against lambda. We have the empirical risk for the different thetas. So at any given lambda value, we have a corresponding theta, we have a corresponding test error and a corresponding train error. We can see that at all lambdas the test error does a little bit worse than the training error. And, in fact, regularization appears to offer no benefit here; we have a sufficiently large amount of data compared to the number of features that there's no danger of overfitting. And as a result, the regularization does little for us. Um, the minimum RMS error is about 0.12, which corresponds to about a 13% error in house price. Over here on the right, we have the plot of theta versus lambda as a regularization path. We can see that even with lambda about, uh, 0.1, we're starting to see some shrinkage. The components of theta are getting smaller without any loss in performance. So a reasonable choice of lambda would be something of that order, somewhere between 0.1 and 1. Here we have the test data, and each point on this plot shows two values. The true price Y, which is of course the true log price, and the predicted log price Y-hat.
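The shrinkage effect of the quadratic regularizer can be seen in a stripped-down one-feature version of the problem. Minimizing (1/n) Σ(θx_i − y_i)² + λθ² has the closed form θ = Σx_i y_i / (Σx_i² + nλ). A sketch of that scalar case (not the course's multivariate Julia solver):

```python
def ridge_1d(xs, ys, lam):
    # Closed-form minimizer of (1/n)*sum((theta*x - y)^2) + lam*theta^2.
    n = len(xs)
    return sum(x * y for x, y in zip(xs, ys)) / (sum(x * x for x in xs) + n * lam)

xs, ys = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]   # y = 2x exactly
theta0 = ridge_1d(xs, ys, 0.0)   # 2.0: lambda = 0 recovers the true slope
theta1 = ridge_1d(xs, ys, 1.0)   # smaller: the regularizer shrinks theta
print(theta0, theta1)
```

As lambda grows, theta is pulled toward 0, which is exactly the shrinkage visible in the regularization path plot.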
And ideally we would want to have all points on the diagonal, indicating that we've predicted perfectly. And the points are clustered reasonably well about the diagonal. We're not doing badly. We have some outliers here and here, where we are vastly overestimating the price. Also here. But otherwise we're doing okay. And perhaps one thing one might do after making these kinds of predictions is to go back and look at those data records and figure out why the true price for those is so much less than the predicted price. We can also look at the theta. The entries of theta tell us something about the importance of the different features. And we can see that this one, this one, this one and this one are important features, as is this one, this one and this one. Those are the theta entries that have the largest magnitudes. They correspond to these seven features. Four of those are about the size of the house: area of living space, area of first floor, area of second floor, area of basement. Very reasonable things that would drive the price of a house. Um, one of them is the year in which it was built. So how new is the house? And the other two are the expert's assessments of the condition and quality. And so it's quite reasonable to see these things as the most important determinants of the, uh, price of a house. You can also notice that there are interesting points here. Um, the last 25 features in x are a one-hot encoding of the neighborhood. And if we look, for example, at this point and this point, this one's about 0.02 and this one is about minus 0.02. And that means that, if I move from this neighborhood over here to this neighborhood over here, well, then this component of x switches from a 1 to a 0. And the other component of x switches from a 0 to a 1.
And so the contribution to theta transpose times x from those two components changes from minus 0.02 to plus 0.02, which means that the house price changes by about 4%. So just by moving from one neighborhood to another, we can see that much change in house price. And these last few entries of theta tell us which are the desirable neighborhoods and which are the undesirable neighborhoods. Okay. Let's take a look at the code. Um, first of all, we can just run it and see, uh, if it does what we think it should do. Let's include it; the file is called house.jl. And, uh- The main function here is what we're going to run, and that should do the computation and, uh, produce the plots. Every time you run it, it will produce slightly different plots, because remember the test and train split is chosen randomly. Uh, we can see here, this is just the raw data file. Uh, this is another piece of raw data. Here we can see, I think, these features we can see what we've plotted here. Uh, no. So this code will be available on the website. Uh, there are two files in particular: house.jl, which does all the computation, and then there's another file called houseplots.jl. And houseplots.jl does the plotting. That one requires PyPlot, so you'll have to modify it to use whatever plotting package you are using. Uh, so here we are plotting two things; we're plotting the lot area and the living area, uh, versus price, and so here are our two features. This is, uh, living area, um, and this is lot area. And we can see that there's quite a few, uh, really extremely large, uh, houses, which are already outliers in our data set. Uh, and so if we were to go through and, uh, remove those or adjust for those in some way, we might find ourselves doing slightly better in our fits. One can open the data file in a spreadsheet, and one can see the 1,500 or so different records and the corresponding 80 or so different fields.
[NOISE] Now, if we look at the plots that were generated, this is our regularization path. That's a- now I don't need the plotting file anymore. This is our test and train errors as a function of lambda, and what's back there is our prediction versus true value. Let's take a look at how this works. Uh, so I have two windows on the screen; one is my Julia terminal, and the other is my editor containing the Julia code. This Julia code is about 150 lines long, to do everything that we did today. Um, uh, the first thing it does is it loads the data. We can just take that and paste it into here. Uh, that gives us two things; it gives us D, which is a matrix, which is an array of strings, 1,456 rows by 81 columns. The first field there is an identifier, so there are 80 different data fields. And every entry in this matrix is a string, and that's loaded by a function at the top of this file called load data, which doesn't do anything very interesting, it just calls the CSV library to load it. It does one thing here, which is it removes, uh, what turns out to be a couple of outliers. There were four of them, which have, um, living area greater than 4,000 square feet, and those are really quite extraordinary, uh, houses in this data set. So we'll remove those, and then it just returns for us the data D and the header, which is a list of the field names. Now we can take n as the number of records we're going to have. Now we do two embeddings; one is Y, so embedy. Let's take a look at that. What does that do? This is the function definition here, it's a one-line function definition. Um, uh, some things to notice. Uh, let's take a look at this. Uh, so getdatafield simply pulls out, uh, that column of the data. So there's the prices. Um, let's just call that something u. Oops. There we go. And then stringtonumber- these are all strings, just from the way the CSV format is stored. So we can stringtonumber it.
Let's just call that u1, and now we've got an array of floating-point numbers. Now, one thing to notice about this, this is a feature of Julia that's worth being aware of, is the dot notation. So, uh, stringtonumber will happily take a string and return you back a number. But actually what u is, it's not a string, it's an array of strings or a list of strings, and so we use the dot notation. If we were to call stringtonumber on just u, it would give an error, because you can't call the function that stringtonumber calls, which is the parse function, on an array. But you can, um, do this, and what that does is it causes stringtonumber to be applied to each entry of the array u, and it constructs an array of the results. So if I give it an array of strings, it will happily give me back an array of numbers if I call it with the dot. And so we're just applying stringtonumber to each of the entries of that data record. Um, that's what u_1 is, and then we're applying the log to each of the entries of that to get Y. And so if I call embedy D header, it returns me back a vector which is the log of all the prices. That's what Y is. Now, we can, uh, embedx. This is substantially more complicated because we've got a bunch of different fields. Uh, let's take a look at that. First of all, let's see what it does. That returns for us x, which is our 1,456 by 48 array of the different fields that we have, 20 fields, some of which are encoded as one-hot, and so they correspond to more than one column of this matrix, and all of our data records, all 1,456 different houses. Now the way this works is that, um, uh, in the embedx function here, this is the function definition. Uh, we can see these two convenient functions defined at the top. These are just for convenience. What they do is, they're closures; they store the value of D and header so that whenever I look up a field name inside this function embedx, I don't need to supply D and header.
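For readers more familiar with Python, the dot-broadcast in embedy corresponds to an explicit elementwise map. The price strings below are made-up illustration values, not rows of the actual dataset:

```python
import math

u = ["208500", "181500", "223500"]  # price fields read from the CSV as strings
u1 = [float(s) for s in u]          # elementwise parse, like stringtonumber.(u)
y = [math.log(x) for x in u1]       # elementwise log, like log.(u1) in embedy
print(u1[0])                        # 208500.0
```

Calling `float(u)` on the whole list would fail for the same reason the un-dotted Julia call does: the scalar parser doesn't accept an array.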
And so when I call realf fname, that's just the same as calling stringtonumber dot of getdatafield of D, header, name. So realf here stands for real field, and so we can define these functions in the Julia REPL here on the left. And now if I do realf of YearBuilt, we can see what it's gonna do. It's gonna stringtonumber the data field corresponding to YearBuilt, which is just a list of those numbers. And we do this for each one of these, to get simply a list of the corresponding field values. Now we can also look at, um, uh, some of the more complicated ones. One here that's more complicated is the, uh, kitchen quality field. If we look at the kitchen quality field, that's a Likert scale, the entries in it. Uh, Gd for good, TA for typical, excellent, and, uh, there are others. We can, uh, unique that and see all of the unique entries in it. Good, typical, excellent or fair. The unlikert function, um, what that does is it maps these particular strings to numbers. Here it is. So unlikert sets up a dictionary which maps Ex to 5, good to 4, TA to 3, fair to 2 and poor to 1, and returns the corresponding number. And so if I apply unlikert to that, I get a list of numbers. Notice the dot again, because I'm applying the unlikert function to each entry of the array separately and returning back an array of the results. Uh, there's one more little piece of conversion that goes on, and that's the one-hot conversion. If I look at, say, the field building type, let's look at the smaller of the two. And that again is a list of strings. It's categories, five different possible categories. We can look at what they are by doing unique. There are the five categories. And one-hot here does not work entry-wise; one-hot works on the entire list of 1456 strings. And it simply finds the unique categories. So we have u equal to that.
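The unlikert idea is just a dictionary lookup followed by an elementwise map; a minimal Python sketch of the same thing, with the codes as described in the lecture:

```python
LIKERT = {"Ex": 5, "Gd": 4, "TA": 3, "Fa": 2, "Po": 1}

def unlikert(code):
    # Map a kitchen-quality code to its integer score, 1 (poor) to 5 (excellent).
    return LIKERT[code]

# The list comprehension plays the role of Julia's dotted call unlikert.(u).
scores = [unlikert(c) for c in ["Gd", "TA", "Ex", "Fa"]]
print(scores)   # [4, 3, 5, 2]
```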
Unique of u is gonna be the categories, and then it constructs a matrix where each row is a canonical unit vector, which is five-dimensional, since five is the number of categories. So we can see what it does; if we do one-hot of u, there's our one-hot encoded building types. Here we have hcat, which joins all of these columns together into one large matrix, so that if I call embedx, I get the large matrix of all of our data. Now, trainrows and testrows are just lists of rows which are randomly selected. The split there is 80-20; when we call it, we'll get a new split. Let's see what that looks like. Yeah, so trainrows just says, "Well, you should use these particular rows as your training set," and these particular rows as your test set. Together those are all of the rows, and they're disjoint. Now what we do is we split up the data set. Applysplit does the thing you might expect. Let's look at what applysplit does; applysplit simply splits the data, picks out the corresponding rows for the train and the test set. Let me just run that, and now I've got an Xtrain0 and an Xtest0, my sets of features corresponding to the training and the test sets, and the corresponding Y's. Now the getstatistics function gets the means and the standard deviations of each column very simply. So let's look at what that does. So now how many means have I got? I've got 48 means, because Xtrain0, remember, had 48 columns, and certainly I've got, uh, 48 standard deviations. You can see there's quite a variation, um, in their, uh, means and in their standard deviations, which is why it's important to, uh, standardize. Uh, standardizeplusone is a convenience function that goes through and, uh, does the standardization transformation along each column. It subtracts the mean and divides by the standard deviation, except in the case when the standard deviation is 0, which can happen, in which case we simply subtract the mean.
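The one-hot step, which operates on a whole column at once rather than entry-by-entry, might look like this in Python (Julia's unique keeps first-appearance order; sorting is used here just for determinism):

```python
def one_hot(column):
    # Encode each entry of a categorical column as a canonical unit vector.
    cats = sorted(set(column))                  # the unique categories
    idx = {c: i for i, c in enumerate(cats)}
    return [[1 if idx[v] == j else 0 for j in range(len(cats))]
            for v in column]

rows = one_hot(["1Fam", "TwnhsE", "1Fam"])
print(rows)   # [[1, 0], [0, 1], [1, 0]]
```

With the full five-category building-type column, each row would be a five-dimensional unit vector, as in the lecture.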
And then it does one more thing after doing the standardization, which is it appends a column of 1s. Of course we can't do that before standardizing, because that would just have its mean subtracted off. So if we do that to Xtrain and Xtest, well, that gives us our true Xtrain and Xtest from Xtrain0 and Xtest0. Now we're constructing lambdas; this is our list of lambdas. Note, what we're doing here is dot to the power of, which means, again, it's a broadcasting call of the function, a call to the function power this time, so it applies 10 to the power of each of the elements of the range. So we have the range here, which is between, uh, minus 3 and 3; that has a particular datatype, it's a range. If you want to see it as a list, you can by doing collect, and then you'll see it as a list. If I do that to lambdas, then I'll get my list of lambdas logarithmically spaced between 10 to the minus 3 and 10 to the 3. Then we do the ridge regression. We've seen that function before; this is applying it one by one to each of the lambdas. For each lambda in lambdas, call ridge regression, which returns a Theta, and make a list of the Thetas. That's what this notation means; this is called a list comprehension. If we run it, there we go. We've got a list of Thetas; we can look at the size of the Thetas variable. And it's just a 50-dimensional list. The first entry is a 49-dimensional vector, and the same for the others. Each one of those is the Theta corresponding to a lambda. Now for each one of those, we do another little for loop here, another list comprehension: for each Theta, we compute Xtrain times Theta, which is, remember, the prediction of Y on the training set elements. And we compute the RMS error over all of the training elements, and the same for the test errors. So now we can do one more thing, which is find the minimum of the test errors.
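The logarithmically spaced grid built by broadcasting 10 to the power of a range in Julia can be written in Python as follows (50 points assumed, as mentioned in the lecture):

```python
# Exponents run from -3 to 3 in 50 evenly spaced steps.
lambdas = [10 ** (-3 + 6 * k / 49) for k in range(50)]
print(lambdas[0], lambdas[-1], len(lambdas))   # roughly 0.001 1000.0 50
```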
Here, this tells us that the 16th element of test errors was the smallest, and the corresponding test error is 0.122. We can also see what the corresponding Theta is, by looking at the corresponding entry of Thetas, and we can see what the corresponding lambda is. There's the corresponding lambda. And we can just work out what those are, right there. And then we print our results: the optimal train error, optimal test error, optimal lambda, and optimal Theta. Uh, and then there's the plotting, which will go through and make the plots that you saw; it needs all of these data elements to do that. And this is really how one would do everything that we've seen so far in the class to do with regression. We can create more complicated features for x; we can create, for example, uh, more one-hot features instead of our, uh, simple real-number embedding of the ordinals; we could create product features; and, uh, we could pick out the outliers. Uh, so there are a few things we could do. Now in fact, for this dataset, none of those things seems to make a great deal of difference, um, which is why we haven't done them. Later, we'll see other examples where we can do some of the more fancy things that, uh, involve more fancy embeddings and more fancy regularizations. |
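Picking out the lambda with the smallest test error at the end is just an argmin over the list of errors; schematically in Python, with made-up error values:

```python
test_errors = [0.130, 0.125, 0.122, 0.124, 0.129]   # hypothetical RMS test errors
best = min(range(len(test_errors)), key=lambda i: test_errors[i])
print(best, test_errors[best])   # 2 0.122
```

The corresponding entries of the lambdas and Thetas lists at index `best` are then the reported optimal lambda and Theta.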
MIT_913_The_Human_Brain_Spring_2019 | 6_Introduction_to_the_Human_Brain.txt | [DIGITAL EFFECTS] NANCY KANWISHER: So to remind you, we've been talking last week about doing two things at once-- asking all sorts of questions of what we might want to know about face perception in the brain-- there are some questions. But at the same time, the agenda has been to consider the different methods available in human cognitive neuroscience and what kinds of questions each one can answer. So last week, we talked about a bunch of them, and today, we're going to wrap this up talking about TMS and animal studies. But first, I just want to remind you very briefly-- I won't go through in excruciating detail-- we talked about behavioral methods, which are great for characterizing internal representations. As you saw with face inversion effects and some of the other behavioral data, they have a major disadvantage, which is that with behavior, you're just measuring the output. It's pretty sparse. And from that, you have to infer all the stuff that happened in between the retina-- or whatever your sensory modality is-- and the output. All that internal mental stuff you have to infer just from the output. So it's amazing that that works at all, and you have to be really smart to do it. And lots of people have been doing that for a long time, but it's challenging. So why not look inside? And one of the best ways to do that, of course, is functional MRI. It has the best spatial resolution available for normal subjects. But as you guys all seem to pick up on, its temporal resolution is lousy, and its ability to tell you whether the neural activity you're looking at is causally involved in behavior is like nil. Now, at least one of you-- I only read a few of the assignments-- but at least one of you was confused about which causal role we're talking about. And this is actually really important. So let's take a moment to talk about this.
Causality, the idea of causality-- if x causes y, that means, essentially, y wouldn't have happened without x, or y happens more when x happens than when x doesn't happen. So that's pretty basic. So that means if you want to test the causal role of x on y, you have to mess with x. That's the key challenge. OK, so with that in mind, here's this whole causal chain. A stimulus lands on a retina. A bunch of neural activity happens, and some behavioral output happens. So there's a whole causal chain there. Now, let's consider what kind of causality we're talking about. There's one kind, which is that the stimulus causes neural activity in the brain. That's a kind of causality that we can absolutely test, even if we're measuring that with functional MRI, because we can mess with the stimulus. We can present different stimuli and produce different neural activity, OK? So in that case, we can look at the causal effect of the stimulus on the neural activity. No problem. That's standard. That's what we do, pretty much, in every experiment with ERPs, or functional MRI, or so forth. Is that clear? OK, on the other hand, if we want to know this kind of causality from some neural activity we measure in the brain to either a behavioral response, or a subjective feeling reported by a behavioral response, or something like that, that's the challenging part. That's the kind of causality that we can't infer from ERPs or functional MRI. Everyone got that? Yeah, OK, it's sort of obvious and not obvious. OK, so let's talk a little bit more about that temporal resolution. I know I kept saying the temporal resolution of functional MRI is lousy, but I had run out of time and skipped through the key slide, so let me back up and do that here. This is the BOLD or MRI response as a function of time. This is an idealized version of it, but it looks kind of like that. And sorry these things are tiny here, but these are seconds-- 5 seconds, 10 seconds.
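The sluggish BOLD curve being described is often modeled as a difference of two gamma functions. Here is a rough sketch of that standard textbook form-- the parameters are common defaults, not values taken from the lecture's slide-- just to show that the modeled response peaks around five seconds even though the driving neural event is essentially instantaneous:

```python
import numpy as np
from math import gamma as gamma_fn

def gamma_pdf(t, k, theta):
    # Gamma density, used as a building block for the hemodynamic response.
    out = np.zeros_like(t)
    pos = t > 0
    out[pos] = t[pos] ** (k - 1) * np.exp(-t[pos] / theta) / (gamma_fn(k) * theta ** k)
    return out

def hrf(t):
    # Double-gamma hemodynamic response: a main lobe peaking around 5 s,
    # minus a small, later undershoot.
    return gamma_pdf(t, 6, 1.0) - 0.35 * gamma_pdf(t, 16, 1.0)

t = np.arange(0.0, 25.0, 0.1)   # seconds after a brief (~0.1 s) stimulus
h = hrf(t)
peak_time = t[np.argmax(h)]
print(peak_time)  # the peak lags the neural event by roughly 5 seconds
```

Note how little response there is in the first second: that is exactly why fMRI cannot distinguish events on a sub-second time scale.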
So let me show you what that means. If you're recording neural activity, back here, in the first stage of visual processing in the cortex coming up from the eyes-- which is where? AUDIENCE: The occipital lobe. NANCY KANWISHER: The occipital lobe, yes. What area? AUDIENCE: Primary visual cortex. NANCY KANWISHER: Primary visual cortex, exactly. OK, so suppose that we stuck an electrode in my primary visual cortex, and we flashed up a very brief visual display. OK, here's a visual stimulus, on for maybe a tenth of a second-- bright, flashing thing V1 loves-- V1, also primary visual cortex, right? OK, the neural activity would happen in less than 1/10 of a second-- super fast. We know that from work in animals and even some work in humans, OK? So super fast after the stimulus. It just goes straight up from the retina, to the LGN, to V1-- boom, there it is. So all the neural activity happens right there, and it ends right there. But the MRI response is five, six seconds later-- this big, sloppy, slow thing as the blood sloshes into V1 many seconds after the relevant neural activity. So what's relevant here is not just that it's delayed, but that it's big and sloppy. And so both of those things are the reasons why functional MRI responses aren't good for distinguishing what happens on a fine temporal scale of, say, events less than a second. All right, OK, in contrast, as I mentioned, when you glue electrodes on the scalp, or stick these fancy magnetic sensors in the big hairdryer device around your head, there you get beautiful temporal resolution, but it's like the Heisenberg principle of cognitive neuroscience. You want time, you don't get space. OK, and similarly, here, we can measure the causal effect of the stimulus on those scalp neural responses, but not the causal role of those neural responses on behavior. Everyone clear with this? OK.
OK, then I talked about these rare cases where we can record directly from the surface of the human brain with electrical activity, where we now get both space and time at the same time. And the key disadvantage there, of course, is that it's extremely invasive. You have to take a big piece of skull off to get in there. And, of course, that would only happen in the case of people who are already in pretty serious medical circumstances. OK, so now, when we have this incredible opportunity to record this amazing data from the center of the brain, does that enable us to make this kind of causal inference from neural activity to behavior? Yes? What do you think, yes? No? Isabelle? Is that Isabelle? Yes. Why are you shaking your head? AUDIENCE: Because it just tells us which neurons are responsible for [INAUDIBLE].. NANCY KANWISHER: That's right. It's cooler, it's fancier, it's more impressive than functional MRI or ERPs, but it's still the same deal. We're just recording responses, OK? So we can do this causality, from the stimulus to those neural responses, but it doesn't tell us which of those responses are related to behavior yet. I showed you other methods that do, but this one alone doesn't. Everybody got that? All right, so then I talked about studying patients with focal brain damage. And here, you really can make a strong causal link between a bit of brain and a behavioral ability. You lose that bit of brain, you can no longer do that task. That's a really direct kind of causal role. I talked about double dissociations. I gave it short shrift, but it's actually really important. You should know it. A double dissociation is when you have one patient who can do A but not B-- say, recognize objects but not faces-- and another patient who can do B but not A-- say, recognize faces but not objects.
And when you have in the literature two cases like that, now you're in a really strong position to infer that there's something fundamentally different about face recognition and object recognition in the brain. OK, so that's really important-- the senses in which a double dissociation is more inferentially powerful than a single dissociation. OK, "more important" means I'm sure to test you on it. No, it's also important, whether I was going to test you on it or not. [LAUGHS] OK, and so, of course, in focal brain damage, we can absolutely infer causal role from a bit of brain to a behavioral ability. Lose that bit of brain, lose the ability, yeah? OK. And the case that I showed you with that amazing movie of the guy getting stimulated in his fusiform face area and seeing percepts of faces on top of whatever he looked at, that's a quintessential beautiful example of the causal role of neural activity there. We're basically directly manipulating neural activity. We're injecting neural activity there electrically and looking at the behavioral and cognitive result that occurs-- the guy sees a hallucinatory face. OK, now, that is amazing data, but as I mentioned, they're very rare. We have no control over it. When we get those data, we celebrate and are all excited, but mostly, we don't get those data. Plus, those people have serious problems with their brains. That's why their brains are being opened up. So is there any way to test a causal role of a particular part of the brain in a normal subject who doesn't have their skull open for neurosurgery and who has not had brain damage? Well, there's one way, and that's called transcranial magnetic stimulation, OK? So in transcranial magnetic stimulation, you take a coil of wire about yea big. That's a tight-wrapped coil of wire embedded in plastic, connected to a ginormous capacitor, and you hold it next to your head. Of course, that's what you would do if you were a neuroscientist.
And you discharge and make an enormous current through that coil that's very, very strong and very brief. The whole thing lasts less than one millisecond. And you guys know from 8.02, another case of the right-hand rule coming to our service. You have a hell of a current going in a coil. What's going to happen in brain tissue underneath? AUDIENCE: Increase the magnetic [INAUDIBLE].. AUDIENCE: The electric field will [INAUDIBLE].. NANCY KANWISHER: Yeah, exactly. And so you'll get electric fields perpendicular to the coil sticking right into the brain like that. And what do you think happens if you stick a big, huge transient electric field-- boom!-- into your head like that. AUDIENCE: Isn't that a magnetic field [INAUDIBLE]?? NANCY KANWISHER: Yeah, you're right. Right-hand rule is magnetic field. I was thinking I was misremembering, right? Electric current makes magnetic field, right? It was a long time ago I took 8.02. I did-- just a long time ago. Anyway, for current purposes, doesn't matter. Either would do it. Actually, there's a variant of this where it's an electric field, but it's debated how well that works. OK, anyway, what happens is you affect neural activity in tissue right underneath the skull, right? OK, so if you want to see a picture, a video of that happening, there's a video of me getting zapped with TMS on my website. You can check it out. It's kind of ludicrous. Yes, question? AUDIENCE: What's the spatial resolution? NANCY KANWISHER: Oh, we're getting there. We're getting there. OK, here's an early version of this. To generate these very strong and brief magnetic fields, they had these stacks of coils like this, and they rotated them around. It's a little crazy. Here's a more recent version. It looks like a big torture device, but it's actually no big deal. The guy's just holding his head on a chin rest to hold his head still, and there's a person holding the coil next to his head like that. 
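The physics being recalled in this exchange is standard electromagnetism, sketched below as background (it is not spelled out on the lecture slides): the coil current produces a magnetic field, and it is the rapid change of that field that induces an electric field in the tissue.

```latex
% Coil current I(t) produces a magnetic field (Biot--Savart):
\mathbf{B} \propto I(t)
% The rapidly changing field induces an electric field in tissue (Faraday's law):
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}
% So the induced E scales with dI/dt: discharging a large capacitor through
% the coil in under a millisecond gives a brief, strong E field that
% perturbs neurons in the cortex just under the coil.
```

This is why both answers in the exchange are partly right: the current makes a magnetic field, and the change in that field is what puts an electric field into the brain.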
And so that enables us to briefly and somewhat selectively disrupt a little patch of cortex there by sticking in this big random field. Now, spatial resolution is not amazing-- maybe 1, 2 centimeters, something like that, OK? It's better than you might guess for such an incredibly crude device-- like something people would have done hundreds of years ago, and yet we still do it today. You can also use a lovely method where you scan the subject with functional MRI first, find a particular functional region that you're interested in in that person's brain-- remember, these things can vary in their exact location across subjects-- and then find a way to register externally on the scalp, what is the closest spot to that region you found in their brain previously with functional MRI? And stick the coil right there, and exactly titrate its location with reference to that brain image. So that makes this whole enterprise more worthwhile. So what can TMS tell us about face perception? Well, here's the problem. Here's my fusiform face area-- that guy right there. It's a few centimeters in from the scalp-- from the skull. So that's a drag. Unless we opened up my head, we can't reach it there with the TMS coil. Believe me, the first time I had a chance to use a TMS coil, the very first thing I did was stick the coil there, crank it to the max, and try to see what would happen. Not a damn thing happened. It was very disappointing. I knew lots of friends who tried the same thing. It was the most obvious thing. It just doesn't work; it's too medial. Yeah? Well, there was a question over here a moment ago? AUDIENCE: If you use TMS near someone's brainstem-- NANCY KANWISHER: Yeah, that wouldn't be so smart. Luckily, the brainstem is kind of deep in there. So if you were really stupid and stuck it down, I don't know, way in here, you might be able to cause trouble. But mostly, people don't stick it back there.
And actually, the subjects won't let you anyway, because there are a lot of neck muscles, and it really hurts when you do TMS over muscles. And so if anybody had such a stupid idea as to try to zap the brainstem, the subject would probably object immediately before they got very far with it, because it would hurt. [LAUGHS] And you guys are all probably wondering, how safe is this? It's not totally clear. There have been lots of studies in animals-- [LAUGHTER] --where they zap a rabbit 100,000 times or something like that and say, well, rabbit seems fine, hops around. And that's the best they can do in animal studies. When I first used TMS around 20 years ago, I read a few basic safety studies, and I thought, god, I don't know. But I also realized that if you look at the papers, the initials of the subjects were the same as the authors. So I called them up, and I said, hey, tell me honestly, did you guys ever notice any ill effects from getting zapped? And the guy I talked to said, yeah, I've been zapped about 10,000 times, and I never noticed anything except for one thing. After a whole hour of getting zapped, it gave me a hell of a craving for ice cream. So I decided, OK, I can live with that. We got it through the human subjects committee, and we do-- not a lot, but some TMS in my lab. And I've probably now been zapped at least as many times as that guy, and I guess you guys can judge for yourself. So you don't have the before condition, so it's a little hard. Anyway, as far as anybody can tell, it's perfectly safe. Yes? AUDIENCE: So there are some contraindications if you are prone to seizures or if you're on certain medications. NANCY KANWISHER: Yes, yes, yes. AUDIENCE: So if you ever sign up for a TMS study, read the fine print. NANCY KANWISHER: Good point, yep. OK, so back to this. It would be lovely to zap that guy, but it's too hard to reach. OK, so then, this guy David Pitcher came along, and he had a very good idea.
And my 1970s synopsis of his idea-- paraphrasing Stills-- is if you can't zap the region you love, love the region you can. And so Pitcher said, hey, what about that other guy there? We haven't talked a lot about it. It's sometimes called the occipital face area. I think of it as a kind of crappy version of the FFA. It's kind of face-selective. It's not as face-selective. It's more variable, so it's not as fun to study, but it's there in most people. I have a damn fine one, I have to say-- many people do-- and it is right out there next to the scalp, just asking for it. Right. OK, so here's what David Pitcher did. He gave subjects a-- you need a behavioral task, right? Because in this case, we're testing the causal role of a bit of brain on behavior. So we're going to measure behavior. And so what is our task? OK, so here's his task. Sorry, it's a little tiny, but this is one trial. Time is going this way. You present a face. There's a brief interval. You present another face. And the task is just, are those two faces same or different? It's your basic face perception task. But then, what you can do is you can zap the occipital face area at different times-- during presentation of that second face, and you can do it at different time intervals. Remember, its effect is very brief. The actual magnetic change is less than a millisecond. OK, so here's what David Pitcher found in that study. This is accuracy at the same-different matching task when you stimulate the right occipital face area versus vertex-- that means you stick the coil up here, which is pretty far away from face regions. It's a control condition-- not a perfect one, but better than nothing. By the way, TMS usually doesn't hurt unless you stick it over muscles. You stick it over the frontal lobes, and-- I don't know. Every time I try to disrupt my language abilities, it hurts too much, because there are muscles up there.
But most places, like the top of the head, there aren't muscles, and it doesn't really hurt. But it still makes a loud cracking noise, and it's kind of like somebody went-- [TAPS SKULL] --like that. So you might imagine you need a control condition, right? Because the control also has a TMS pulse, right? If you bang somebody on the head when they're trying to do a task, you probably disrupt their performance. So you need to bang them somewhere else to see if it's specific to that location. OK, so here's a little effect on the accuracy. It's not a huge effect size. So here, it's going from 85% correct to 78% correct when you zap occipital face area compared to vertex. Everybody gets what's going on here? So that's good. That tells us something. Zapping here messes up face perception more than zapping here, OK? OK, so that tells us something about causal role, but what else would you want to know? That's a beginning, but having just learned what I told you about TMS, what else could you do that would tell you more? Yeah. AUDIENCE: You see the face [INAUDIBLE].. NANCY KANWISHER: Ah, well, that's a good question. It wasn't what I was fishing for, but it's a very good point. So this shows disruption, but I showed you with that video before that if you electrically stimulate the FFA, you see a face. Well, unfortunately, nobody has reported that when you zap a face area, you see a percept of a face. Boy, that would be fun if true, but it doesn't work. And there's much debate about why. It probably has to do with the fact that your ability to target just that region is less good than it is with direct stimulation. There are many reports and many published studies where if you zap V1, you see a flash of light, OK? I don't see the damn flash of light. I've tried, and tried, and tried, and people in my lab who I trust promised me they actually see it. It isn't just BS. But I don't know; I don't see it.
Anyway, so probably, the question of when you get disruption and when you get a positive percept is a very interesting, complicated one. I think it will ultimately have to do with how those patches of neurons not only respond to faces, or light, or whatever, but how they code for that information, such that when you put a big artifactual, non-biological signal in there, will it have any meaning that the subject can interpret? I don't know if that's helpful. I think nobody really understands that, when you get a positive percept. But I hope you can at least understand that at least if you mess with it and muck it up, you can disrupt. That logic is clear. When you will be able to actually stick in a signal and get a positive, coherent percept is a more subtle thing, OK? OK, what else would you want to know? Yes. AUDIENCE: Whether it messes up object perception, or not? NANCY KANWISHER: Absolutely, absolutely. All we're showing here is it's messing up face perception. Maybe the guy can't see here. Maybe he's just globally blind. Maybe he'd have the same problem with object perception, absolutely. The assigned reading for Wednesday shows exactly that experiment, OK? What else would you want to know? Remember, a TMS pulse lasts less than a millisecond. That enables us to ask a whole interesting kind of question. What else could we find out? Yeah. AUDIENCE: Oh, just [INAUDIBLE]. NANCY KANWISHER: Yeah, yeah. I'm sorry. It's probably-- I don't mean to be insulting your intelligence. You're probably sitting there saying, this is too obvious. That's what I'm talking about. You can zap at different times and ask, when is the information going through there? When is that region playing a causal role in behavior? And here are some very beautiful data that David got. And there's basically no effect at any point other than that interval between 60 and 100 milliseconds, OK? So that's cool. Tells you that's when that region is likely engaged in processing. Make sense?
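The chronometric logic here-- zap at different onsets, and find where stimulating the occipital face area hurts accuracy relative to the vertex control-- can be sketched numerically. The numbers below are invented to resemble the qualitative pattern described (disruption only in the 60-100 ms window); they are not Pitcher's actual data:

```python
import numpy as np

# Hypothetical accuracies (proportion correct) at each TMS onset window.
onsets_ms = np.array([20, 60, 100, 140, 180])          # pulse time after face onset
acc_ofa = np.array([0.85, 0.78, 0.79, 0.84, 0.85])     # zap occipital face area
acc_vertex = np.array([0.85, 0.85, 0.84, 0.85, 0.86])  # zap vertex (control site)

# The chronometric question: at which onsets does OFA stimulation hurt
# performance relative to the control site?
disruption = acc_vertex - acc_ofa
critical = onsets_ms[disruption > 0.03]   # arbitrary threshold, for the sketch
print(critical)
```

With these made-up numbers, only the 60 and 100 ms onsets exceed the threshold, which is the sense in which a sub-millisecond pulse can localize *when* a region contributes to behavior.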
OK, and is it Shardul? Yes, already made the point. I was going to ask you guys, does this tell us this region is specifically involved in face perception? Absolutely not. We'd have to test other things. It could affect every visual percept. OK, so you can read more about that. All right, just to collect all the advantages: it gives you strong causal evidence that a particular part of the brain is involved in perception or behavior. It has good temporal information, unlike studying patients with focal brain damage. And it is the only disruption method that can be used in normal humans, OK? So that's why, even though it's so crude and rudimentary, we use it, because it's the only thing that fills that niche. A couple other unimportant things. Spatial resolution isn't as good as we'd like, but it's surprising how much you can learn nonetheless. And it doesn't reach very far below the scalp, although Ed Boyden-- the amazing Ed Boyden-- is working on a crazy new version of it that might. OK, so where has all this menagerie of methods gotten us? I won't go through all this in detail. We listed all these questions before. I gave you some of the answers from previous methods. The ones we've just talked about show, for example, that the fusiform face area-- or the occipital face area, in the case of TMS-- is causally involved in face perception, apparently not in object perception, pending the paper you're going to read. And so that's important, because it says when we try to come up with theories of how face recognition works, we might think about having a different theory for face recognition from our theory of object recognition. OK, so this is all magnificent and wonderful, but I finessed this list of questions so that the methods would be able to address them-- at least a little bit-- and I sneakily left off a whole suite of other questions that are extremely important-- arguably more important-- that those methods don't address, OK?
So we want to know not just that a region responds to faces. We want to know exactly what is represented in that region or other regions that respond to other things. We want to know, what is the neural code for faces? We want to know, what are the actual computations that go on in a given region, how do they unfold over time, and how do those computations produce the representations and behavioral abilities that we measure? We want to know, what are the actual anatomical connections? I showed you that little occipital face area right nearby but discontiguous from the fusiform face area. I've wanted to know for 20 years whether those damn things are connected anatomically. Shockingly, we still don't know that. We want to know what is the causal role of each region in perception. And I showed you a few ways that we get little bits of data-- kind of, sort of-- but there's a lot of cases where we don't. And we want to know, how does all this stuff get wired up over development, right? What is the role of experience? Do you need to see faces to wire up the face region, or is it there at birth before you ever see a face? The sad truth is, for the most part, we don't have good methods to answer these questions in humans. So that's just a big bummer, but it's true. Most of these questions can only be answered by research in animals, or can be best answered by research in animals. So I'm going to take a moment to talk about ethical issues in animal research, just to note that I think there is an issue. And I'll say that it's not unreasonable if you have qualms. I noticed in an earlier lecture, I started talking about recording from animal brains, and I didn't have time at that moment to mark this, but I do think it's important. If it makes you uneasy, that's totally legitimate. You should think about that, and respect that, and think hard about what you make of that. Unambiguously, causing animals pointless suffering is just completely unacceptable, OK?
So I think we can all agree on that. And I think there's a very difficult trade-off between avoiding suffering in animals and research that has saved countless lives. So people can legitimately come down on different sides of this, but many lives have been saved-- including mine-- based on animal research that enabled treatments that were life-saving. And a few things to think about to help you inform how you handle that trade-off. First of all, know that animal research in the United States is very heavily regulated, OK? So animals receive excellent vet care-- shockingly, better than probably lots of citizens of this country. That's another topic. Also, there's a very major emphasis on avoiding pain. So I think it's probably generally true that it's infrequent that lab animals suffer a lot of pain. Researchers and vets are very careful to avoid that. So the bigger issue is not so much are the animals physically suffering from pain, per se, but what kind of life is it to live in a lab and be a lab animal? And I think that's a legitimate question. For monkeys, at least-- and maybe it's my speciesist bias, being more sympathetic to similar species; I don't know if that's legitimate, but it's natural-- there are increasing efforts to improve the quality of life for monkeys in labs. Many monkeys are now housed in social groups where they can hang out with their families, and that certainly improves their quality of life. Many monkeys basically play video games all day. In DiCarlo Lab, they're studying visual perception, and what do they do? They get the monkeys in there basically doing visual tasks in exchange for juice rewards. Not all that different from what, probably, lots of you guys do. Now, maybe they'd be happier in nature. Probably, much of the time, they'd be happier in nature. But I think that's complicated, too. Nature can be pretty nasty.
So it's not totally obvious that quality of life in a random lab is worse than quality of life in nature. The third point, I'd say, is that the benefits of research are forever. You discover something major about how brains work, that's forever, right? So you've got to amortize whatever cost of animal suffering there is against the forever-ness of that insight. And so in my view-- not that you need to agree-- but in my view, animal research is vastly more justifiable than things like eating meat or buying leather, which is just transient entertainment or convenience, right? So anyway, you guys, I encourage you all to think hard about this and to come to different conclusions. I just wanted to note that these are issues that are worth thinking about. That said, the methods in animal research are breathtaking, and they get more and more breathtaking every day. In this building, people are constantly inventing astonishing new ways to answer all kinds of questions. And I wanted to give you just a gist of some of the kind of stuff that you can do to answer that list of questions that I said we can't really answer in humans. So just very briefly-- this used to be a whole lecture, but I've decided to cut it to one slide-- very briefly, about 10-plus years ago, these two amazing people, Doris Tsao and Winrich Freiwald-- who, mark my words, will get a Nobel Prize someday, or at least they should, and they might-- they popped a monkey in the scanner and did the very same experiment that we do on humans, OK? So here's a monkey brain. Again, the cortex is unfolded so you can see the whole surface. The dark bits are the bits that used to be inside a fold. The little yellow patches are the patches that respond more to faces than objects-- just analogous to the FFA in humans, but there are six little patches in monkeys. OK, so that's so far, that's like, OK, fine, monkeys have them, too. That's cool. 
But the thing is, because that's a monkey, you can then stick electrodes straight into that region right there, and you can record from hundreds of neurons in that region. And you can record the response of each of those hundreds of neurons to hundreds or thousands of stimuli. You can characterize the neural code for faces in monkeys in a way that you just can't for humans. In fact, Doris now published a paper last year called "The Neural Code for Faces," based on a decade of this research. It's quite breathtaking. OK, second, you can watch those representations change, those neural population codes change over time. You can see, at one time point, what the code seems to be saying here, and then here, and then here, and you can watch that-- those codes-- change over time in each of those regions. And you can see different representations in each of those regions. It's quite breathtaking. You can answer this question of, what are the anatomical connections between these regions, with a whole bunch of different methods that I won't go through here. But you can actually answer what's connected to what. And what these guys have found is that all of those yellow face patches are connected to each other by long-range connections that go through the white matter underneath the gray matter. Those regions are not connected at all to the intervening other patches of cortex. So that set of six little regions is like a computational unit with different hubs that talk to each other. And you can see all that in monkeys in a way that we still don't know in humans. You can electrically stimulate, or disrupt with other methods, any one of those patches one at a time. You can disrupt them for 50 milliseconds here, 200 milliseconds there, whatever you like. And you can study this whole system over development. How does it change from shortly after birth to monkey adolescence? And you can control experience during development. 
You can raise monkeys without ever letting them see faces, and ask whether seeing a face is necessary for the development of that region. We'll talk about that in a few lectures. My point is just that with animal research you can answer vastly richer, more sophisticated questions than you could ever answer in humans, and that's just life. Yes, what's your name? AUDIENCE: I'm Esther. NANCY KANWISHER: Esther, hi. AUDIENCE: So in these experiments, they showed them monkey faces, right? Not humans? NANCY KANWISHER: Done all different ways. Remember, monkeys see other monkeys, but they see a lot of humans, too. And monkey face patches respond pretty similarly to human faces and monkey faces. Human face patches respond pretty similarly to human faces and monkey faces, too-- even if you don't work in a monkey lab. OK, so just to say that there are loads of other methods, and we'll get to these later in the course. OK, so that snake assignment-- I hope that seemed-- I thought you guys, for the most part, did very well and did exactly the kind of things that we had in mind. And I just want to go through a few bits of terminology, because I realized, for some of you who messed up the wording, I hadn't really fully explained what the different words mean. OK, so first of all, there are these incredibly boring words, independent variables and dependent variables. And frankly, I didn't know which was which until I started teaching this stuff a few years ago. But the concept is really important. An independent variable, that's a factor that you, the experimentalist, manipulate and change, so that you can then measure what effect it has on a brain or behavior. The effect you measure is the dependent variable. The independent variable is called the independent variable because you, the experimentalist, get to mess with it, get to manipulate it, OK? The dependent one, you're measuring its dependence on the independent one.
So just basically, in the experiment, you muck with something in the world, and you measure the consequences. The thing you muck with is the independent variable. The muckee, the thing you measure the effect on, is the dependent variable. Make sense? OK. All right, so for example, the BOLD response-- that's a dependent variable in pretty much all the experiments we'll talk about here. All right, the hypothesis-- most of you got that. The hypothesis is the statement about the world that you're trying to figure out if it's true in your experiment, OK? A prediction-- most of you got this, but let me just say, a prediction is supposed to be extremely precise. It's the exact statement of what you will see when you measure your dependent variable if the hypothesis is true. What is the crucial thing you have to look for in the data you measure that tells you if the hypothesis is true or not? The prediction is what you will find if the hypothesis is true, OK? Confound-- we haven't talked about this yet. A confound is a difference between your conditions that you're manipulating other than the one you intend to manipulate. And hence, confounds give you alternative accounts. Case in point, we compare the response in the brain when people look at faces versus when they look at a bunch of random objects. The fact that the faces have more curvy surfaces, or are animate, or are more interesting-- those are all confounds with respect to the hypothesis that that region is responding specifically to faces. Everybody got that? OK, it's very common, among undergraduates, to use confound to mean anything bad about an experiment. That's not right. A confound is a very particular thing. It's another dimension that co-varies with the thing that you care about. It's like a nuisance variable that's correlated with the thing you're manipulating, and hence making it hard to draw a clean inference from your data. All right, a contrast.
We talked about activations in the brain, like those little yellow patches I showed in monkey brains a moment ago. That shows you the bits that responded more in functional MRI when that monkey was looking at faces than objects. The contrast is faces versus objects, right? It's looking for a higher response in one condition than another. Make sense? OK, these should all be fairly clear. I just know that not everybody got this. OK, now, the point of a contrast is to isolate a mental process, OK? So let's talk about that for a second. So how do we decide what contrasts to use? OK, well, first thing you have to do is get clear about your hypothesis. State it explicitly. Most of you guys did that really well. Often, your hypothesis-- with functional MRI, at least-- will concern a particular mental process that you're studying-- like face recognition. Now, remember, importantly-- I said this briefly way back-- functional MRI can only tell you about differences between two conditions. The absolute number, you're going to measure the MR signal intensity in one condition-- say, when people are looking at faces-- and it's going to be something like 726. And it's totally meaningless. That's just how strong the MRI signal is from that point. It doesn't mean a damn thing on its own. But then, if we also measure, in that same part of the brain, the MR signal intensity when the subject is looking at objects, and it's 720, then now we're in business, OK? All right, so everything is a difference. So that means that in any imaging experiment, you'll need to compare two or more conditions. One condition will never get you anything. And if you want to isolate a particular mental process, you need to turn that mental process on or off, or you need to vary how strongly it's turned on. So this is all in the service of, how are we going to decide what contrast to use? That's our goal, is to turn on or off one little thing. OK, and here's the problem. 
If I told you, OK, look at my face, and don't process low-level visual information, and don't think about what I'm saying. Just see my face. It's like, what? You can't do that, right? There's a whole processing chain. You can't just do one little mental process at a time. And so that means we can't just have a task where you do only mental process x, and a task where you don't do mental process x, if you're not doing other stuff. So what that means is we need to choose two tasks, each of which has lots of mental processes, but that differ in only one. And then, we can compare those two. So this is called subtraction logic, and it comes from work over 100 years ago in cognitive psychology and people who were just measuring behavior. This dude, Donders, he was a Dutch physiologist, and he invented the subtraction method to measure reaction times in humans, way back. And so with functional MRI, we're doing the same thing. So we're going to come up with two different tasks which involve the whole suite, from input, to mental processing, to output. And yet, we're going to try to make them differ in just one particular mental process. Everybody with the program here? OK, all right, so what you aspire toward in the contrasts that you choose is something called a minimal pair, right? So the idea is we're going to have these two tasks that are identical in every respect, except for that one thing we care about, OK? So here's a task, and here's a task. This one involves snake perception, and this one is identical to this one, except for snake perception. That's what we want. OK, and if you get those two things, that's called a minimal pair. And this is the single most important thing in experimental design. All the other stuff-- like how you arrange your stimuli over time and all that kind of stuff-- OK, it matters a little bit, but this is the crux of the matter. What are those conditions, and are they the right kind of minimal pair?
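To make the subtraction logic concrete, here is a toy numerical sketch. The signal values are hypothetical, echoing the 726-versus-720 example from earlier; this is not code from the course, just an illustration that the contrast is simply a difference between condition means.

```python
import numpy as np

# Hypothetical MR signal intensities (arbitrary units) from one voxel across
# repeated measurements in two conditions. The absolute numbers mean nothing
# on their own; only the difference between conditions is interpretable.
faces   = np.array([726.0, 728.0, 725.0, 727.0])
objects = np.array([720.0, 721.0, 719.0, 720.0])

# Subtraction logic: the contrast is the difference of condition means.
contrast = faces.mean() - objects.mean()
print(contrast)  # a positive value means this voxel responds more to faces
```

In a real analysis you would also ask whether that difference is statistically reliable across measurements and subjects, but the core logic is just this subtraction.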
And you guys got the gist, but I felt like most of you didn't really engage. OK, what exactly were those non-snake conditions? So that's really the crux of the matter. So the most common problem with imaging experiments is not that the scanner wasn't as fancy as it could have been, or they didn't use the latest cutting-edge analysis method. The most common problem is that people's contrasts-- their conditions-- were not designed beautifully enough to isolate a single mental process, OK? That is that the conditions were not minimal pairs. Any other difference between the two conditions other than the one you intend is a confound. All right, so let's engage on this. Now, if we ran a whole experiment only on male subjects, is that a confound? No. Why not, Isabelle? AUDIENCE: Because it's not a difference between the two experimental conditions. NANCY KANWISHER: Yeah, it's just a bad design feature, or something that limits your ability to draw inferences. Again, a sub-optimal design is not the same as a confound. A confound is this very particular thing. OK, if all the snake pictures have grassy backgrounds, and all the non-snake conditions do not, is that a confound? Yeah, exactly a confound, right. OK, so I just said all this, so I'll stop boring you. OK, and the reason that the grassy background thing is a confound is it gives you an alternative account of that contrast. Maybe it's grassiness, not snake-ness, that's the key difference. You don't know. OK, all right. OK, so all of that said, minimal pairs are like a platonic ideal of experimental design. What you aspire toward, but you can never really do it. If the two conditions were identical except for this one little thing, they'd be identical. You can never totally pull it off, but you can track the little ways in which you fail, and you can test them one at a time in later experiments, OK? All right, good. All right, so here's what we're going to do.
We're going to break into groups, and you guys are going to think how to take the kind of designs that you already put together and turn them into actual experiments-- which is going to require deciding on a whole bunch of other things, and then we're going to discuss the things you come up with. OK, what are the exact conditions you'll run in your experiment? So we could spend a whole class talking about this. So I'd love to hear your best-ofs, but I don't want to engage on that for a whole class. A lot of the keys, some of you guys had very clever non-snake conditions to test to get close to minimal pairs. I want to hear about those. But then, beyond that, here's something that probably none of you mentioned. It's understandable; I don't think I said much about it. What are subjects doing in the scanner? Are they just lying there, and the stimuli are just flashing up, and they're going dumdy-dumdy-dum? Are they doing something with the stimuli? Go think about what you would want to have happen, OK? So what is the task? Third, some of you mentioned baseline conditions but didn't really say what they are. What would a baseline condition be? And do you want them, or is it a waste of scan time? Think about that. OK, next, suppose you get to scan 10 subjects for one hour each. Now, think about how that design is actually going to go. Are you going to assign different conditions to different subjects-- so these five people will see all the snake images, and these five people will see all the non-snake images? Or, are you going to have snakes and non-snakes within each subject? Next, it's nice to not make the subject do their task non-stop for an hour. We usually give subjects breaks. So we break an experiment into pieces of 3 to 10 minutes-- or whatever I wrote, yeah. And so those are called runs. So think about how you want to allocate those conditions to runs. And how many runs will you include? And then, think about what's going to happen within each run. 
So if you're going to have multiple conditions within a run, are you going to stick all of the snake conditions in the first half and all the non-snake conditions in the second half? If not, why not? And if there are multiple conditions within a run, yeah, are you going to clump them all together or interleave them randomly, and what are the trade-offs there? And, what is the order of conditions within a run? And we won't get to number 10 for the moment. OK, so we're going to break you guys into four groups, and you're going to talk amongst yourselves and try to come up with your best answers to these in five, 10, minutes, something like that. And then, we're going to pull your thoughts on this when we get back, OK? OK, so part of my agenda in doing this is just to break up the monotony of me going blah, blah, blah, because experimental design is like, it's important, but it's not the most riveting thing. The other thing is, experimental design is basically just organized common sense. And so most of this stuff, you guys just answered all these questions just by thinking about them. You need to know a few things about the methods, but really, in experimental design, the biggest, the best guideline, the best way to think about design is think about, OK you're the subject. You're lying in the scanner. You're doing that. Does that work? Are you actually going to be doing what you're supposed to be doing? Are you going to be selectively turning on and off this one little mental process you care about, or are you doing a million other things, like falling asleep, and getting bored, and all of that, and predicting what's going to happen next, and all that kind of stuff? OK, all right, so let's just take a few examples. What were some good kinds of control conditions-- that is, non-snake stimuli that are good to compare to snakes that maybe aren't perfect minimal pairs, but that get partway there? I saw a few, just in the few papers that I looked at. 
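As one illustration of the ordering questions above, here is a toy sketch of interleaving conditions within runs rather than clumping them into halves. This is not the course's actual code; the condition names, counts, and run structure are all assumptions for illustration.

```python
import random

# Toy design: each run gets an equal number of trials per condition,
# shuffled so the conditions are interleaved within the run instead of
# being clumped into the first and second halves.
conditions = ["snake", "non-snake"]
trials_per_condition_per_run = 6
n_runs = 4

random.seed(0)  # fixed seed just to make the illustration reproducible
runs = []
for _ in range(n_runs):
    trials = conditions * trials_per_condition_per_run  # 12 trials per run
    random.shuffle(trials)                              # interleave randomly
    runs.append(trials)

# Every condition appears equally often in every run, so condition is not
# confounded with run or with time-in-session.
for run in runs:
    assert run.count("snake") == run.count("non-snake") == 6
print(runs[0])
```

The design choice this sketch encodes is the one discussed above: putting both conditions in every run, in a mixed order, so that scanner drift, fatigue, and other slow changes over the session affect both conditions equally.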
Yeah, I've got a-- I'm sorry, I've asked your name like six times. But I'm going to-- on my trusty sheet, tell me again how you say it? AUDIENCE: Achay. NANCY KANWISHER: Achay. OK. AUDIENCE: So for ours, we compared snakes to worms. NANCY KANWISHER: To worms, yeah. AUDIENCE: Because they have really similar shapes. NANCY KANWISHER: Awesome, and they're both animate. That's great, love it. What else? Who else had a good control condition? Or who had an interesting control condition? Yes, sorry, your name is-- AUDIENCE: Lauren. NANCY KANWISHER: Yes, OK. AUDIENCE: Yep, our group had pretty much the same baseline background, and we would just superimpose images of different objects on it so that remained consistent throughout. NANCY KANWISHER: Uh-huh, and the background was like what? AUDIENCE: Forest floor. NANCY KANWISHER: Uh-huh, OK. So you stick a toaster on the forest floor or something like that, versus a snake or something, yeah? AUDIENCE: The idea was more like other animals, or stuff that would make more sense. NANCY KANWISHER: That's good. That deals with the grass confound problem, right? Absolutely, very good. What else? David, you had interesting ideas. AUDIENCE: Well, we were talking a lot about animate versus inanimate things, so like comparing to a garden hose. NANCY KANWISHER: Yes, garden hose! Love it! I actually ran this experiment a bunch of years ago, and we used a garden hose-- or a bunch of garden hoses, coiled up in the grass. We tried to make them slither and all that. Anyway, but garden hose is great. Say more. You had other good ideas in your-- AUDIENCE: Yeah, we also-- we talked about some of my ideas were looking at videos, with motion. NANCY KANWISHER: Why? AUDIENCE: What'd you say? NANCY KANWISHER: Why? AUDIENCE: Oh, because when you get a snake, it kind of slithers and has this very distinctive thing, that it feels like the motion is what creeps me out when I see a snake. NANCY KANWISHER: Totally. Me, too.
AUDIENCE: And if you have a rigid thing that looked like a snake, but it was just sliding rigidly, then it wouldn't really creep me out. NANCY KANWISHER: Exactly. This is a key insight, right? So think about if we're interested in how you perceive snakes, we want to know not just how you do it in some weird lab environment. We want to know how you'd actually do that. The whole reason to choose snakes is it seems like something that could be biologically relevant. There might be special hardware. When I'm out hiking and I see even a curved stick, I, like, jump and shriek before I can censor myself. It's horrible. I find it very embarrassing. It's not consistent with my self-image. But I have no control over it; it just happens. And so I've thought for a long time, there's some damn bit of my brain that's making me do that, and it pisses me off, and I'm going to find it. Well, we looked and didn't find it. But anyway, you go from those intuitions. Often, your own introspections are very informative, and I think your intuition is exactly right. There's a very characteristic motion that snakes have, and it could be that that's the cue. So then, the trick is you have slithery motion versus what? I don't know. That's hard. What other kinds of motions could you have? AUDIENCE: Right, so you could have just even rigid motion, where it's not slithering, or it's not changing shape. It's just sliding or rotating. NANCY KANWISHER: Right. Right, exactly. Anyway, all those are all good ideas. Good, so what should the subject do in the scanner? Should they lie there and go dumdy-dumdy-dum? Should they do a task? If so, what task? Oh, if you guys don't volunteer, I'm going to start calling on people-- even though, as the Jenkins study showed, it's nearly impossible to look at these damn photographs and figure out who's who. David, in the back.
AUDIENCE: Yes, we talked about having the subjects find a way to indicate that they're paying attention and not just dozing off. NANCY KANWISHER: Right. AUDIENCE: So one idea was to have them essentially indicate the source, make [INAUDIBLE] think about a bunch of problems like, well, we don't want them thinking about snakes for the entire experiment. So-- NANCY KANWISHER: Also, if they're going to tell you they're seeing a snake, maybe by pushing a button, then on the snake trials, they're pushing a button, and on the non-snake trials, they're not. AUDIENCE: Well, on the non-snake trials, they would push another button. NANCY KANWISHER: Ah, but then they have two different motor responses. AUDIENCE: Yeah, so then we would have them run the experiment again, but switch buttons. NANCY KANWISHER: Good, good. Smart, very nice. AUDIENCE: Ideally, we might just have them perform a task that's completely unrelated to looking at snakes or thinking about snakes, just so that they're not affecting-- NANCY KANWISHER: Absolutely, and the point you made before is a good one. If you're looking for snakes all the time, maybe even if it's apples and dogs, you're thinking, is it a snake? Is it a snake? And maybe you're using that region, and it's a mess, right? Absolutely, yeah. All right, so this is a common challenge in experimental design, and these things don't have clear right answers. What I want you to do is just see the trade-offs. On the one hand, just passive viewing, lying there, is good in a way. The things are just impinging on your sensoria, and it's doing whatever it will do. But the downside is subjects fall asleep and get bored, and you don't know if they're awake. So that's a problem. OK, but the key thing is whatever the task is, you don't want the task to engage asymmetrically with the stimulus condition, because then you're building in a confound, right?
So in the group I was in, we were talking about, well, you could have people-- well, we were talking, actually, about faces and objects in that case. So you could have people name the things, but if they're naming snakes versus non-snakes, it's not very good if they're going snake, snake, snake, snake, snake, dog, toaster, apple. One is easier than the other and more repetitive. There are all kinds of problems there. All right, baseline conditions. I didn't really say what a baseline condition was. Sorry about that. What I meant by a baseline is different from a control condition. The control condition would be like non-snakes contrasted with the snakes. Baseline tends to be like a minimalist condition that's supposed to turn the brain off. Can we turn the brain off? No, of course not, but we can aspire toward it. We can go partway out there. We can say, OK, if we're studying vision, let's minimize activity in the visual system as best we can, OK? So you could just have a blank screen that feels like a pretty minimal thing. You can have people close their eyes. The reason that, in vision experiments, people tend to have fixation, where there's a tiny dot and subjects are supposed to hold their eyes on it, is that in natural viewing, left to their own devices, people move their eyes a lot-- several times a second. And moving your eyes produces all kinds of activity in lots of neurons. And so it's a very active visual thing, even if there's nothing on the screen. And so staring at a dot is closer to shutting off your visual system, even though it's not shutting it off. OK, so given that most of the contrasts we've talked about are like faces versus objects or snakes versus non-snakes, and all the activations that I've shown you guys are contrasts between an experimental condition and a control condition, why are we bothering with baseline? It doesn't even figure in that contrast. Yes, Jimmy?
AUDIENCE: Well, if the region is truly selective for only snakes, you could use the baseline as, in this sense, like a control, because you can compare the other control to it. If it's really selective for snakes, then the non-snake objects should respond the same as the [? minimal. ?] NANCY KANWISHER: Awesome, everybody get that? So that was exactly right. And this is, I think, a very interesting point. So suppose we have-- remember, with MRI, you just have two numbers. So here's the snake response, and here's the non-snake response. If we don't have a baseline, that's all we have-- two numbers, OK? And that's fine. If we run enough subjects, that could be significant. But now, let's think what else we know if we have a baseline. Suppose we have a baseline of staring at a dot. And that's down here. We'll call that fixation. Are you impressed? And you've run enough subjects, so that's significantly different. Are you impressed? Yeah. AUDIENCE: Less so than if the fixation were higher up. NANCY KANWISHER: Exactly! Why? AUDIENCE: Because then, if the results are higher up-- or if the fixation is the second one that you just drew-- then the response to a snake is twice as much as non-snake. NANCY KANWISHER: Exactly. Does everybody see how-- yeah, that might be significant, but who cares, right? Some tiny little ratty-ass effect, versus if it's like here, or even-- this is the case Jimmy was talking about, like that. No response at all more than staring at a dot to the non-snakes, and yet this response to the snakes. That would be even more impressive. So there are different degrees of selectivity, right? Not just does it respond differentially, but how selective is it? Oh, boy, I'm going way over time. I'm sorry. So you guys did great thinking through these things. And, of course, I didn't get halfway through my lecture. That's OK, we'll roll over the best parts for later, and the ones that aren't that fun will just go by the wayside.
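Jimmy's point about why the baseline matters can be put in numbers. A toy sketch with made-up response values (the units and magnitudes here are purely hypothetical):

```python
# Hypothetical responses in some region (arbitrary percent-signal-change-like
# units; the specific numbers are invented for illustration).
snake_resp     = 1.2
non_snake_resp = 0.6
baseline       = 0.2   # e.g., staring at a fixation dot

# Without a baseline, all we know is snake > non-snake. With a baseline, we
# can ask HOW selective the region is: how far above baseline does each
# condition drive it?
snake_above    = snake_resp - baseline
nonsnake_above = non_snake_resp - baseline
selectivity_ratio = snake_above / nonsnake_above
print(selectivity_ratio)  # roughly 2.5 with these made-up numbers
```

If the baseline sat just below the non-snake response, the same snake-versus-non-snake difference would be a "tiny ratty-ass effect" relative to overall responsiveness; if non-snakes barely exceed baseline at all, the ratio shoots up and the region looks strongly snake-selective.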
I will put notes on the rest of some of these things, but I think all of you guys pulled out-- just thinking hard about it and using common sense, you can see that a lot of experimental design is common sense. All right, see you guys on Wednesday.
MIT 9.13 The Human Brain, Spring 2019. Lecture 11: Development, Nature/Nurture II. [LOGO SOUNDS] NANCY KANWISHER: So I'm doing another one of these big mongo lectures that takes a whole week, so this is really a continuation of last time. This is the outline for the whole week. We got through most of the stuff on face perception. I'll do some more today. We're right there. And we're going to go on and consider this question of, what's innate, and how do you wire up brains? So first, a brief recap of main points from last time. What, if anything, is innate about face perception? We considered lots of different kinds of evidence, behavioral and neural. And the bottom line is, maybe not that much. So there's a few things that are sort of suggestive, like newborns have this bias to look at faces more than other non-face stimuli that are pretty similar-- schematic faces versus scrambled schematic faces. And that's suggestive. But then there's the possibility that that's just due to some very, very simple property of those stimuli, namely just having more junk on the top than the bottom, like eyes on the top than bottom. So what would have to be innate in that case would be just the simplest possible template, not even a whole face. Similarly, we showed that there's actually very good discrimination of one face from another, even across viewpoint changes in newborn humans, and also in monkeys that were raised without ever being allowed to see faces. And both of those things suggest innate abilities to process faces, but in both cases, it's possible to argue that that ability isn't due to face mechanisms in particular. It's due to just general vision and shape perception. Third, I showed you beautiful recent data showing that the face patches in monkeys don't develop if monkeys are reared without ever seeing faces. Which also suggests that maybe not that much is innate.
So all that is fine, but then there's a big, wide open question that's left unanswered by all of that, which is, how do the face areas know to land right there in everybody, robustly? That really feels like something has to be innate about the brain, at least, to say where those things should go. OK, so one possibility that I'm sort of skipping over, because it's a whole little universe, and there isn't an answer yet-- people are working on it right now, people in this building are working on it right now, but the gist of the idea is that maybe what's innate is some other kind of simpler selectivity. Maybe like selectivity for curved things. Remember how I talked about, as you go up the visual system, you start with selectivity for spots of light and then edges? Well, maybe up there, you're born with selectivity for curved things, or something like that, that is face-like enough that somehow that leads face selectivity to land there later. It's kind of vague because nobody really knows, but that's an idea. Another possibility that we'll talk more about in a moment is a possibility that the reason your face patches land right there is something about the long-range structural connectivity of that region to the rest of the brain makes that the right place. And so all of this is very actively being investigated, and nobody knows the right answer here. Further, I just want to mention that deep net modeling is just very suddenly in the last year become a very powerful way to approach these same questions from a different angle. So with deep nets, you can ask, what do you need to build into a network to get it to produce face patches? So that's a way of asking, in principle, in a network where you can actually control everything about its architecture and about the stimuli it sees, what are the necessary conditions for it to produce something like face patches? What do you have to train it on to get it to produce face patches, and to be able to recognize faces? 
And at the top level, why, computationally, does it make sense to have face patches in the first place? This is kind of the biggest question lurking in the background of this whole field. I'm describing all of these specialized mechanisms in mind and brain, but really, wouldn't it be nice to know why our minds and brains are organized that way, rather than just that they are? And that's a really hard question, and I think there's a real hope now that computational modeling may get us toward an answer sometime in the next decade, maybe even the next few years. OK, so that's the overview. I now want to go on quite a discussion about this notion that preexisting connectivity may be a major constraint in wiring up the brain. So first, we need to talk about, how would you look at structural connectivity in human brains? And I haven't really talked about this yet. The main method for being able to look-- for being able to get some sense of this in human brains is to use another kind of MRI imaging. Uses the same machine that's an MRI machine, but it's going to produce anatomical images that show us not those nice pretty pictures of brains that you're used to, but that show us the direction of water diffusion. And so the principle is pretty simple. Here is a picture of an optic tract. And what it's showing you is that if you see, an optic tract is a whole bunch of axons oriented like this connecting retinal ganglion cells to what? Where do the retinal ganglion cell axons land going through the optic tract? [INAUDIBLE] AUDIENCE: LOG? NANCY KANWISHER: LGN. LGN. Lateral geniculate nucleus of the thalamus. So there's that fiber bundle. But the main point for now is that you can see that each of those fibers has a layer of fat around it, and the upshot of all of that is that water likes to diffuse more in this direction than that direction. That's the key idea of diffusion imaging. It tells you which direction water is diffusing most. 
Water is constrained by the fat layers around those axons, that myelin. And so you get diffusion more in this direction than orthogonally to it. And so the details of the physics of this kind of imaging, which I'm totally not explaining, are such that what you get out is a picture at each point in the brain of what is the direction of maximum diffusion at that point. And so here's a little picture of lots of little vectors saying, at this point, water wants to diffuse this way, or this way, or this way, or this way. Everybody with me so far? So you get a whole bunch of little teeny vectors all through the brain showing you the orientation where water wants to diffuse at that point. And the idea is that's telling us which way fibers are going at that point. And we can therefore infer-- we can follow these things using a method called tractography, where we just follow those little vectors through the brain. And that's what's happened here. At each point in the brain, you start at one point, and you just follow these vectors and see where they go. Does that make sense, sort of intuitively? I'm skipping over lots of details, but I want you to get the gist. OK, so these beautiful pictures that you may have seen before are diffusion tractography. They show you our best guess of the long-range connections between one part of the brain and another based on diffusion tractography. And on the theory that you should wear your data whenever possible, here's mine from my lab. Whoops, I'm tangling it here. So-- I love these things, they're so beautiful. One of my post-docs who's our tractography whiz gave me this beautiful scarf. Isn't this nice? And so you can see even more clearly here that this is a cross-section through the brain in this axis right here. And so these big green guys are the connections that go from the back of the head down the temporal lobe, down the visual pathway that we've been talking about all along. OK, that was gratuitous. I just thought it was fun. 
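The "follow the little vectors" idea is, at its core, simple streamline tracking. Here is a cartoon sketch on a made-up 2D vector field; real tractography works in 3D on measured diffusion directions with many refinements (interpolation, curvature thresholds, probabilistic sampling), none of which are shown here.

```python
import numpy as np

# Made-up 2D "diffusion" field: at each voxel, a unit vector giving the
# direction of maximum water diffusion. Here every voxel points along +x,
# as if one big fiber bundle ran left to right.
ny, nx = 10, 10
field = np.zeros((ny, nx, 2))
field[..., 0] = 1.0

def track(seed, field, step=0.5, n_steps=10):
    """Follow the principal diffusion direction from a seed point."""
    pos = np.array(seed, dtype=float)  # (x, y)
    path = [pos.copy()]
    for _ in range(n_steps):
        iy, ix = int(round(pos[1])), int(round(pos[0]))
        if not (0 <= iy < field.shape[0] and 0 <= ix < field.shape[1]):
            break                      # streamline left the volume
        direction = field[iy, ix]
        pos = pos + step * direction   # take a small step along the vector
        path.append(pos.copy())
    return np.array(path)

path = track(seed=(2.0, 5.0), field=field)
print(path[-1])  # the streamline marches straight along +x
```

The crossing-fibers failure mode mentioned above shows up naturally in this picture: at a voxel where two bundles cross, there is no single direction vector to follow, so a tracker like this one has no principled way to decide whether to go straight or turn.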
OK, so tractography is cool. It makes gorgeous pictures and gorgeous scarves. And it works really well to discover big fiber bundles. There are lots of parts of the brain I showed you with that gross dissection picture last time, that there are big chunks of white matter where lots and lots of parallel fibers go like this. And tractography works well to find those. You can really see those very nicely with diffusion imaging. However, it's not so hot for discovering finer connections. It's better than nothing, but there's a lot of ways in which it fails. So for example, if you have water-- if you have fibers crossing in some part of the brain like this, you'll get diffusion in this direction and this direction, and the tractography algorithm will be finished. It won't know whether to keep going straight or whether to turn. So that's just one of many reasons why diffusion tractography is lovely, and wonderful, and the best we have in in-vivo brains, but it's not so great. Anyway, it's all we have, so we use it. OK. So we can use tractography to ask, for example, is the long-range connectivity of the fusiform face area distinct from the long-range connectivity of its neighbors? In other words, on this idea that that patch of cortex gets wired up to be a face area, somehow because of the connectivity to and from that region to other parts of the brain, then we should predict that that region should have different connectivity than neighboring cortex. Otherwise, connectivity isn't enough of a signature to tell us where to put a face area. I'm seeing blank looks. Is this not making sense? OK. Just butt in and ask questions if I'm not making sense. OK. So question is, do these connectivity fingerprints predict the location of functional regions, first in adults? If we don't see it in adults, then the jig's up. So let's start with adults. OK. So the way that you can do this is, for each voxel in the brain-- this is a big one, so you can see it. 
It would actually be a couple millimeters, wouldn't show on this picture. What you do is you follow that tractography and you say, oh, look, it went there, and it goes there, and it goes there. And you tally how often, when you start here, you land in each of a bunch of different big anatomical chunks of brain. That gives you a description of the connectivity fingerprint of that voxel. How strong is its connection to each of these other remote regions in the brain? That's what I mean by a connectivity fingerprint. So now the question is, can you use this connectivity fingerprint to predict what the function of that voxel is? That is, is the connectivity distinctive enough that, just based on diffusion data, we could say, what does that voxel do? If the fusiform face area has a whole distinctive connectivity fingerprint, then we should be able to predict it. Does this make sense? OK, so that's the question. And there's a lot of math, which I'll skip. I'll just give you the gist. So what we're trying to figure out is, is the fusiform face area distinct from its neighbors in its long-range connectivity? That's the question. And, in fact, it is. And we can show that. Again, I'm skipping over some details, but here is a recently-published paper that shows you in ways that should be familiar now, this is functional MRI activation for faces versus objects. Fusiform face area, that's probably occipital face area, another region we'll talk about later. The face patches. The usual face patches. Again, this is an inflated brain, so the dark bits are the bits that used to be folded up inside the sulcus until they were mathematically inflated. So that's the standard thing we've been looking at. This is the prediction based on diffusion tractography alone in the same subject about where the face patches should be. 
So very roughly, what you do is you take some other subjects, and you train them up on connectivity fingerprints-- it's kind of like MVPA, but you train from diffusion data, and you try to predict face selectivity. And then you take the diffusion data from a new subject, and you predict where that face selectivity should be, and there's where it's predicted for the same subject, and it's pretty damn good. Did everybody get the gist of what I just went through? You don't need to remember every detail. The key idea is, is there a systematic relationship between long-range connectivity of a voxel and its function, its selectivity? And this says yes for faces. OK? So that's the case for faces. That tells us that in adults, those face regions have distinct connectivity. This is the same thing. I just shrunk it so I could fit in other stuff. Here is doing the same thing for scenes. Functional selectivity PPA RSC, functional selectivity for scenes measured with functional MRI, predicted functional pattern from the same subject with just tractography alone. OK? Do you have a question? AUDIENCE: Oh, no. [INTERPOSING VOICES] NANCY KANWISHER: It's pretty good, isn't it? Yeah, yeah. No, I was dissing diffusion. You might be thinking, OK, I was dissing diffusion tractography. It sucks. It has all these problems. It has all these ambiguities. So how could it work so well? That's a good question. I don't know the answer to that. I think in part, it's because you're predicting based on all of these different connections. So even if half of them are wrong, you can still get some predictive power out of it. That's just my guess. OK? OK, so it works pretty well for scenes, and it works pretty well for body selectivity as well. Functional MRI prediction from connectivity. So that's cool. So that says, these all have distinct connectivity fingerprints, but now this is all done in adults.
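The gist of predicting selectivity from connectivity fingerprints can be sketched with a toy linear model on invented data. The published pipeline is more sophisticated (cross-subject alignment, regularization, anatomical parcellation); this just shows the train-on-some-voxels, predict-held-out-voxels logic.

```python
import numpy as np

# Toy setup: each voxel has a "connectivity fingerprint" -- its connection
# strength to each of several remote brain regions -- and a face-selectivity
# value. We invent a hidden linear fingerprint->selectivity relationship,
# fit it on training voxels, then predict selectivity for new voxels from
# their fingerprints alone.
rng = np.random.default_rng(0)
n_train, n_test, n_targets = 200, 50, 8

true_w = rng.normal(size=n_targets)               # hidden relationship
X_train = rng.normal(size=(n_train, n_targets))   # training fingerprints
y_train = X_train @ true_w                        # training selectivity
X_test  = rng.normal(size=(n_test, n_targets))    # held-out fingerprints

w_hat, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)  # fit the map
y_pred = X_test @ w_hat                                    # predict new voxels
print(np.allclose(y_pred, X_test @ true_w))  # noise-free toy, so it recovers it
```

The point made in the lecture about robustness also falls out of this framing: because the prediction pools over all eight (here) connection strengths, errors in any individual tract estimate can partially wash out.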
And remember, the way we got into this long shaggy dog story is to ask what role these long-range connections might play in development. Remember that I said last time that most of the long-range connections of the brain are present at birth. So that suggests that maybe these connections are also there at birth. And it suggests that maybe indeed those connections could play a role in development. At least they're probably there. They're in a position to play that role, if that's actually what happens. So all of this brings us to the case of rewired ferrets. What? What am I talking about? They're cute, aren't they? They're also very good experimental animals to address just this question. And Mriganka Sur in this department did this very important paper a while back where he asked whether connectivity instructs functional development. That is, whether the connectivity present at birth is sufficient to determine the function of the region that has those connections. And he did this by manipulating connectivity. So if you want to ask, what is the causal role of x, you have to manipulate x. So we've talked a lot about this in this class. Functional MRI, wonderful. You see activity. You have no idea what its causal role is until you mess with it. For example, by electrically stimulating the brain. Similarly, connectivity may be present at birth, and we may be able to use it to predict where the functions land. It doesn't tell us that it's playing a causal role. The way to find out if it's playing a causal role is to change it and see what happens. And that's what Mriganka Sur and his colleagues did. So they used ferrets because they're born very prematurely. And so what that means is that you can operate on them surgically right at birth before they have any visual experience. They haven't opened their eyes yet. And you can-- turns out-- reroute some of the connectivity. OK, so this is a diagram of some bits that should be familiar. 
The retina going to the lateral geniculate nucleus and then up to V1. Also true in ferrets. In addition, we have primary auditory cortex that we'll talk more about in a few weeks. So just like V1, but for hearing. A1. A1 also goes to another nucleus in the thalamus. This one called the medial geniculate nucleus. And then it goes from there up through a complicated chain, eventually-- oh, sorry, it goes this way. Thalamus up to A1. So that's the basic wiring of an adult ferret. And so what Sur and his colleagues figured out how to do is redirect some of those connections by surgery at birth. So this is a wiring diagram of the same thing shown here. Retina, LGN. This is V1, it's also called 17. And here is medial geniculate and auditory cortex. And so what they did was to surgically knock out a few of these connections here in the just-born ferret pups. And what happens is if you knock out this connection here, the fibers that start this way get rerouted, and you end up with a ferret that's wired up like this. The important part of this is this rewired ferret has a connection between their retina and medial geniculate nucleus that goes to primary auditory cortex. So we're taking visual input at the periphery and wiring it up into the auditory system. And the point of all of this is now, primary auditory cortex in this developing ferret will be getting visual input. And so if the input were sufficient to determine the function of that region of cortex, then what should we find in these rewired ferrets? What should happen in what would have been primary auditory cortex? What should it do? Christine. AUDIENCE: [INAUDIBLE] visual-- NANCY KANWISHER: Yeah! It should behave like visual cortex, absolutely. If everything's determined by the inputs, we change the inputs, it should behave like visual cortex. Well, that would be freaking crazy, wouldn't it? I mean, it's miles away in the brain. It's a totally different part of brain. That will be nuts. But that's what happens. 
It's pretty amazing. This is a really important study. OK. All right. So what you find, first of all, is that primary auditory cortex in the rewired ferrets responds to visual input. That's cool. But you might say, OK, you wired visual input in there. Of course it's going to respond to visual input. So maybe that's not too cool, but not too surprising. But the next part is really cool and really surprising. Remember how I said that in normal visual cortex-- in humans and monkeys, and also ferrets-- you get these orientation columns. Now, remember, these are-- what this shows is that as you move across the cortex in V1-- we're now talking visual cortex here-- in visual cortex, in normal mammals, you get this smooth progression of orientation selectivity as you move across the cortex. And that's what's shown here. Everybody with the program? OK. So that's normal primary visual cortex in an adult animal. What do you think primary auditory cortex looks like in the rewired ferrets? Damn similar. So not only do you get visual responses in what would have been auditory cortex when you rewire, you get orientation columns. You get this really fine-grained structure of what everybody thought this was something about visual cortex. Well, this says that visual input is sufficient to produce orientation columns in a part of cortex that otherwise never would have had them. Does everybody see how mind-blowing this is? OK. So that's pretty cool, but now we get to the really cool question. When these neurons are active, does the ferret see, or do they hear? OK. It's rewired. It's getting input from the retina, but there's neurons in what would have been primary auditory cortex now responding to visual input. What does the ferret think is going on? Does he say, oh, that's sight, because he's learned that visual input means that's sight? Or does he say, I hear something, because that's auditory cortex. Everybody in the grip of what a cool question that is? OK. 
And so it could go either way. There's really no way to tell in advance. It depends on how you read out the information in that piece of cortex. When we do MVPA, we sit god-like by, and we look at a patch of brain, and we decode what's in there. But really, what's happening in the brain is some other part of the brain is getting input, and decoding, and interpreting it. And so the question is, what do later parts of the brain make of this? And the answer is the later parts of the brain learn that that's visual information, and the ferret reports seeing stuff, not hearing it. Now, you may be thinking, how the hell do you ask a ferret if he's seeing or hearing? What you do is you use non-rewired parts of the same ferret's brain. Actually, I forget if it's the other hemisphere or a different part of the visual field that doesn't get rewired. So you have a gold standard, where normal vision is working, and normal hearing is working in the ferret, and you train him, press this button when you see and press this button when you hear, and it's unambiguous. And then once he's trained, you stimulate those A1 neurons and you ask him what's going on, and he says he sees something. OK? All right, so this is one of the true classics. OK. So this means that A1 in this case, primary auditory cortex, is instructed by its connectivity and by the experience that comes through that connectivity to shape its function. Everybody got that? All right. So both experience and connectivity can determine cortical function, at least in ferrets. What? Yes, question. AUDIENCE: I have two questions. So first of all, what does their V1 look like after this rewiring, and also, can they hear things, and if so, where is it? NANCY KANWISHER: Yeah, absolutely. OK, so if you look at the diagram, there is additional-- well, actually, it's not in the diagram. But there is additional input that's not shown here. So they can hear things through maybe the other hemisphere, I forget. They can hear. 
And they can see, because notice-- that's right. OK, we blocked off area 17, but these guys are higher-level visual areas. So they can see both through their non-rewired hemisphere and through some other bypassing connections to other parts of visual cortex. Probably, both of those are going to be affected. Your vision is going to be different if you bypass V1. But there will be at least some visual information. OK, so that's ferrets. Again, animals, you can do invasive studies and really do the strong manipulation to do a strong test of a causal role, and this is a classic example. Of course, we can't rewire humans-- or we could, but it wouldn't be nice. But really, we want to know, how does all that stuff get wired up? Are these regions also-- is their function determined by the connectivity present at birth and by the experience those regions have? OK. Well, we can't do controlled rearing studies in humans. We can't rewire their brains. But we can be clever and smart and think of other cases. So here's an important test case. The important test case is the case of reading. Why reading? Well, one, we all spend a lot of time doing it. And two, humans have only been reading for a few thousand years. And that's not long enough for natural selection to have crafted an innately-specified circuit just for reading. So that means that if we did find a patch of cortex that responds selectively to visually-presented words, or letters, that would suggest that for that case at least, experience was sufficient to wire up, to determine the function of that region of cortex. This is all very hypothetical. Everybody got the idea? OK. Now, notice, this does not apply to hearing words. People have been hearing words for hundreds of thousands of years, perhaps millions. And so that's plenty of time for special purpose circuitry, and that special purpose circuitry exists and we'll talk about it in a month or so. 
But now we're talking about the case of visual word recognition-- this recent cultural invention of humans. So that's why it's a special case, because we know that's too recent to be innate. And so if we find a selectivity, it can't be innate. All right? So that's what I just said. So do we have such a thing? Well, how would you test for it? What would you do? Joseph, what would you do? You want to know if there's-- AUDIENCE: I guess I would show them words, and then show them not words, and see-- NANCY KANWISHER: Yeah. It's not rocket science, guys. We just keep doing the same damn thing. Exactly. Right. So start by-- here's what we did. We showed people visually-presented words like that, and we showed them line drawings of objects. And when we did that, we found that in most subjects, there's a tiny little patch of the bottom of their left hemisphere right near the zones we've been talking about, near face selective and other regions on the bottom of the brain. But that tiny little patch responds significantly more to words than pictures. Now, we won't do this now, but you can do it as a thought experiment. What are the alternative accounts of that activation? Has this shown that that region is selectively involved in reading? Of course not. There's a million differences between-- oh, come on-- these and those. How bright they are, how big they are. It's a million differences. And so to get serious about it, we have to do the same game that we've been playing all along in this course. This is like a first whack at it. You find something, now we have a candidate. But if we want to get serious, we've got to test some other conditions to see if that's really for real. OK? All right. So here's what we did in my lab when we did this a while back. So first of all, this is left-out data. 
Once you find that region-- remember, if you're trying to characterize the function of a region, I talked briefly about this, a good way to do it is to run a localizer to find that region in each subject. Now we found it. Now we have those voxels. Now we collect some new data that may be a lot like our localizer. It doesn't matter. We collect some new data and we look at the response. And that just puts us on stronger statistical footing. OK. So here is time going this way. This is something called an event-related design, where you just present a single stimulus, and then wait, and another stimulus rather than a whole bunch of them mushed together in a block. And then you average over many, many repetitions. And so this is the response over time-- it's seconds, it's really slow-- to words and line drawings in that region. So this is just replicating what I showed you before. It's showing you what the actual selectivity looks like in the real data, not just in a significance map. Why is this thing taking six seconds to respond? This is stimulus onset out there. Yes. AUDIENCE: That's the time between blood flow? NANCY KANWISHER: Yeah. Remember, the signal we're looking at is based on blood flow. The neurons all fired right here, but it takes a while to get the blood flow to change. That's why it's delayed. Exactly. OK. All right. So what else are we going to test? Well, you can do lots of different things. We just tried lots of things. We said, OK, let's have other things that are symbols but that our subjects can't read. So we tried Chinese characters, low response. We tried digit strings. Pretty low response. That's pretty remarkable, because words and digit strings are pretty similar in how we use them and what they look like. So that's pretty good. We tried consonant strings, like this, that you can't pronounce. And we got the same response. And this is important. It tells us this region is not a word region. Instead, it's something about recognizing letters. 
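The six-second lag discussed above can be made concrete: the measured BOLD signal is (to a first approximation) neural activity convolved with a slow hemodynamic response function. The HRF below is a generic gamma-shaped textbook curve, not the one from any particular analysis.

```python
import numpy as np

t = np.arange(0, 20, 0.1)                 # time in seconds
hrf = t**5 * np.exp(-t)                   # generic gamma-shaped HRF, peaks at t = 5
hrf /= hrf.max()

# Neural activity: an instantaneous burst at stimulus onset (t = 0).
neural = np.zeros_like(t)
neural[0] = 1.0

# BOLD ~ neural activity convolved with the HRF, which is why the
# measured response lags the stimulus by several seconds.
bold = np.convolve(neural, hrf)[: len(t)]
peak_time = t[np.argmax(bold)]
print(peak_time)  # ~5 seconds after onset, even though neurons fired at t = 0
```

In an event-related design, averaging over many repetitions of this delayed, blurry response is what produces the slow time courses shown for words versus line drawings.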
But for the purposes of the current argument, that's OK. It's still something that has no basis in human evolution, and so if we find selectivity for letters that are presumably used in the process of reading, that must have come from experience. OK? What else did we do? OK, that's what I just said. Now, I submit that this is a pretty good argument that that region must have been wired up by experience. But you could niggle. You could say, well, there are more straight edges with the words and consonants. The digits are curvier, or whatever. You could make up some story about how that isn't necessarily selective for letters and words, and therefore, maybe it's not necessarily wired up by experience. Further, who knows? Maybe everybody just has that weird selectivity in there even if they never learned to read. So it would really be nice to make a stronger case. And what we did was we couldn't find people in Cambridge who couldn't read, who didn't have other things going on, but we could find people who did read Hebrew. And we had-- where's my Hebrew data? All right, hang on. OK, right. So here are our non-Hebrew readers. This is funny. This is an old graph. It's not so impressive-looking. This is-- I forgot to switch out our newer data. OK, so what we found is in people who don't read Hebrew, the response was lower to Hebrew than to words. Looks like it's almost as high, actually. When we ran more subjects, it's actually quite a bit lower. Nonetheless, when we ran people who read both English and Hebrew, the Hebrew response is higher. And that nails the case that it's actually that individual's experience that determines the selectivity of this region. It depends on what orthographies you know. If you know how to read Hebrew, you get a high response. If you don't, you get a lower response. Everybody get that this pretty much nails the case? OK, so where are we? All of this was to say, do we ever see selectivity in the brain that can't be innate? 
And I submit to you, this is selectivity in the brain that can't be innate, that has to be learned. And in fact, our data show that it depends on the subject's experience. OK. So-- good. So yes, we have such a thing. It's called the visual word form area. Now, what about this idea that connectivity of that region is playing a role-- it's in a very systematic location. It's that little orange thing right there. Yes, question. AUDIENCE: Question. I'm just trying to think through the alternative. The brain has to be shaped by experience, otherwise you would never learn anything, right? NANCY KANWISHER: Absolutely. AUDIENCE: Even if this didn't show that difference, it would just mean the difference is something you're not measuring. NANCY KANWISHER: Absolutely, absolutely. You wouldn't be able to understand the sentence I'm saying right now without changing your brain, because by the time you get to the end of the sentence, you need to remember what I said at the beginning of the sentence, so there's little things structurally wiggling around in your brain and changing synaptic connectivity online all the time or you wouldn't be able to think, let alone remember. Absolutely. So the question here is more specific. It's not whether the brain changes with experience. Absolutely, it does. It's whether experience can explain these particular cell activities and where they came from. I'm glad you asked that question. OK. OK, so now, we've just argued that the selectivity of that little dot, at least, must be due to experience. Doesn't tell us about the others, but tells us that one must be. And now we're asking, can its selectivity-- can that location be determined by the connectivity of that region? So to get to that, we use diffusion tractography. And the hypothesis here is that it's these long-range connections that determine where those functional regions land. This is me with a bunch of functional regions in my head. Doesn't matter which ones. 
We're just asking the general question. And so I'm going to skip over all the details, but just give you the gist of a recent paper that we published looking at this. We asked-- we found the visual word form area. That's right down in there, about there, left hemisphere. And we scanned kids at age eight and age five, same kids. Age five, then age eight. Here's the age eight data. These kids have learned to read in between the two scans. And here is the response of their visual word form area to words, faces, objects, and scrambled objects. Nice and selective, just like a good visual word form area should respond. So it's there by age eight. What we then do is we take the data in the same kid across those three years, align the data, and say, what were those voxels doing in that kid at age five before they learned to read? This is another way of showing that it's experience that was necessary. And boom, they were not word selective. They shouldn't be. These kids hadn't learned to read yet. But it's still kind of nice to be able to show that. All right? But now, the hypothesis is that it's the connectivity at age five that predicts where this region is going to land. So we use that same rigmarole that I showed you earlier for adults, where we used just diffusion data to predict where the functional region will arise. But we use the diffusion data from five-year-olds to predict where that region would arise when the kids were eight. And it turns out you can do that. You can predict actually fine-grained individual differences in exactly where the visual word form area will arise at age eight from that same kid's connectivity at age five. So does everybody see how that fits one of the necessary conditions for this idea that the locations where these things land later in development is determined by connectivity that exists before? Now, our study was done in humans, so we didn't have a causal test. All we can say is it was there before, and it's sufficient. 
But we don't know if that's actually how it worked-- whether that's how it is working in humans. But if you put it together with the ferret data, it's pretty suggestive. All right? Yeah. AUDIENCE: Where is it connected to? NANCY KANWISHER: Ah. Very good question. I'm being very vague, connectivity. This is a long, complicated issue. Most likely, it's connected to language-y areas, which we'll talk about in a month or so, that are out on the lateral surface and up in the frontal lobe. There are papers claiming that it's connected to language-y areas. But I'm kind of a methodological hard ass, and I don't quite believe those data. I mean, I think they have a medium case, but they haven't nailed it. I've tried to nail it. It's hard for all of the reasons that this method that I was complaining about-- I'm complaining about it because I'm bitter about it. I want this method to be better. I want to know what those actual structural connections are. I wish we could put a seed in the visual word form area and follow those tracks and say not just there's enough of a fingerprint that we can predict its function, but here are the exact connections. And it's, mm, not quite up to that task, in my view. It's a big bummer. I've wasted a lot of the last year trying to get that method to work, and I haven't quite given up yet, but I'm close. It's OK. It's just not good enough to answer those questions, which is very frustrating because they're pressing questions. Yeah. AUDIENCE: Can I ask one more question? NANCY KANWISHER: Yeah. AUDIENCE: So people who are blind shouldn't have this region active. NANCY KANWISHER: Ooh, very interesting question. What do you think? People who are blind read. What do you think? AUDIENCE: So the connection between here and the visual system for the blind people goes from that region and touching, since they're-- I don't know. NANCY KANWISHER: Yeah. Yeah, it's not obvious. It's not obvious. 
There are several papers-- which I was going to put in this lecture and I just couldn't fit. But there are several papers that argue that tactile Braille reading in congenitally blind people activates that same region. They're pretty good papers. I sort of believe it. I have-- as I say, I'm a little bit of a hard ass, so I'm not 100% convinced, but they're pretty compelling, and it's a very interesting question. And it's a whole saga. It's so interesting. I'm going to try to incorporate more of this in a later lecture, because I didn't fit it in here. Yeah. And the idea would be, if you had to guess, what will those connections be that drive that? Certainly not visual input. They're not getting visual input. So it would have to be input from language-y regions or something like that, that would also be present in blind people. See what I mean? OK. All right. Anyway, all of this just to say that it looks like the visual word form area is kind of special in the human brain because, one, it shows us that at least one region gets its selectivity from experience, and two, because it develops later, it gave us this opportunity to ask if the connectivity was present before the function as a sort of weak test of this hypothesis that connectivity determines function. All right. Boom. All right, so where are we? This really is a shaggy dog story lecture. OK. So we started off by saying a lot of the basic structure of the brain is innate. Most of the neurons in your brain, you had at birth. Most of the long-range connections were present at birth. They weren't yet myelinated, but they were there. We've argued that some of these selective cortical regions appear to depend on experience. For example, the face-deprived monkeys don't have face patches. And the ferrets see via responses in auditory cortex when their auditory cortex has been rewired to get visual input. 
And further, I've argued that the visual word form area, the selectivity of that region can't be innate, and yet it arises at a consistent location, possibly because of these long-range connections of that region. So all of this looks very experiential, aside from the structural stuff that's present at birth. So is Kant toast? I started last lecture by saying he was reacting against the empiricists, saying not everything is derived from experience. We need to have a priori conditions of cognition. Remember, he said, "space can be given prior to all actual perceptions, and so exist in the mind a priori. And it can contain, prior to all experience, principles which determine the relations of these objects." So he's basically saying we have an innate representation of space. And I've just been giving you all this evidence for all the other cases that experience seems to be playing the major role. So is it all over for Kant? Well, actually, Kant was talking about space and time primarily, and we haven't considered that yet. So let's get back to space. Remember these spatial representations that I talked about in the rodent brain. Four different kinds of neurons that are present in adult rodents that play wonderfully different roles in navigation. Remember, there are place cells that fire only when the rodent is in a given known place in his environment. There are direction cells that fire only when the rodent is oriented in a given direction in his environment. There are border cells that fire only when the rodent is near a border of the space he's in, like right now, I have cells that are firing because I'm next to this border of this space that I'm in, and Anna does not have any of those cells firing because she's in the middle of this space. And there are grid cells that have this amazing property of firing in an array of little micro place cells spaced evenly in a hexagonal array. 
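That hexagonal firing pattern has a standard idealized model in the computational literature: sum three cosine gratings oriented 60 degrees apart, and the peaks fall on a hexagonal lattice. A sketch, with an arbitrary grid spacing:

```python
import numpy as np

def grid_cell_rate(x, y, spacing=1.0):
    """Idealized grid-cell firing rate: three cosine gratings 60 degrees
    apart sum to a hexagonal lattice of firing peaks."""
    k = 4 * np.pi / (np.sqrt(3) * spacing)   # wave number for the given spacing
    rate = np.zeros_like(x, dtype=float)
    for theta in (0, np.pi / 3, 2 * np.pi / 3):
        rate += np.cos(k * (x * np.cos(theta) + y * np.sin(theta)))
    return rate  # peaks are the "micro place fields" of the grid cell

# Evaluate the firing map over a small patch of the environment.
xs, ys = np.meshgrid(np.linspace(-2, 2, 81), np.linspace(-2, 2, 81))
rates = grid_cell_rate(xs, ys)
print(rates.max())  # maximum of 3.0 at the lattice vertices
```

This is a descriptive model of the firing map, not a mechanism; it just makes precise what "evenly spaced in a hexagonal array" means.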
OK, so all of this apparatus that I talked about last time that seems to be playing a role in your concept of where you are, where you're oriented, and the space around you, if we had to take some representation of space that Kant might have been talking about, this would be it. So is this stuff innate? Well, happily, all this work was done originally in rodents. All the most detailed work was done in rodents, so we can ask that question, because it's an animal. OK? All right. So what the Mosers and their colleagues-- the husband-wife team who got the Nobel Prize in 2014 for their work on the grid cells-- and O'Keefe and their colleagues in London, who discovered place cells in the first place-- two different groups simultaneously realized what a huge, big, fabulous question this was, and they both did the experiment at the same time, and they published it together at the same time about four years ago in-- I forget-- Science or Nature. Big event in the field. So they both realized the same thing. The way rodents grow up, they hang out in a dark nest. They're very premature at birth, and they can't really do much. They can't move around. All they can do is turn their head toward a nipple and suck milk. That's kind of it. And so there they are, in the nest, in the dark. Their eyes don't even open until the end of the second week of life. And at the same time, it's the first time they emerge from the nest, and the first time they have any experience navigating, any real experience of space. And so we can ask which of those cells are present at the very first experience. And it turns out that-- sorry, this is a little hard to see. There's a light yellow overlay. This is the window when they first open their eyes and leave the nest, between postnatal day 12 and 14, the end of the second week of life. And what you see is the head direction cells are present immediately, as soon as you can first collect neurophysiology data from these newborn rat pups. 
They're there right away. Place cells, you can get them pretty early, and grid cells soon after that. So this suggests that in the rodents, at least, their representation of space as entailed in the properties of these neurons is largely innate. So just like Kant said way back in the 1700s. Everybody get this? It's pretty cool. It's a rare opportunity where you can just take a huge, big philosophical question and, boom, answer it with data. Yeah. Awesome. OK. Yes. AUDIENCE: Wait, sorry-- NANCY KANWISHER: I'm sorry, is it Martin? Yeah. AUDIENCE: Sorry, are you saying that it's innate or that it's learned? NANCY KANWISHER: Innate. Innate. AUDIENCE: --takes time-- NANCY KANWISHER: Because-- oh, yeah. OK, important point. OK, we don't know before then whether they existed. They were in the nest. You can't really do neurophysiology on the rodents in the nest. The point is, none of the relevant experience has happened before then. They haven't opened their eyes, they haven't navigated. So none of the experience that could be relevant for navigation has happened before right here, on the very first time that you can test it, and the very first time that they could possibly be in the world, seeing the world, navigating, they have them. But what you point to is an important point. I mentioned this briefly last time, but it's really worth repeating. Innate-- I guess the word "innate" can be used different ways, but what I mean by innate here, the sense of innate relevant to the big questions, is whether it's specified at birth, not whether it exists at birth. Remember, I gave the case of puberty. Puberty happens way after birth, but it's not the result of experience. It's part of a genetic program. It's just going to happen. I mean, I guess if you don't eat anything, you'll die and then it won't happen, but within broad latitude, it's not the result of experience. 
And so you can have maturation on a biological autopilot that continues independent of experience, and that's the relevant kind of innate. I realize I was probably confusing. Innate for this purpose doesn't mean present at birth. It means determined at birth, essentially, independent of experience. Good. You guys are asking good questions and it's helping me be clearer. OK. OK, so that's cool. That says that those cells are all present very early on, and presumably independent of experience. What about re-orientation? Remember, re-orientation is this cool thing that I carried on for a long time about because it's so interesting. Reorientation is this particular aspect of the navigation system. It's been studied behaviorally in rodents, in young humans, and human adults. And lots of other animals, actually. And the key thing about reorientation is this is how an animal gets their bearing when they're disoriented. And the key finding is they use the shape of space around them. They don't use landmarks to reorient themselves. That's the key finding. This is all stuff I talked about before. And the evidence that animals use the shape of space to reorient is, when you have shown a rodent that there's goodies in that corner, the left side of the short wall, essentially, and then you disorient him and put him back in the box, he goes 50/50 to those two corners, showing that he's learned something like the food is on the left side of the short wall. Not in words, presumably, but some mental language that holds that information. OK, so that's using the shape of space for reorientation. Is that ability to use the shape of space-- this is a different sense of space than head direction cells, the shape of space around you-- is that present independent of experience? Well, again, we can't test that in humans because we can't deprive humans of experiencing the shape of space around them. Was there a question? No? OK, all right. 
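Why exactly 50/50? In a rectangle, the local geometry at the goal corner is duplicated at the corner 180 degrees away, so shape alone cannot distinguish them. A toy computation makes this concrete (arena dimensions arbitrary):

```python
import numpy as np

# Corners of a 2 x 1 rectangular arena, listed in order around the box.
corners = np.array([(-1.0, -0.5), (1.0, -0.5), (1.0, 0.5), (-1.0, 0.5)])

def geometric_signature(i):
    """Lengths of the two walls meeting at corner i, in consistent order.
    This is all a disoriented animal has if it uses only the shape of space."""
    prev_wall = np.linalg.norm(corners[i] - corners[i - 1])
    next_wall = np.linalg.norm(corners[(i + 1) % 4] - corners[i])
    return (round(prev_wall, 6), round(next_wall, 6))

sigs = [geometric_signature(i) for i in range(4)]
goal = 0
matches = [i for i in range(4) if sigs[i] == sigs[goal]]
print(matches)  # [0, 2]: the goal corner and its 180-degree twin match
```

Exactly two corners share the goal's signature, so an animal (or chick) reorienting by geometry alone should split its searches evenly between them, which is the 50/50 pattern the experiments find.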
But we can test it in animals with something called controlled rearing that I've talked about before. So again, we can't test this-- even in animals, it's hard to test at birth. Lots of animals can't navigate very well at birth, right? So we want to test them after birth, but we don't want them to have the relevant experience, because that's what we're asking, is would this ability be there even without the relevant experience. OK. So the answer to all of this, the way around this is to use controlled rearing. Just like Sugita did with the face-deprived monkeys, and just like our Carl also did with face-deprived monkeys-- the behavioral study and the functional MRI study. But this will be a controlled rearing study in a different organism, and it's pretty cute. It goes like this. This is a group in Italy that has a whole lab that uses this paradigm, and it's very, very powerful. So what they do is they-- again, I just said this. The whole idea is raise an animal without the relevant experience, figure out if the ability arises anyway. So in this case, what they do is they get fertilized eggs, chicken eggs from a local hatchery that's conveniently near their lab. They bring those fertilized eggs into the lab and put them in an incubator, and they hatch them in darkness. Then for the first few days, you get a nice little chicken. It's in the light here, but that's just so you can see it. It actually hatches in the darkness, so there's no visual experience. Then you put them in cages of different shapes. Either a nice rectangular shape like this that would be relevant for reorienting, or a circular space like that that has no geometric cues because it's symmetrical. So they spend their first three days of life in one or the other of those containers. You then, in order to get a behavioral result out of them, you have to use their natural behavior, which is that they imprint on mama bird. And you may know that imprinting is pretty non-specific. 
Baby birds will imprint on nearly anything that moves. So they take a big, red plastic object, and they dangle it in the middle of the cage, and little chicks follow the red object. That's mom. That's what they do. So then you can use that behavior to test their ability. And so you get them in the groove. You show them mom, and mom disappears behind an occluder. And then you let the chick go follow mom, which the chick wants to do. So they do a few trials like that. They've imprinted. They're going to follow mom. This gives us a way to ask the chick, where do you think mom is? And that gives us a way to ask, what cues are you using to reorient, even though you've been raised without geometric information. All right. And the thing I really love about this-- oh, I guess it's on a later slide-- is that after you do the whole experiment, you take one or two trials on that chick, you're done with that chick, they have the relevant experience, you give them back to the hatchery and the hatchery does their thing. So it's just like a really nice little symbiotic science-farming enterprise. OK, so here's actually what they do. So here's how the re-orientation test goes. After this chick is raised in one of those two environments-- the circular one with no geometric information, or the rectangular one with geometric information-- and they've learned to follow big red plastic mom, you then put the chick in this box here. The chick is in there in this wire mesh that holds them in there so he can't run around. He's in this rectangular space, and there are four symmetrical occluders in the corner. You then take the red object-- mom-- and hide it behind one of the blue panels in full view of the chick. So now the chick knows where mom is. Now you bring down an opaque cylinder around where the chick is. And while the opaque cylinder is down, you rotate the box 90 degrees. 
So now, the chick has no way to tell things are rotated, I'm disoriented, what's what, how do I know where to go. So this is reorientation in a newly-hatched chick that's been reared under controlled conditions. All right. So now, once you rotate the box, then you lift up the opaque occluder, and the cage, and you see where the chick goes. Everybody get this? It's a little bit convoluted. But it's just a version-- it's a chick version of the same reorientation task we've been talking about all along. You do 16 trials, and then you give the chick back to the hatchery. OK. So here's what happens for chicks that are raised in that rectangular cage. They have geometric experience during those first three days of life. So this is kind of a control case. And what you find is that when you've hid mom in a corner that is on the right side of the short wall, they go preferentially to the two corners consistent with that more than the other two corners, consistent with the idea that they can use geometric information to reorient themselves. They're not perfect, but they're way better than chance. Does that make sense? They go to the two corners that are consistent, showing that they can use the geometric information. But these are the chicks that were raised with the geometric experience. What about the chicks raised in the cylinder, without geometric experience? They do the same thing. And this is the first time they've experienced-- this testing condition is the first time they've experienced any space that isn't symmetrical, any place where they could possibly use geometric information to orient, and they do it on the first trials. Everybody got that? So that tells us that this ability to reorient based on the shape of space when you're disoriented doesn't require experience with the geometry of space. Now, you might be thinking, well, that cylindrical cage, it doesn't have something to break the symmetry, but there's still something geometric. 
There's a floor, there's a wall. I agree, that bugged me too. They did another experiment in which they raised the chicks in total darkness. First three days, no visual experience at all, and the chicks still do that. So no visual experience. That's an even stronger case. Was there a question percolating in here? I felt like-- no, OK. All right. So yes, the reorientation system-- actually, that's not well expressed. The ability to use geometry to reorient is not based on any experience with geometry. It must be innate in the sense of not requiring experience. So go Kant. All right. So where have we gotten to? Let's recap. What's innate? OK, in the face system-- I went through this before, maybe not that much. We could quibble some of the cases are ambiguous, but the main evidence suggests that-- before you posit that something's innate, it's like the evidence-- you have to have strong evidence for innateness to argue with. The default case is not innate, right? It's kind of an extreme claim, and so the default is not innate, and so right now, we don't have a strong argument that any of the face system is innate other than this bias to look more at faces, which as I said might be a very rudimentary template. OK. I talked about the role of connectivity and cortical development. Most of those long-range connections are present at birth. I showed that connectivity can causally affect development in the case of the rewired ferrets. I showed that category selective regions in human adults have distinctive connectivity. And I showed that in the visual word form area, the distinctive connectivity is present before the function. OK. So that tells us that there's one region in the brain that we know the selectivity of that region can't be innate. It doesn't tell us about all the others. Who knows? It's kind of an existence proof. They might all be learned by experience. We look at faces a lot. We look at scenes a lot. We look at bodies a lot. 
Maybe they all have the same experiential basis. Doesn't prove it. It just says maybe. All right. But then I showed that for the space system, actually, we do have pretty strong evidence that a lot of it is innate, both in that the head direction cells are present before any visual experience or any navigation. And I showed that the chicks can reorient based on the geometry of space, even if they've never seen space or geometry before. So bottom line, face system, who knows, but no strong evidence for innateness. Visual word form area, strong evidence that it's experientially based, and space system, strong evidence that a lot of it is innate. OK. All right. I got us to here. All right. Now, all of this time, I've been talking about, how do we wire up this system and its cognitive correlates in development? What do you have to build in to get a system like this in development? What can you get through learning? What do you have to build in, and so forth. But it's a related but different question to ask, is that the only possible way it could work, or are there situations where we might have a very different kind of organization of the brain? Are there other possible organizations that might develop under different circumstances that would still work? And the two relevant cases that people have looked at are cases of brain damage. So if you have brain damage in adulthood, and you lose a little piece, can that piece move over and reorganize? Is there another possible organization that would work? Or what about if you have very, very different visual experience, like you're born blind. Then do you get the same organization, or does everything go haywire and you have a totally different kind of brain organization? All right, so I'll give you a little bit of data on each of those questions. All right. So first of all, can the brain reorganize after brain damage? 
The main domain where people have studied this-- which we haven't talked about yet, but we will in a month-- is the case of language. So it's just something there are lots of studies of this. People have been onto this question for a long time. In fact, Broca wrote about this question 200 years ago. So the basic findings are that if you have damage to your language parts of your brain in adulthood, that is not good. Often, you'll recover a little bit of function, but you really won't get it back. It's just a big massive drag. There are people we will talk about in a month when we get to the language section who have had massive left hemisphere strokes that basically take out their entire language system. And it doesn't come back years after that stroke. We'll see, actually, that they're cognitively pretty normal in every other respect. It's quite amazing how much they can do without language, which is fascinating. But for present purposes, the main finding is brain damage in adulthood that takes out language functions, not good. Not much recovery, not much reorganization. By the way, there's a whole-- it's very trendy in popular media to talk about, oh, the brain is plastic, you can rewire your brain, take this-- use this smartphone app and rewire your brain. Mostly, that stuff is just bullshit. You can learn a task, and you can get better at that task, no question. But you can't make yourself smarter. You can't rewire your whole brain. That's garbage. All right. Back to aphasia. OK. The story is very different for brain damage in kids. If you have brain damage in the first few months of life to language parts of the brain, as an adult, your language function is pretty good. It's not quite perfect. Took people a while to discover that it isn't quite perfect, but it's surprisingly good. For everyday uses, you might not even notice. You have to test people on esoteric syntactic things to discover that, actually, it's not quite right. But it's very good. 
And typically, what you see, if you scan these kids, is that a lot of language function has reorganized and shifted over to homologous regions in the right hemisphere. OK, so that's better news. After age five, if you have brain damage, not so good. So it's like there's some critical period for when the brain is plastic. You can move language over to the right hemisphere up until around age five, and after that, you can't really. All right, so these consider-- right, that's what I just said. So these considerations have been pulled together under something called the Kennard Principle. And the Kennard Principle basically says, if you're going to have brain damage, have it early. Better not to have the brain damage, but if you have to have it, have it early. And that's based on findings like this-- the fact that the kids who have left hemisphere damage have much better language function as adults than adults who have the same kind of left hemisphere damage. OK, so that's a reasonable summary of the language literature. However, this finding doesn't always hold. And it has led others to put forth the Hebb Principle, which is sort of the opposite. The idea of the Hebb Principle is that, first of all, it depends. It depends on where the damage is. It depends on when you test after brain damage. But the key insight that will make this seem more sensible-- at first, you feel like it's very intuitive. Kids are more plastic in all kinds of ways, right? Watch me using a computer, it drives my students insane, I'm so slow. One of my students once-- back when I used to actually scan subjects, one of my students was watching me scan, and he's just getting more and more impatient, and he finally is like, it's like watching my mother. It's just like, you cannot become as fluent at things when you start doing it when you're 50. It's just what it is. We've all seen that manifest in various ways. 
OK, so that's generally true, and that's consistent with this Kennard principles that you have more flexibility when you're younger than older, which is also why you guys should learn lots of math and computer science now while your brains are still good at it. Don't wait until you're 40 when it's harder. You will need it. No matter what field you are in, you will need it, so do all of that now. OK. But to get back to the topic at hand, what is the idea behind the Hebb principle? The idea is, think about building a house. You can't build the first floor if you haven't built the foundation. Similarly, you might imagine that there are lots of aspects of cognition that are necessary precursors for other aspects of cognition. And if you're wiring up a whole brain, you're not going to develop those second order ones if you don't get the first order ones. And so if you have damage early in life, you may have bigger long-term consequences. Really concrete kind of silly example. Suppose you have damage to primary auditory cortex at birth, and you're deaf. Well, you're going to have a harder time learning language because you need to hear to get language. I mean, if you have smart parents, they'll teach you sign language, you'll be OK. But this is a necessary prior condition. And so more generally, it turns out that in a lot of domains, some aspects of brain and cognition are necessary precursors for others, and in those cases, the Kennard Principle doesn't hold. OK? Blah, blah, blah. OK, now let's get-- this is all sort of in-principle vague stuff. OK, what about visual cortex? What about all this stuff we've been talking about here? All of these specialized regions for different features and different categories, and you may notice I've now added visually-presented words on there. Remember, visually-presented, not auditorily. Auditory is a whole different thing. This is seeing words and letters. OK, so all of this organization, can this stuff move around? 
If you lose this thing, can you regrow it over there? Well, not really. As I've been talking about, if you have brain damage in adulthood, you basically lose the corresponding mental function. That's why we have all these neuropsychological syndromes. If people could relearn and just move the function over, you wouldn't have a syndrome. You might have a transient problem as you relearned. But in fact, if people get achromatopsia-- can't see color vision-- they're not going to get better, or not much. Agnosia, if they can't see shape, they're not going to get better. Akinetopsia, they can't see motion after a stroke in adulthood. They're not going to get better. Prosopagnosia, topographic disorientation, and alexia-- inability to read due to a stroke-- basically, people don't really recover from these things. There's a beautiful recent article by a German neuroscientist who had a stroke and couldn't read at-- I don't know-- age 50, 60, something like that. And so made himself an experimental subject, and was just determined to relearn to read. And he did every possible thing, and he's written about this very interestingly, and there's an article I can put on the website if anybody wants to read it. He basically retaught himself to read, but he's doing it in completely different ways from what all of you are doing. He doesn't have that bit. He didn't develop a new one of those. He developed a very different compensatory strategy that's very slow and doesn't work anywhere near as well as reading does for any of us. So basically, in adulthood, these things can't move around. So now, are we talking Kennard or are we talking Hebb? What happens if you get the damage in childhood? Well, I'm raising this question because I think it's big, and deep, and interesting, but there basically isn't much of an answer to it. It's hard to answer. I'll give you just a shred of data, but basically, I think we don't know the answer, and I'm dying to know the answer. 
I'll give you just the one paper that I know of that's relevant to this. This is a study from quite a while ago. It's the case of a patient who's known in the literature as Adam. And Adam sustained bilateral damage to his ventral visual pathway, both sides, at day one of age due to a stroke. Actually, strokes around birth are surprisingly common, like this happens. So this guy basically lost cortex in a lot of the regions that we've been talking about on the bottom of the brain that do high-level vision. OK, so he was tested for this study at age 16. Now, his visual acuity, his ability to see fine-grained stuff is not great, and his object recognition is not perfect, but it's not terrible either. He can recognize common objects from photographs and line drawings reasonably well. So he has some residual vision. But he can't recognize faces at all. So he is a fan of this TV series called Baywatch, which I don't know about. I don't know if that's like-- anyway, this study was done a long time ago. Anyway, some beach TV series that has the same set of characters, and he was obsessed with this, and he watched it for an hour every day for a year and a half. And that's just relevant because we know that he has lots of experience looking at these individuals. But when tested in the lab on pictures from Baywatch, he couldn't recognize any of the major protagonists. That's just a measure of how severely prosopagnosic he was. So that suggests that when the relevant parts of the brain, that the relevant parts are already specified at birth, and if you lose those parts, you can't just put that function somewhere else. So that suggests-- I'm not leaning too hard on this because there's just very little data. This is the best there is. So it suggests that those-- at least the general region is already specified. Can anybody think about why that might be? Why can't you just train up some other part of cortex? Say, his object recognition is pretty good. 
Why can't you train that part of the object recognition system and just say, OK, learn to do faces? Nobody knows the answer to this. Yes. AUDIENCE: I don't know about the [INAUDIBLE] it's gone completely, just maybe because throughout time very far back in evolution, it's a face region. NANCY KANWISHER: Yeah. Yes, but still-- yeah, I mean, it's clear that we have it, and we probably have it for some reason and all of that. But why couldn't you just grow a new one over in a different part of cortex? What's wrong with that other bit of cortex? What might it not have that you might need. [INAUDIBLE]? AUDIENCE: The right connection? NANCY KANWISHER: Yes! I just showed you guys that there are very distinctive connections. This is all speculation. Nobody knows why. I'm just saying that one guess is that the reason these things can't just take up residence someplace else is they need those particular connections to get the right input to process. OK, anyway, this is going way beyond the data. But in principle, people could get more data of this kind and answer this question. If I can find the relevant subjects, I'm aiming to do this. OK, so let's take one other case. Very different kind of change to ask, what happens-- so basically, bottom line of all of this is, stuff doesn't move around that much. Early brain damage to language regions, they can shift to the homologous regions in the right hemisphere. But all the other data that I know of suggests you can't just take anything and move it around a few centimeters over. At least if you have the damage in adulthood, and maybe even if you have it pretty early. OK, all right. So now we're going to say, OK, might this organization nonetheless be very different if you had very different experience? So let's take the case of congenital blindness. OK, so how is the brain organized in congenital blindness? Well, let's take V1. 
Here's this big chunk of cortex back here, nice big chunk of cortex that, in all of you guys, does vision. What does it do in congenitally blind people? Does it just sit there? Do the cells die out? Do they just go dum-dee-dum-dee-dum and they don't do anything? It's a lot of cortex to waste on all of that. Well, it turns out, astonishingly, that what visual cortex does in blind people is a whole bunch of other things, including, astonishingly, language. So you present a sentence to subjects through Braille or auditorily to blind subjects in the scanner, and you see activation of V1. Further, you might think, well, OK, whatever. Just turns on, it has nothing to do with anything. But TMS studies-- V1 is right near the surface of the brain. You can zap that region and ask if you're disrupting function, and you can interfere with language task by zapping V1 in congenitally blind people. So it's not just activated. It's doing causal work in blind people. This is mind-blowing. This is like a totally different patch of cortex. So yeah, it's hard to think of more different functions than low-level vision and high-level abstract language processing. So that suggests radical possible reorganization, in this case, with different experience. OK, what about those regions on the bottom surface of the brain? The face, place, word, and body regions that we've been talking about for so long. What do they do in blind people? Somebody already asked me before, maybe [INAUDIBLE].. Somebody over there. It's my spatial code. And there's a lot of claims that they have similar selectivity, which I'm not totally sure of, but let me show you one piece of data. I promised you that there were going to be further contradictions in the whole saga of the role of experience in wiring up these regions, so here's one more contradictory piece of data. 
OK, this is a paper that was published just a few months ago, and the title of the paper is that the development of visual category selectivity-- that means face place body regions, all that stuff-- in the ventral visual cortex does not require visual experience. OK. What? What, what, what? OK, here's what they did. They scanned-- pretty crazy experiment-- they scanned congenitally blind subjects while they heard sounds that were associated with faces, bodies, objects, and scenes. So for example, they might hear laughing, chewing, blowing a kiss, whistling sounds. Those are face-related sounds. Or they might hear scratching, hand-clapping, finger-snapping, bare footsteps, knuckle cracking. Those are body-related sounds, et cetera. So they're lying in the scanner hearing these sounds. Probably cracking up. Now the question is, do we see face, place, body, and object regions activated from sounds in congenitally blind people listening to those categories of sounds? And the crazy answer is, kind of sort of a little bit. It's not super strong. The data are not mind-blowing, but let me just show you what we have. OK, this is the bottom of the brain, back of the brain. Everybody oriented here? OK. Occipital lobe. This is where all the good stuff is that we've been talking about. OK. So this is now the sighted control subjects looking at visual stimuli. So this is a significant map, P levels. And so what you see is facial activity in red, object selectivity in green, scene selectivity in blue, purple, whatever that is-- blue. So that should look sort of familiar. Faces, lateral, scenes, medial. Objects, people debate about. I haven't talked about it much, because-- anyway, faces and scenes, so stuff to pay attention to. OK. And over here-- this map is the same. It just says, never mind if that voxel reaches statistical significance. Just plot what category that voxel responds most to. So you just see a big swath. All right. 
Now, what do we see for sighted controls listening to the auditory stimuli? Not much reaches significance. If you drop the threshold way down and look at this, maybe a little bit. These are somewhat correlated, but it's lousy. So sighted subjects listening to those sounds, not much. What do you think happens with blind subjects listening to those sounds? Well, you get face selectivity here that's statistically significant. And if you drop the threshold and look at the overall map, you see a resemblance of this map to the sighted map, the visual map in the sighted subjects, and this correlation is highly significant. So this is totally weird. It says, yes, there's a similar spatial layout on the brain of these same selectivities in congenitally blind subjects who never saw those stimuli. And that's the basis of their argument, that the development of visually category selectivity doesn't require experience. But now you may be thinking, what about that paper on face-deprived monkeys? The title of which is, "Seeing faces is necessary for face-domain formation," namely for face patches. So these two findings, these two claims in the titles are completely contradictory. So we're out of time. Nobody knows the answer to this. It's an ongoing puzzle. There are all kinds of possibilities. They're different species, they're different kinds of tests. There are many things you could say, but we're really right on the horn of a big conundrum in the field. And all I have to say is welcome to the cutting edge. It's a mess there. OK, thank you. |
MIT_913_The_Human_Brain_Spring_2019 | 13_Number.txt

[SQUEAKING] [RUSTLING] [CLICKING] NANCY KANWISHER: All right, so we're going to talk about number. I got a little carried away with the behavioral work on number because I just think it is so awesome. And I think it's, frankly, a little more interesting than a lot of the neural work. So this is going to be sort of a behaviorally heavy lecture. But let's start by thinking about why we have number and what we use it for. And the first thing to realize is that we use concepts of number and quantity like all the time. Most obviously, if you're, say, getting change at a store. I guess that doesn't really happen very much anymore. People are going to forget how to subtract because they just put their credit card or bump their phone or whatever they do. But anyway, it used to be that you handed over this stuff called money and that coins came back and that was the subtraction involved. We use it to tell time or to fail to tell time, as in my case this morning. To choose the larger of two objects, that's a continuous idea of quantity, not a discrete idea of number. To choose the shortest line at a grocery store, right, and all of those kinds of things are comparing quantities. And we also take these basic ideas of number and quantity and we build on them in modern societies to do all kinds of amazing things like engineering. Like all of modern science is highly quantitative, like all of computer science. And so these are really fundamental ideas. And animals, it turns out, are capable of mastering very simple but sophisticated understandings of number and even arithmetic computations. They can learn about order and number and quantity. OK, and they need to for lots of reasons. So here, just a brief overview of some of the situations where animals need concepts of number and quantity in the wild.
Foraging, right, so animals spend a lot of time rooting around for food over here, rooting around for food over there, deciding when to keep rooting around here despite diminishing returns and go somewhere else where there's unknown pay off, unknown amounts of food. So that's a whole math of foraging behavior. OK, so that deals with the rate of return of the food at each location and the amount and the quality. And you can imagine a whole math to optimize the amount of food. They also need to know about number and quantity when they form teams, which many animals across taxa do in different ways. So schooling fish can quickly pick out the more numerous school of fish to join. And that's what they want to do because your statistics are better if they're a predator if you're in the larger school than the smaller school, right? So your chance of getting eaten is reduced just dividing by the number of options. And then there's all kinds of animals that take into account the size of groups of their own species or other species when making decisions about how far to run or who to chase or who to predate on or who's at risk of predating upon you. So lions hunt in teams. And they have to work together. They have actually whole strategic situation where different lions play different parts like a football game. And they have to decide which groups of predators to take on based on numerical advantage. And my favorite is the n plus one frog, the Tungara frog that lives in the rainforest in Puerto Rico. And it literally one ups other frogs, the males do in trying to impress the females. And so what happens is that one frog will start calling out. One male frog will start calling out trying to sound all hot to the gals. And then another frog will one up him by doing that call but elaborating on it by adding an extra call or an extra component. So for example-- [FROG CALL] OK, so that's one dude calling out. And not to be outdone, the next guy calls back. 
[FROG CALL] And apparently, if you follow these guys, they pretty systematically add one to the previous frog's call, right, up to a point. The point being approximately four. So it's not like 100 and 101. But it happens. OK, so that's just a broad overview of some of the cases where understandings of number and quantity arise in natural environments without training. So we want to know how all this is computed in the mind and brain. And one of the foremost thinkers on this topic is Stan Dehaene, shown here. And he wrote, in a very widely cited review article and book quite a while ago, 20 years ago, he said, animals, young infants, and adult humans possess a biologically determined, domain-specific representation of number. So this is a very kind of extreme, hardcore claim. We will see at the end of the lecture that he has backed off that claim. OK, so a couple of things: biologically determined, he's kind of implying innate, right? Domain-specific, I've avoided this phrase, for the most part, because it's kind of like jargon gobbledygook. But it's actually so entrenched in our field that it's worth knowing what it is. Domain-specific is just this idea of functional specificity that I've been talking about. But you can apply it not just to a piece of brain, like, does this patch of brain process only faces? You can also apply it to a mental process even if you don't know what its actual brain basis is. So do we have special mental machinery for thinking about numbers that's distinct from our machinery for face recognition or navigation or language or whatever else? OK, so that's what domain-specific means. And it's worth knowing because you'll encounter it in other contexts. OK, so in more detail, Stan says, a specific neural substrate, located in the left intraparietal area, is associated with knowledge of numbers and their relations, which he defines as number sense.
The number domain is a prime example where strong evidence points to an evolutionary endowment of abstract, domain-specific knowledge in the brain because there are parallels between number processing in animals and humans. Again, kind of hardcore claims. So he doesn't quite say innate, but he's strongly implying innate. I mean, that's evolutionary endowment; that basically means innate, right? It's an evolved ability that lives in a particular part of the brain. OK? So who would a thunk, right? Number, right? You think of number as something you get taught in school. But no, he's saying it's really part of your biological endowment. It has a particular brain region. And all of that may be, if not completely independent of training, at least present without explicit training. OK? So that's quite a claim. So what does number sense mean exactly? Well, what Stan and others in the field mean by number sense is a bunch of things. First of all, the idea that for human adults, having number sense means they can represent large numerical magnitudes without verbal counting, right? So counting is an interesting thing. But we're going to leave it aside for the moment. Number sense is a more general idea that's going to apply to animals and infants without explicit counting. OK, so you can have some way of representing that there are a lot of things here and fewer things there. Second of all, these representations are approximate. And the ability to discriminate two of them depends on the ratio of those two, not the absolute difference. OK, and I'll show you in more detail what I mean by that. It's a deep fact about number sense and actually all of perception, pretty much. Further, the idea is that these representations are abstract. They're not just, say, a particular visual form. Like approximately 13 looks like this. No. They're going to generalize across modality, OK, and space and time.
Next, these mental representations of number can be used in operations. Even without counting and being explicitly informed, you can add approximate numbers. You may be thinking, what the hell am I talking about? But I'll show you in a second. So for example, I'm going to show you two sets of dots next. And you're just going to shout out first if the first set of dots had more, if there were more dots in the first array and second if the second array had more dots. OK, ready? Here we go. Boom. Boom. Second. Duh. OK, let's try another one. Duh. And another one. Uh huh. Another one. I noticed the volume decreasing. And I noticed some hesitancy. Actually, I'm not sure about that one. OK, so how did you do that? What did you do? Did you go 1, 2, 3, 4, 5? No. I tried to do it, so there wasn't time to do that. How'd you do it? AUDIENCE: I kind of tried to see like the density, like how close all the dots were. NANCY KANWISHER: Mm-hmm. Mm-hmm. And did that work for you? Did that work OK? AUDIENCE: It seems to be OK. NANCY KANWISHER: OK, what Jack is pointing to is a really important thing in thinking about number, which is that number is confounded with total area. How much total yellow stuff is on the screen? And it's confounded with density. And this is a big problem for people who want to do research on number. And so what they usually do is-- you can't totally unconfound those things. But you can unconfound them one at a time. So you can vary the size of the objects. And you can vary the density across trials. So no one of those cues will enable you to do it fully. This example isn't great that way because they were all the same size, right? OK, but so the point is, without explicitly counting, and God knows how you do it, you just feel like you have a sense of roughly how many. Everybody got that sense? OK, so that's what we mean by number sense-- that sense that you can just look at something and have a sense of roughly how many.
Like you don't know if it's 19 or 18, but you know it's not 13, right? OK, right. Oh, and you guys all got quieter when the numbers got closer together. OK? It gets harder when the numbers are closer together. OK, so in experiments that have quantified this, lots of people have looked at this. Here's one that I was involved in way back. Just like you did, this is the task here that you guys just did. And here are some of the data we got. So let me walk you through this. This is accuracy on a bunch of different comparisons. 16 dots versus 32 dots, people are pretty much 100% correct. OK? This is just normal human adults. 16 versus 24, great. 16 versus 20, pretty good. 16 versus 18, now we're really dropping. 16 versus 17, forget it. Can't do it. OK? So performance falls off as the numbers get closer together. OK? So that's sort of intuitive. But now let's consider-- these are all comparing to 16. Here, we compare to eight. Eight versus 16. Eight versus 12. Eight versus 10. Eight versus nine. You see the same falloff as the numbers get closer together. OK? So far, so good. But now we can ask, what determines that falloff? Is it the absolute difference or the ratio? And the way we tell is we plot both curves as a function of the ratio, and we look at performance. And we see they are spot on top of each other. That tells us that it is not the absolute difference that determines your ability to do this but the ratio of the numbers of dots. OK? It's sort of intuitive, right? But it's amazing how clear the result is. Everybody get that? OK, so this is a really deep fundamental fact about perceiving approximate number. And it's actually, more generally, a fact about perception. It's called Weber's law. And it just means that the discriminability of, in this case, two numbers, two numerosities, depends on their ratio, not their absolute difference.
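As an aside for the computationally inclined: that ratio signature falls out of a simple "scalar variability" model of the approximate number system, in which the internal estimate of n is noisy with a standard deviation proportional to n. A minimal sketch in Python (the Weber fraction of 0.15 is an illustrative assumption, not the value from this study):

```python
import random

def perceive(n, w=0.15):
    """Noisy internal estimate of numerosity n.

    Scalar variability: the standard deviation grows in proportion to n,
    with proportionality constant w (the assumed Weber fraction).
    """
    return random.gauss(n, w * n)

def p_correct(n1, n2, w=0.15, trials=20000):
    """Fraction of simulated trials on which the larger set is judged larger."""
    lo, hi = min(n1, n2), max(n1, n2)
    wins = sum(perceive(hi, w) > perceive(lo, w) for _ in range(trials))
    return wins / trials

random.seed(0)
# Same 2:1 ratio at different magnitudes -> nearly identical accuracy:
print(p_correct(16, 32), p_correct(8, 16))
# Same absolute difference of 2 dots -> wildly different accuracy:
print(p_correct(16, 18), p_correct(2, 4))
```

The first pair of numbers comes out nearly identical and the second pair does not, which is exactly the curve collapse described above: in this model, accuracy is a function of the ratio alone.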
The exact same thing holds for evaluating which of two stimuli is brighter, which of two objects is heavier, which of two sounds is louder. They all follow the same law: the ability to do that is a function of the ratio of the two, not the absolute difference. Yeah? AUDIENCE: [INAUDIBLE] with the size of the dots? NANCY KANWISHER: So in this experiment, we varied the sizes every which way and the density. As I mentioned before, you can't completely unconfound both size and density within each trial. But across trials, you can muck them up. So you can ask whether people are doing it by size or by density. OK? And we did all that here. OK, so this is not shocking yet. It's just kind of a basic, deep, clear fact about whatever our mental representation of number is, that it's this approximate thing. It's pretty good. And its precision scales with the magnitude. OK, all right, so this has been quantified in lots and lots of experiments. And this is called the Approximate Number System, or ANS. And the standard test that's been used in lots of studies to measure people's kind of number acuity is a lot like what I just showed you. You show an array like this. And you say, are there more yellow dots or blue dots? And people very quickly say yellow, in this case. And then you ask for a case like this. And they're a little slower, right? And here, you can see that the sizes have changed and have been orthogonalized. OK, so the ratio of yellow to blue dots is called the Weber fraction, right? This is this idea of Weber's law that determines your accuracy from just that ratio. And so you can measure people's Weber fraction, their ability to do this task, their kind of number precision. And what you find is, first of all, that there are very big individual differences. OK? Now, this is interesting. It's like things that we've seen in other domains. There are very big individual differences in navigational ability. There are very big individual differences in face recognition ability.
And in both of those cases as well, there are people who are just so bad at it, from an early age, that it's like a syndrome. In this case, it's called developmental dyscalculia. I think I didn't fit it into the navigation lectures. But there's a whole kind of developmental disability in navigation that's called developmental topographic agnosia. People were just always really awful at knowing where they are, right? And I did mention developmental prosopagnosia. People were just always awful at face recognition. In each of these cases, this happens in the apparent absence of any evidence of brain damage and of differences in IQ or other abilities. So it seems like each of those abilities has a very broad range. At the bottom end of the range, it really kind of affects your life, you're so bad. And it's unrelated to other abilities. And I think that's pretty interesting because it goes along with the idea that those mental abilities are really distinct parts of mind and brain. You can have a crappy number sense, and it doesn't mean that you're bad at other things. You just have a crappy number sense. It's a separate system, right? OK. Approximate number sense develops slowly. It's best at age 30. You guys are still on the upswing. We won't talk about me. This is-- what do we have here? This is Weber fraction. So the Weber fraction is what that ratio needs to be for you to be fairly accurate on whatever criteria they chose. And so a small fraction means you're better. And so it goes down. And this is age here, best at 30. And this is reaction time, which goes up for everything. What a bummer. Anyway. Interestingly, early ability with approximate number on this kind of a test predicts later math ability with the very different kinds of organized math that you learn in school. So here's a study that looked at that. They asked whether this early approximate number sense is predictive of later arithmetic ability. And so in this case, they did a task like this.
And their measure, they didn't use the task I just showed you. This is another thing you can do with little kids. You just flash this up. And you just ask them, how many dots are there? And they have to say four, right? And you just measure reaction time. It's pretty basic. OK? And so then what you do is you run this on kindergarteners. And you define groups that are slow, medium, or fast at this task. OK? So then you follow them. And you look at them later, in this case, at age nine and six years. And what you see is, even these older kids, who are defined by the slow, medium, or fast group in kindergarten, this is now their accuracy at arithmetic tasks four years later. Yeah? So it's not just some weird little task that psychophysicists made up to measure God knows what. It's predictive of your later arithmetic ability. OK? So it matters. So the speed of this dot estimation task at kindergarten is not associated with later abilities of other kinds, like Raven matrices, which is one of the standard measures in an IQ test, right? It's a nonverbal and non-number kind of task. Or ability to name digits or letters or other things that you can test kids on in however old they are, nine years. OK? So it's specifically predictive of later arithmetic ability. Everybody with me? So it matters. All right, and that suggests that there would be ways to intervene in dyscalculia. Potentially, you could catch the kids early who are destined to have a hard time and maybe figure out what you could do about it. And there are efforts underway to do that. OK? OK. OK, so I'm going to show you. We're exploring these various number abilities. I'm going to show you something interesting about symbolic numbers. So far, we've been telling you about nonsymbolic numbers. That means just dot arrays. Now we're going to deal with symbolic numbers. I'm going to flash up a bunch of numbers. And you're just going to say bigger if it's bigger than 65 or smaller if it's smaller than 65. Really easy. 
But you're going to shout it out loud and clear. Ready? Here we go. AUDIENCE: Smaller. NANCY KANWISHER: Good. AUDIENCE: Bigger. NANCY KANWISHER: Good. AUDIENCE: Smaller. Bigger. Smaller. Bigger. Smaller. Bigger. NANCY KANWISHER: OK, did you guys see what happened there? Did you feel what happened? When the numbers get closer to 65, you're slower. Now you think about it, why the hell is that, right? If you run this in Matlab, it's not going to take longer to tell you that 63 is smaller than 65 than it takes to tell you that eight is smaller than 65, right? I assume. I haven't tried it. But I doubt it. So what does that mean? That means that even when you are dealing with symbolic numbers, numbers that you have this whole elaborate edifice you've been trained on how to operate with these guys, especially you guys, you are still invoking some kind of notion of the continuous quantity. You haven't totally left that idea behind and moved off into some abstract space. You're still, even in doing this very literal, exact symbolic number task, you find it easier when the numbers are farther apart than when they're closer. Yeah, Talia? AUDIENCE: Could it be because of the number you chose? So if you chose the numbers 60, let's say, I feel like we read left to right. And they maybe have a good concept for the number of digits that we see. So when we see a number like 62, we have to read both the digits instead of just the one. NANCY KANWISHER: Yeah, but all the ones I showed were at least two digits. AUDIENCE: Yeah. But when you read, like when you see a number like 25, you see the two. And then you automatically like know that. NANCY KANWISHER: OK, fair enough. OK, that's a good counter explanation. But you guys were slow even with 58. I think, right? We could test that. I'm pretty sure all this has been tested pretty carefully. I don't know this literature totally thoroughly. But I doubt-- it's a good alternative account. And there might be some effect. 
But I think it's-- oh, in fact, in fact, actually, there is, yeah, I have data coming up next. But right. Blah, blah. OK, here's the data. OK? It's pretty continuous. So I think your good, plausible alternative doesn't seem to capture very much of it. OK? So yeah, this is what you guys just did. And does everybody get how this kind of reveals that even when you think you're doing this kind of more symbolic abstract thing, you're still tapping into some kind of continuous notion? Yeah? OK. So that says not only does your ability to do that in kindergarten predict your ability to do arithmetic later, it says, even now as highly trained MIT students who do all kinds of much more sophisticated math than this, you're still invoking that same kind of continuous sense of approximate number or something like it. OK. All right, so where have we gotten? We started with this checklist of what number sense might mean. And I've argued that you adults can represent large numerical magnitudes without verbal counting, that these numbers are approximate, and that your ability to discriminate them depends on the ratio, not the difference. And I've sort of loosely told you that these experiments are generally done unconfounded from things like area and that they refer to the discrete number. OK, what about these other questions here? I haven't really shown you how abstract they are or whether you can actually use them in arithmetic operations. OK, so how would we tell that? Well, here's an experiment that we did way back. We did the very same task I did on you guys before-- which has more-- except the first thing was an array of dots. And the second thing was a series of tones. OK? Series of tones presented faster than you could count. Beep, beep, beep, beep, beep, like that, right? OK. And so you might think that if people are doing some literal perceptual thing that this would be just like freaking impossible, right? But it's not.
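The earlier aside about Matlab is easy to make concrete: a machine comparison is distance-independent, whereas human reaction times show the distance effect just demonstrated. One classic way to describe the human pattern is Welford's logarithmic function from the number-comparison literature; the sketch below uses made-up constants, purely for illustration:

```python
import math

def rt_model(n, ref=65, a=300.0, k=100.0):
    """Toy Welford-style model of symbolic comparison time, in ms.

    RT grows as n approaches the reference (assumes n != ref).
    The constants a and k are arbitrary, not fit to the lecture's data.
    """
    larger, smaller = max(n, ref), min(n, ref)
    return a + k * math.log(larger / (larger - smaller))

# A computer's comparison doesn't care about numerical distance:
print(63 < 65, 8 < 65)
# The model, like the class, slows down near 65:
for n in (8, 58, 63):
    print(n, round(rt_model(n)), "ms")
```

The model's predicted times rise smoothly as n closes in on 65, mirroring the continuous falloff in the data, while the raw `<` comparisons are all answered in the same constant time.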
Accuracy is just about the same, maybe a hair lower, but almost the same with the cross-modal comparison of which has more than with the within-modality one, visual dots to dots or tones to tones. This is dots to dots and tones to tones. And that's across. It's a little bit surprising. So that shows you that whatever you're tapping into is a pretty abstract representation. It's not tied to vision. It's not tied to hearing. And it also completely eliminates worries about density or area or stuff like that because that doesn't work here at all. OK? All right. OK, can you do operations on these? Sure. Why not? You can give people a dot array and a dot array and then tell them to add and ask whether the sum of those is greater or less than that. Let's try it. OK, here we go. Everyone ready? Consider: is the sum of this plus this greater or less than this? AUDIENCE: Greater. NANCY KANWISHER: Yeah. OK? And I really didn't leave you time to count. And so whatever you were doing in adding, you weren't adding symbolic numbers. You were adding these approximate amounts. OK? Well done. And then we could go crazy and do it across modalities. I'm going to ask you to add dots to tones and ask whether the sum is greater or less than that. We won't do it. But it turns out, people are just as good at that. Amazing, huh? So where has this gotten us? This told us that whatever this approximate number sense is that we all have, it's damned abstract. You can compare it across sensory modalities pretty much as well as within. And you can perform operations with it. You can do addition. And you can also do subtraction just as straightforwardly. OK? So that's pretty cool. But in all of these studies and the demos with you guys, these are done on people with years and years of training in arithmetic. And so we really want to know, are these things-- is any aspect of this system innate? Is it present in very young infants? And to what extent do animals have these abilities? OK?
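The approximate-addition demo fits the same scalar-variability picture: if every estimate is noisy, then the sum of two noisy estimates is just another noisy quantity that can be compared to a third, with no counting anywhere. A sketch under that assumption (the Weber fraction of 0.2 is again made up for illustration):

```python
import random

W = 0.2  # assumed Weber fraction, for illustration only

def estimate(n):
    """Noisy internal estimate of numerosity n (scalar variability)."""
    return random.gauss(n, W * n)

def judge_sum_greater(a, b, target, trials=20000):
    """Fraction of trials on which a + b is judged greater than target,
    using only noisy estimates -- no symbolic arithmetic involved."""
    wins = sum(estimate(a) + estimate(b) > estimate(target)
               for _ in range(trials))
    return wins / trials

random.seed(0)
print(judge_sum_greater(10, 12, 15))  # sum clearly larger: well above 0.5
print(judge_sum_greater(10, 12, 40))  # sum clearly smaller: well below 0.5
```

And as in the behavioral data, the judgment degrades gracefully as the sum and the comparison set get closer in ratio.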
Well, how would we find out whether they're present in infants? Well, there's a bunch of ways. But looking direction and looking time are the key cues you have with newborn infants. And so here's a study that was done on four-day-old infants. And what they did was they presented-- they had a familiarization phase. This is done cross-modally. OK? So they present either sets of 12 sounds-- tu, tu, tu, tu-- 12, right, of those, or ra, ra, ra, ra, present a bunch of those to infants. Or they present sets of four taking the same total duration-- tu, tu. It's just a coincidence that it's tu. This is, I think, done in French. So anyway, the infants won't be confused by the sound tu. So during that, you then show the infants these arrays. And you ask what they look at more. OK? They're not told the task. There's no way to tell them a task. It's just something they do. And what you find is, in the four versus 12 case like here, that's four versus 12, the infants look more at the congruent number than the incongruent number. OK? So again, they're comparing across modality. They're hearing some number of syllables. And they're selectively looking at the corresponding number of visual forms. No instruction. No nothing. Four days old. Amazing. OK? So they can do that if the comparison is four versus 12. They can do it if it's six versus 18. But they kind of can't do it very well. I mean, it's significant, but it's not very good if it's four versus eight. OK? So they have some sense of number. But it's very approximate. Yeah? AUDIENCE: Did you say they looked at the one that matches the number, or they hear the sound that goes with it? NANCY KANWISHER: Matches. That's what congruent means-- match. Looking time on congruent versus incongruent. AUDIENCE: Isn't that kind of different from-- NANCY KANWISHER: From adaptation. AUDIENCE: Yeah. NANCY KANWISHER: It is. It is totally different from adaptation. And herein lies a classic annoyance for developmental psychologists.
Because sometimes kids match. And sometimes they show adaptation. And you kind of don't know. Sometimes you don't know which way it's going to go. I don't know. Heather, do we have any insights about how you know which way it's going to go? Or you just try an experiment and you find out? Yeah. Yeah. It does mean you have to be careful. Because if you run a whole experiment on a smallish number of infants-- and it's usually hard to get enough because people have to drive in with their kids. And this, how do you find them? And there's other developmental labs who have all the kids. And it's like you're always running experiments with barely enough kids, right? And so that means there's a problem here. Because if you would take the result in either direction, that's a statistical problem. You gave yourself two shots at it, right? And so you have to statistically discount your finding because it could have gone either way. That is, if your prior hypothesis is it has to go in one direction, you're on stronger footing. But you just suck it up and run a few more kids. Yeah? OK, so good. So this also shows that ratio dependence, right? They're better at it with the big differences than the small differences. OK? OK, so that's infants having this very, very early, at least in a very crude form. What about animals? OK, so let's meet Mercury the macaw. Here's Mercury the macaw. [VIDEO PLAYBACK] - To a human, the order of the symbols shown on the above screen is obvious. We have all learned from a young age which of these symbols represented-- [END PLAYBACK] NANCY KANWISHER: Oh, what a good birdie. [VIDEO PLAYBACK] - --the lowest number and which the highest. However, for Mercury, the blue-headed macaw we see here, he has had to learn by trial and error the specific order to press these symbols to get a piece of food. It took him quite a long time. Mercury's brother Mars can do a bit better than that. He has begun to learn the more general concept.
That is, the symbols will always have an order. So when presented with a new list, he was able to rapidly decipher the order of new symbols, in this case kingfisher, warhead, hawk, hummingbird. Pressing randomly on the screen would have led to him receiving the correct answer less than 1% of the time. He's clearly doing better than that. This is interesting, as it shows that very basic aspects of cognition related to numbers are present in an animal that is very distantly related to humans. [END PLAYBACK] NANCY KANWISHER: OK, mostly, I just showed that because it's cute. But it's impressive ordering. OK? Still, he's kind of slow. I think it only goes up to four things. OK, so now we're going to meet the chimp Ayumu, who lives in Kyoto and who's the son of a very famous chimp named Ai, who was like a number whiz. But anyway, here's Ayumu. [VIDEO PLAYBACK] [END PLAYBACK] NANCY KANWISHER: I know. I can only catch the first three. And then it's like I can't even tell if he's correct, except from the tone. Pretty good. [VIDEO PLAYBACK] [END PLAYBACK] NANCY KANWISHER: Oh, got one wrong. Anyway, mostly gets them right. Pretty impressive, huh? OK, so that's cool. And order is clearly relevant. It's part of the space. But it's not the same as quantity or number, right? OK, so now we're going to skip to the honeybee, just for kicks because this paper just came out a month ago. And I think it's awesome. Honeybees have 1 million neurons. And if you're impressed, don't be impressed. Remember, like, a mouse has 100 million. And we have 100 billion. OK? Five orders of magnitude between the bee and us. So 1 million is not that many, right? OK, and further, these guys branched off from us, evolutionarily, a very long time ago, 600 million years ago. So they're tiny little guys, not very many neurons, totally different kind of thing. Who would think they have any kind of numerical abilities? Of course, they wouldn't, right?
Oh, and yet, they can do arithmetic. OK, so here's the design. So here's what these guys did, this wonderful lab in Australia. I love this stuff. OK, so they trained these honeybees. This was a chamber like this. Honeybees fly into the chamber. And they see a number in a color right here. It's blue. And it's two. OK? And then there's a little entry hole. And they can choose to play or not play. If they go into the chamber, then they're in this interior space, where they get to make a choice between that pattern and that pattern. OK? And there's a little pole underneath each pattern. And if they land on the pole, they can get some liquid. OK? So in the blue case, they're rewarded over trials for learning that if it's blue, that means they should add one to this number. And hence, that would be the correct answer. And that's the incorrect answer. OK? That would be amazing. Yeah? And if they choose the wrong number, they get some nasty quinine. OK? All right, in contrast, if the shape out front is yellow, then they have to subtract. So that means they have to keep track of this number and go in there and choose that number minus one. All right? OK, so keep in mind, oh, so they balance the total surface area. It doesn't look like it in this figure. But it says in the methods section they did. I believe them. And further, realize that when the bee is in here, he has to be holding that number in memory and adding one to it or subtracting one from it to figure out what to choose here. So this is pretty sophisticated. It's not like they're side by side, right? OK, and yet, they're pretty good at it. Here's accuracy over training trials. By 100 trials, they're over 80% correct. Pretty amazing, isn't it? OK, so then, in any good animal or infant cognition study, you want to show whether it generalizes. So then they test the same ability with new numbers. I forget what this range was. But it went one to four or something like that.
And then they go to five or six, just to generalize the numbers, and different shapes than were used in the training trial. And the accuracy is around the mid-60s. It's not quite as good. But it's still very good. They're not being reinforced here. And they're still doing the task. Now, what are the pink and blue bars? OK, so you might think, well, is a bee just going to the one that has more or less? So instead of learning add one, he's learned go to the larger number, larger than the one that you saw at the entry chamber, or go to the smaller number. But no, that's not what they're doing. Because the pink bars show the performance when both of the options are in the same direction, right? So the thing is blue. So he's doing addition. And he sees a two. And he goes in, he has a choice between three or four. He can only do that if he knows the difference between adding one and just taking the thing that has more, right? And he's well above chance in the pink bars. OK, so he's not just saying, choose the one that has more or the one that has less. He's adding one, pretty accurately, I mean sort of accurately. Better than chance. OK? All right, now that's pretty cool. But adding one, subtracting one, it's cool. But do they really have abstract concepts? Do they understand the concept of zero? OK, so paper was published last year arguing that they have the concept of zero. Here's how it goes. Same lab trains them, in this case, just on greater than or less than. So the bees are given a choice like this. And one set of bees is trained on greater than and one is trained on less than. So this set of bees trained on greater than chooses this one and then this one and then this one and so on. OK? Another set of bees is trained to do the opposite. All right? OK, so that's the training phase. Then we want to test in a generalized situation. So now they're tested with different shapes and different numbers, so threes and fours were-- maybe threes weren't used. I forget.
There's some numbers in here that were not used before. OK, so you test them with new shapes. And here is accuracy for less than or greater than. Chance is 50%. And they're 75%. Not bad. OK? So they get more than or less than. OK, now we want to test the generalization. OK, oh, yes, sorry. This is where they changed the range of numbers. So the bees had not dealt with sixes before. So now they still have to do greater than or less than with a new numerical range. And they're still well above chance. OK? So then finally, they test zero. OK? So the bees that have to do less than have to say which of those is correct, all right? And you can see-- where did it go? Where's the zero one? Right here. And they're well above chance for both less than and greater than. OK? So we could quibble about whether that's a concept of zero. But the cool thing is these bees had not been tested with a blank card before. And they spontaneously get the idea that that is less than one or two or three or anything else. Yeah? So arguably, they have a concept of zero with no training and only 1 million neurons. OK, so all of that is in trained animals. And we can see some of these kinds of abilities even with untrained animals. And I will tell you just one more animal experiment because it's my all-time favorite ever and the simplest one in the whole set. This was done a long time ago by Church and Meck. So here's what they did. This is done in rats. They have a training phase, where they train the rats to press the "two" lever if they see two light flashes or hear two sounds. And they press another lever, the "four" lever, if they see four lights or hear four sounds. OK? That's kind of basic animal training. It's a rodent. They're good at this. No big deal. But then after the animals have learned this, they spontaneously throw, in the testing phase, a trial with two lights and two sounds. And the rats press the "four" lever, first time. No training. No nothing. Spontaneous addition.
Spontaneous abstraction across tones and lights. Pretty awesome, huh? So it's not just that you can reveal these abilities with elaborate training. OK, so we have all of these different kinds of evidence of an abstract number sense. And they're present in newborn infants. And they're present in animals. And they just seem to be part of our basic cognitive machinery, machinery that we share with animals. So how are they implemented in the brain? OK, so a little neuroanatomy reminder of some basics. This is a weird angle of a brain. It's kind of like this, kind of back of the head, front of the head, temporal lobe, frontal lobe around the corner. Everybody oriented? There is one of the longest sulci in the brain that starts about here. On me, it goes like this. And it curves around like that. It's back here. It goes up. And it curves over. OK? It's called the intraparietal sulcus. And I mention that just because it's in a lot of the number literature. You saw it in the paper you guys read for last night. And above it is the superior parietal lobule. And below it is the inferior parietal lobule. And none of that matters other than that a lot of the action is in the parietal lobe, particularly up here around the intraparietal sulcus. OK? All right, so studies that have looked at this include some classical studies of patients with brain damage and something called acalculia. That means loss of ability to calculate. OK? And so there's two basic kinds of acalculia that are really interestingly different. So there's one acalculic patient who has left parietal lobe damage, that same region I just talked about. And this person is bad at approximation. So the kinds of dot array tasks that I gave you guys, this guy, after brain damage right here, is really bad at that kind of stuff. And interestingly, he's more impaired on subtraction than multiplication. So for example, he's worse at "What is seven minus five?" than at "What is seven times five?" So think about that for a moment.
And think about what that might mean, especially in light of another acalculic patient who has a very different presentation. He's got left temporal damage. His approximation is fine. So all those kind of dot array kind of tasks and tone tasks that I told you about, he's good at. This guy shows the opposite. He's more impaired at multiplication than subtraction. So do you guys have any-- oh, so first of all, you put these two patients together, and what do you have? AUDIENCE: Double dissociation. NANCY KANWISHER: Yeah? What? AUDIENCE: Double dissociation. NANCY KANWISHER: Double dissociation. Right. Two patients with opposite patterns of deficit, right? If we just had one, then we could maybe tell a story. But we wouldn't really know. But we have two, and they have opposite patterns. And now that really kind of constrains the interpretation. David. AUDIENCE: Can the first person add fine? NANCY KANWISHER: Good question. He's not very good at adding. AUDIENCE: Oh. NANCY KANWISHER: Thoughts? What do you think it means? AUDIENCE: It might mean that the addition and subtraction [INAUDIBLE] use the same like-- NANCY KANWISHER: Used what? AUDIENCE: Like they use the same area. NANCY KANWISHER: Yeah. So one hypothesis is that addition and subtraction are just a different beast than multiplication. Different parts of the brain do those things. Totally possible. But there's a kind of more intuitive interpretation. AUDIENCE: Well, I think people tend to memorize times tables. NANCY KANWISHER: Bingo. Bingo. Often, like the right answer is something that's like right in front of you. Just think about, what is it like to do that? How do you do seven times five? You don't think about the meanings of the numbers. You just blurt out 35. Right? Right? It's not a very rich number task. I mean, it's a number task. But it's a concrete, rote, verbally memorized thing. Right? And so the idea is that those verbalized concrete number facts are in one domain.
One kind of brain damage would impair those. And it's a different thing to impair the actual representation of numerosity. And the idea is that this person is the one with the real damage to the approximate number system. Right? Yeah? AUDIENCE: Does that mean that patient has no problem doing the seven times five normally? But when they're asked to sum seven five times, they're not very good? NANCY KANWISHER: Yeah. Well, I think the approximate number system might have a tough time dealing with summing seven five times. So yeah, it has limits, right? It can add two approximate things. But you might really lose your mind if you tried to do a whole string of it. Yeah? Yeah? AUDIENCE: If he was working on the same digits, like maybe seven plus seven or seven minus seven, you'd expect him to maybe do that fairly easily if that's the case, right? NANCY KANWISHER: Say more. AUDIENCE: If it's a case that his approximate-- NANCY KANWISHER: Yeah, yeah. Yeah. AUDIENCE: He should be able to do seven minus seven fairly easily. Because you know that when you subtract the same things, you're going to get zero. NANCY KANWISHER: Yes. But it's an interesting question, actually, whether that would be part of that system or whether that's a kind of more abstract formal thing you learn. So I think it depends how you do it, right? So one of the ways-- I didn't talk about this. But those same experiments adding, say, adding dots to dots, those were also done with little kids. And there, what you do is you show-- I don't really remember what it is. But you show some array of things, and you hide it behind a screen. And then you show another array and hide it behind the screen. And then you reveal the screen. Like, how many things are there? That kind of stuff works spontaneously. So it might tap into that system. I think that's an interesting question. I'm not totally sure how it would go. Yeah? AUDIENCE: So the second person is bad at recall across the board?
Or is it just with numbers? NANCY KANWISHER: Just with numbers. Yeah. I mean, it's always a little bit messy. The patient literature always has some other random stuff. And how do you account for that? And there's lesions in other places. But to a first approximation, these are reasonably number-specific deficits. All right? OK, so that's a bit of a hint from the neuropsychology literature. But there's mainly these two patients and some other messier ones. And so one wants to use neuroimaging to get a better picture of it. Of course, that's been going on for a long time. And so here's one of the early papers from Stan Dehaene's lab. This is a top view of the brain. So this is this parietal zone. And this is what is often referred to as the horizontal segment of the intraparietal sulcus. hIPS to its friends. And it's that sulcus I talked about that goes up like this. It kind of curves over. And it's like this bit right there. OK? That little orange strip. And so what he's saying in this review article from a long time ago is that that region is activated only when you do calculation. He means basic arithmetic in this case. Not when you do all these other things. But when this paper came out, I'm like, yeah, right. I don't think so. I can't tell you how many experiments I've run and seen big ass activations right there on tasks that have nothing to do with numbers. So it looks good. Sounded good. He got away with it for a while. And it's not true. Yeah? AUDIENCE: So is the region sort of high enough that you can zap it? NANCY KANWISHER: Terrible. Being filmed too. He's a really smart, nice guy. It's just, like, when people are a little bit fast and loose and make a big claim, which you can tell at the time isn't quite right. It's a little bit annoying. Anyway. Sorry. Go ahead. AUDIENCE: Yeah. Is the region high enough that you can zap it? NANCY KANWISHER: Ah. We're getting there. Yes, indeed, you can. But let's do a little more basic stuff first.
OK, so the claim is that this hIPS thing is the locus of the approximate number system. That was the early claim. OK. And further, the claim implicit in this article, in this figure, is that it's involved in numerical representations only, not any of these other things, grasping tasks, manual tasks, eye-movement tasks, et cetera, et cetera, et cetera. OK, really? And as I mentioned, me and lots of other people had seen what looks like the same regions activated in all kinds of other situations, especially those involving reasoning about spatial location. You guys got short shrift six weeks ago. I meant to talk about the parietal lobe and its role in high-level vision. And it just somehow went by the boards. But all this stuff is involved in aspects of vision, particularly spatial vision, knowing what is where. OK? And so there's an alternate view, which is that there's no specific brain region that's specifically and only involved in discrete number per se. Instead, there's a common region for processing magnitude of almost any dimension, whether discrete or continuous, right, that approximate number system or your exact number system, and that it builds on previous representations of space. OK? For example, the number line, right? So you guys read this article for last night. And just to review what the key point was, this is, again, the kind of aerial view with the parietal lobe here. And that's the hIPS region, yeah, that was in the previous slide. And you can see it's this horizontal part of that sulcus way up in the parietal lobe. And the yellow and green means that there's overlapping activation for both symbolic calculation, that's like with symbols, and for nonsymbolic calculation. That's like dot arrays, stuff like that, right? And so it's activated for both of those. And the point of this paper is, first of all, that there's also overlap with the eye-movement system, right?
And so here, they're really asking, is this spatial representation kind of co-opted in your representation of number using a kind of spatial number line, right? It makes perfect sense. Animals need a representation of space. It's like extremely basic, right? And once you have that, you can co-opt it and represent numbers in that same spatial code. And as you guys all read, the cool result from that paper, which is also from Stan Dehaene's lab, is that when you take that region right in there, you take those voxels in there, and you train them on making leftward versus rightward saccades. So now you have a classifier that looks at the pattern of activation there, and can distinguish a leftward versus a rightward saccade. I'm just reviewing this. Hopefully it was clear enough. That same classifier can then distinguish subtraction versus addition. Did you guys all get that from the paper? Yeah? It's pretty cool, isn't it? Anyway, so that's kind of nice evidence that the same spatial system that's used in spatial attention and eye movements has been co-opted to represent numbers as well. OK. All right, so I just wanted to incorporate that. In case anybody missed what the paper was about, those were the key points. Other early studies have asked this question more directly: whether different kinds of magnitude are all represented together in the brain. And this study is quite clever. They used a variant of the fact that I showed you guys before. Remember when saying whether the number is greater or less than 65, it's harder when it's closer to 65 than when it's farther from 65. OK, even though I was showing you symbols, that was the key thing, right? So that's called the distance effect, right? And that's true for all comparisons. And so this study exploits that distance effect. And they use stimuli like this. And they ask, which one is larger? And it could be larger in absolute size. Like the two is larger here.
Or it can be larger in number, meaning the seven is larger. OK, so in different blocks, you're saying, which one is physically larger? Which one is numerically larger? Which one is brighter? That would be this one here. And then they just have a control with letters. OK? And so then-- sorry. The design is slightly complicated. So there's these three main tasks and a control task. But then within each, they have the difficult version and the easy version. And the difficult version is when the comparisons are close, two similar brightnesses, two similar numbers, two similar sizes, versus two more different ones. OK? So that's what all this garbage shows. OK, so then you do that subtraction. You look, and you say, OK, what parts of the brain are more active when you do the difficult versus easy number comparison? Saying which is larger is not that difficult, but two versus three is harder than two versus seven. OK? And so what they find is that similar regions of the brain are active for all three of those kinds of comparisons. OK? So it's not like you get just one for symbolic number or for the two magnitude tasks. All of those different kinds of magnitude activate the same regions. And so the conclusion is that number and size and brightness engage a common parietal spatial code, OK, an overlapping region for all of these. Does that make sense? OK. And so that shows, in this case, that it's not just symbolic number but also magnitude. Which one is bigger, right? It's the kind of continuous magnitude idea. OK? OK. Right. So one worry is that, in each of these cases, they're comparing a difficult condition to an easy condition. And so maybe the regions they got are just engaged in any kind of task difficulty. Maybe if they had done a syntactic task on language stimuli that was difficult versus easy, they would get the same things. From this experiment, we don't know. We'll talk more about that in a couple weeks when we talk about language, right?
But here's at least one control that sort of deals with that, and which does a TMS experiment, as you suggested a while back. OK, so this is a kind of cool experiment. I mean, it's weird, but sort of cool. OK, so what do they do? They use-- OK, so they have, again, an easy task and a hard task. Again, it's the thing greater or less than 65. Not very hard, right? The hard one isn't that hard. But so is it greater or less than 65? And it's either a symbolic number, or it's a dot array. You can't really see it, but there's a bunch of teeny dots in there. Or in the other condition, they have to say whether that ellipse is more horizontal or vertical. OK? And so you spend a lot of time, before you run the experiment, measuring reaction time and accuracy to balance difficulty within the easy conditions and balance difficulty within the hard conditions. OK? So then what they do is they do something called offline TMS. OK? Offline TMS, I didn't talk about this much before. In the standard kind of TMS, you stick the coil right on the subject's head. There's a subject doing a task on a monitor. And somebody is standing there holding the coil. It's really kind of rudimentary. And right at a key point of the trial, you deliver a zap to disrupt that part of the brain. And you find out how much that interferes with performance on that task. That's the standard online kind of TMS thing. But there's also offline TMS, where you zap people at a slow rate for like 10 minutes. And then the idea is that you've kind of generally disrupted that piece of brain for, say, another 10 minutes. It's a little bit scarier. But it's just like 10 minutes, right? OK, and so that way, you don't have to be quite so fancy about the precise timing. You can just kind of reduce its effectiveness for a whole 10 minutes. OK, so that's what they did here, offline TMS. So you sit there and get zapped for 10 minutes slowly here. And then you do some math tasks. OK.
OK, so what they find is that zapping the left intraparietal sulcus disrupts the magnitude tasks on both numbers and dots. But it doesn't mess up the shape tasks with the ellipses, even though the ellipses are balanced for difficulty. OK? So that's at least a little bit of an argument that it's not just about generic difficulty, at least in this experiment. OK? All right, I think that's what I just said. So that's some evidence for a role of at least the left intraparietal sulcus in both symbolic and nonsymbolic number. Again, nonsymbolic number just means dots without Arabic numerals, not just any difficulty. All right, so that's all very nice. But it's crude as hell, right? We found these big, blurry chunks of brain that are implicated. And we zapped a big chunk of brain and slightly reduced performance. It's like, OK, better than nothing. But it's not very impressive. What are the actual neurons doing in the brain? Well, now it becomes really important and useful that this approximate number system that I've been talking about is also present in animals. And that means we can use animal models. And we can record from individual neurons in the parietal lobes of monkeys when they do number tasks to find out what actual neurons are doing. OK? And so there's a guy named Andreas Nieder, who's been doing this for a long time. And he has some pretty remarkable data. And so he starts by training monkeys to do a number task. So here's what the monkey sees. Monkey sees a sample, some number of dots. And then there's a memory delay, in this case, one second. And then he has to do a matching task and choose that array, not this array. OK? So he's got to remember that there's three dots and choose the right three. And notice that the sizes and configuration of the dots have changed. So he has to do something more like remember three in whatever mental monkey-ese version of three exists. OK? OK, simple matching tasks.
Then he records from neurons in the parietal and frontal cortex in monkeys. And he finds neurons that are sort of specific for number. OK, so here's time in that task. This is the time that the sample is presented right here. And here is the response of a single neuron that likes two more than anything else. OK? And that two, notice, comes in all different kinds of spatial arrangements and sizes of the dots. What's common about all of them is that it's two. Next best, it likes four. OK? And it generalizes across number from there. So it's approximate. It's not like high for two and zero for everything else. It's got a kind of generalization gradient. But it prefers two. OK? So that's a number neuron. Yeah? OK, here's a six neuron. This neuron likes six. Here it is, same task, during the presentation of trials here. Red is six. Next closest is like eight and maybe 10. So it also generalizes as well, but it responds more to six than anything else. Pretty awesome. Huh? OK, now that doesn't tell us how it was computed, right? So finding a single neuron that does something spectacular is thrilling. We all love it. It's great fun. And we're closer to the neural circuit because we found a neuron that seems to be part of the action. But notice it doesn't tell us how that neuron made that computation, right? What are the circuits that led into it, that enabled it to be specific to six or two? But it's still cool. OK, but next, we want to know, how abstract are those neurons? This is just dot arrays. OK? And they're just presented in one array. So next, Andreas Nieder trains his monkeys to keep track of the number of things that happen over time. It's not a spatial array. It's a temporal sequence. OK? So the monkey has to see that there are four things coming in here and then choose the array that matches with four. OK? See how this is a way to ask how abstract those number neurons are? Are they really representing the abstract magnitude of two or six or whatever it is?
Or are they representing something about the shape of a two-type array or a six-type array? OK, and they can also test over different modalities. So now they present four different tones. And the monkey has to choose the four dots. OK? Now it's both over time and over sensory modality. So how abstract are those number neurons? OK, they're pretty abstract. So here are a few number neurons. Cell one is in the blue colors. And here is its response in light blue to dots, one dot, two dots, three dots, four dots. And here is the same cell responding to sounds. It's specific to one, both for sounds and dot arrays. Isn't that cool? And you see the green cell is selective for two, whether in dots or sounds, and so forth. Pretty cool, huh? So these are very abstract number neurons. Does that make sense? OK. OK. OK, now these monkeys are trained on number tasks. So you might think that these kinds of abstract number neurons-- and they're trained to do the generalization from tones to arrays. So maybe those neurons wouldn't live in their brains if they hadn't been trained to do that. But I don't have time to show you all the data. But in subsequent work, the same team has recorded from monkeys before any training. And you find similar number neurons. So it does seem like these are things that exist in-- and remember that's consistent with what I said before, which is that a lot of these number abilities are present in animals without any training and in newborns. And so it makes sense that some of those neurons would be around even in advance of any training. AUDIENCE: How many neurons did they have to look at to find these? NANCY KANWISHER: Oh, that's a good question. I forget what percent it is. We could look it up in the Nieder paper. Yeah. It's not like you record from thousands, and you find 10, right? Remember they know where to look from, first, the human lesion literature and then the human functional imaging literature.
And then there's also monkey neuroimaging literature where you can have monkeys doing dot tasks. So you can know where to look. Because the brain's a big place. If you're just sticking electrodes all over, God help you, right? So they know to go up in that parietal lobe if that region is homologous between humans and monkeys. And there's a lot of other evidence that that region is homologous. So they know how to get in the right zone. And I'm sure, once in the right zone, they're not all number neurons. I'm sure it's a relatively small percent. But it's not a trivial percent. Yeah? AUDIENCE: Do we have sense for how fractions are represented? Because all of these seem to be discrete. Or [INAUDIBLE], any ideas? NANCY KANWISHER: Yeah. Well, it's tricky because, certainly, at the single unit level, you'd have to either find some natural version where monkeys think about fractions naturally or teach them about fractions, which would be really hard. Because, for some reason, fractions are just really hard. Like all the people who study math education, it's like the key problem is, how do you get kids to understand fractions? I don't know why they're such a tough thing. But apparently, it's like a real dividing line, the kids who get fractions and the kids who don't. So I'd have to think. But, occasionally, there are patients with electrodes in their brains. And one could look at that. Actually, I took this slide out, but there's a paper that came out last year where they found number neurons in humans as well. I took it out because I didn't know how to integrate it in the lecture. Because the number neurons are deep in the medial temporal lobe, far from the parietal lobe. And it's like, I don't know how that fits. I don't know if that's the same thing or something else. But anyway, there are at least some number neurons that have been found in humans. And you could, in principle, look for number neurons up in the parietal lobe. 
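A toy model can make the shape of those number-neuron responses concrete. Nieder's tuning curves peak at a preferred numerosity and fall off with a generalization gradient that is roughly Gaussian on a logarithmic number axis; the width value below is an illustrative assumption, not a parameter fitted to his data.

```python
import math

def tuning(n, preferred, width=0.35):
    """Toy number-neuron tuning curve: a Gaussian on a log number axis.
    The width is an assumed, illustrative value, not fit to real data."""
    return math.exp(-((math.log(n) - math.log(preferred)) ** 2)
                    / (2 * width ** 2))

# A model "six neuron": its response to different numerosities.
six_neuron = {n: tuning(n, preferred=6) for n in (1, 2, 4, 6, 8, 12)}
for n, r in sorted(six_neuron.items()):
    print(f"{n:>2} items -> response {r:.2f}")
```

On this model, the neuron fires most to 6, and nearby numerosities (4, 8) drive it more than distant ones (2, 12). Because the axis is logarithmic, 8 drives the six neuron a bit more than 4 does, since 8:6 is a smaller ratio than 6:4; that is the same ratio-dependence as Weber's law.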
In fact, I have a guy I'm trying to collaborate with. I'm begging him to collaborate with me. He's got two people who have arrays of electrodes chronically implanted right up in this region here because they are paralyzed. They had spinal damage. And like in Michael Cohen's lecture, he's got arrays of electrodes where he's trying to use the neural responses there to direct robot arms. And so there's two of these people who have these chronically implanted things. I'm like, oh, please, please, please, can I collaborate with you and get responses from your patients' neurons? Was there a question over here a moment ago? Sorry. I thought I saw a hand go up. OK, so let me wrap up. So I've been arguing that this approximate number system is shared with animals and newborns. It's a pretty basic system that lots of animals have. It follows Weber's law, which you should remember. I don't like testing you guys on esoteric facts. But Weber's law is a very fundamental fact. And you should know it about perception and, in particular, about number. It tells you that the ability to discriminate two numbers goes as the ratio, not as the difference of those numbers. And these approximate magnitude representations, measured both behaviorally and neurally in humans and animals, are very abstract: they generalize across the particular objects, the modality, whether the items come in over space or time, et cetera, and whether they're represented in symbols or arrays of items. OK? I mentioned that there are big individual differences in humans in the precision of the approximate number system. And that is predictive of later arithmetic abilities independent of IQ. And we talked about the horizontal segment of the intraparietal sulcus as a key locus for the approximate number system in humans, including number-specific neurons.
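The Weber's law point above can be made concrete in a few lines. This is a minimal sketch, assuming a toy Weber fraction of 0.15 (real human estimates vary) and a simple log-ratio index of discriminability:

```python
import math

WEBER_FRACTION = 0.15  # assumed, illustrative value; not a measured constant

def discriminability(a, b, w=WEBER_FRACTION):
    """Toy index of how easily two numerosities are told apart under
    Weber's law: it depends only on their ratio (their distance on a
    log axis), scaled by the observer's Weber fraction."""
    return abs(math.log(a / b)) / w

print(discriminability(8, 10))   # ratio 1.25
print(discriminability(16, 20))  # same ratio -> equally discriminable
print(discriminability(16, 18))  # same difference as 8 vs 10, but harder
```

So 8 versus 10 and 16 versus 20 come out equally easy (same 1.25 ratio) even though their differences are 2 and 4, while 16 versus 18 comes out harder than 8 versus 10 even though the difference is 2 in both cases. That ratio signature is also why the distance effect appears in the greater-or-less-than-65 comparisons from earlier in the lecture.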
And we also talked about, both in some of the papers I mentioned and the paper you guys read, that that approximate number system up here in the parietal lobe, so far, doesn't seem to be one of these extremely specialized systems like faces and motion and navigation, which may turn out to be less specialized later, pending more data. But at the moment, we can already see that these number representations overlap a lot with representations of space, shown perhaps most dramatically in the paper you guys read showing cross decoding between eye-movement direction and arithmetic operations. OK, hang on. I'm almost done summing up. Number neurons, we talked about that. Yes, so we'll give the last word to Stan Dehaene, who started off with this very extreme view and has evolved to a still interesting but slightly less extreme view. He says, the brain treats number like a specific category of knowledge requiring its own neurological apparatus in the parietal lobe. But when it comes to subtler distinctions, such as number versus length, space, or time, the specificity of hIPS vanishes. No part of hIPS appears to be involved in numerical computations alone. In fact, he goes further to say that the human brain, in general, is neither an isotropic white paper, equipotential, where all the regions are equivalent, nor a neat arrangement of tightly specialized and well-separated modules. All right? Anyway, OK, there was a question. Sorry. AUDIENCE: [INAUDIBLE] just then having this, I guess, easier time with approximate numbers, given more of an interest in that. NANCY KANWISHER: That's a really, really good question. And I am sure there are data on that. And I don't know what they are. But I will go look. I always say that. But, Dana, will you send me an email right now to go look up whether the prediction from childhood ANS to adult arithmetic abilities has to do with an interest or, you might say, just an emotional response.
Like if you suck at it, it feels bad. And you become avoidant, and you get all dysfunctional about it, right? We all have-- I mean, most of us have domains where we do that. And math phobia is a real thing. And who knows. It could start in there. So yeah, good question. I don't know. I will look that up. Other questions? OK, see you guys on Wednesday.
MIT_913_The_Human_Brain_Spring_2019 / 2_Neuroanatomy.txt

NANCY KANWISHER: So seeing where animals are going, so you can avoid them if they're coming after you or so you can catch them if you're going after them, right? One of the arguably uniquely human abilities is precision throwing, right? No other animal can do that. That's a very human thing. Although the ability to see motion is shared with lots of animals. What else did you notice? What else seemed funny or harder to discern with stop motion? Yeah? AUDIENCE: We care about small details like [INAUDIBLE] to understand what the person is saying. NANCY KANWISHER: Yeah. Yeah, so I was making notes to self. I haven't done that demo before. But in future, it would be really good to have the audio quality be terrible. Because if the audio quality is terrible, you would lean more on lip reading. And we might have noticed more. But it's really hard to do that probably even at relatively fast flicker rates because that motion information is important. Absolutely. What else? How about beyond just lip reading? What else did you notice about the faces, mine or Jim's? Could you-- yeah? AUDIENCE: They were static. So it was kind of hard to tell like emotion because a lot of the ways we express emotion is very nuanced. NANCY KANWISHER: Exactly. Exactly. Facial expressions are incredibly subtle. Like little microexpressions flicker across the face in a tenth of a second and go away, and you guys detect them. Like we're very, very sensitive to those things. Sometimes if you see somebody in a hallway and, for a moment, there's an expression that flickers across their face and then they give you a normal smile, but you can tell from that expression that actually they didn't want to see you, for whatever reason, right? We catch those things. We're really, really good at catching those little fleeting expressions.
And those probably have to do with not just sampling with fine temporal frequency but probably seeing the direction of motion of each little part of the face. OK? OK, so this is just common sense reasoning about what we might have motion for. OK? And so you guys got all the things that I had in mind. OK, so now the next question, just kind of thought question, speculation question, given these many different things that make motion important to us, biologically, ecologically, in our daily lives, maybe that's important enough that we might allocate special brain machinery to processing motion. What do you think? Important enough? Could you get by if you lived in a strobe world all the time? Could you survive just fine? Hard to say, right? Might be hard. I mean, we probably don't need to go hunting down predators. But you walk across Vassar Street. And there's some pretty dangerous predators coming down Vassar Street in the way of cars, right? You need to know where they're going and whether you can cross in front of them. So it's actually pretty hard to live life without being able to see motion. And I'll tell you about a woman who has that experience later in the lecture. OK, next question, just think about this. I'm not going to test you on it or anything. It's not the topic of this course. But it's a perspective you should take. Imagine that this were a CS course and I gave you a segment of video. And your task was to write some code that takes that video input and says whether objects are moving in that movie or says which objects are moving or how much they're moving or what direction they're moving. What kind of code would you have to write to take that video input to try to figure that out? OK, so just think about that. We're not going to be writing code in this class. But a lot of what we're going to be doing is thinking about, how do you take this kind of perceptual input and come out with that kind of perceptual inference? 
And what kinds of computations would have to go on in between whether those computations are going on in code that you guys write or in a piece of brain that's doing that computation? And thinking about how you might write the code gives you really important insights about what the brain might be doing. OK? All right, so that's the point of all of that. The Marr reading talks about all of this. And the key point we're trying to get here is that you can't understand perception without thinking about what each perceptual inference is necessary for ecologically in daily lives and about the computational challenges involved in making that inference. OK? So we'll get back to all that next week and beyond. But meanwhile, here's the agenda for today. So here's the agenda. We just did the demo. We're now going to skip and do some neuroanatomy, absolutely bare basics. Because on Wednesday, we have this amazing opportunity to have one of the most famous neuroscientists in the world do a dissection of a real human brain right here right in front of you. It's going to be awesome. And I don't want to waste that opportunity or embarrass ourselves by having people not know the bare basics. So we're going to do the bare basics. It's all stuff you should know from 900 and 901. And I'm going to whip through it fast, so we can get to more interesting stuff and get back to visual motion. OK? That's the agenda. All right, so some absolute bare basics of the brain, the human brain contains about 100 billion, 10 to the 11th neurons. And that's a very big number. That's such a big number it's approximately Jeff Bezos' worth. Well, it was until Mackenzie got into the picture. So we'll see. No, you don't need to remember this number. Just know it's a really big number. Basics of a neuron, here's a neuron. A neuron is a cell like any other cell in the body. It's got a cell body and a nucleus, just like any other cell in your body. 
But the thing that's distinctive about a neuron is it has a big long process called an axon. It's got a bunch of dendrites, the little processes, the little thingies near the cell body. And out at the tip of the axon, that's your classic neuron. Many neurons have a myelin sheath, a layer of rolled up fat around the axon made up of other cells. That makes the axon conduct neural signals faster. OK, you should know all that. I'm not trying to insult your intelligence. I'm just trying to make sure everybody's with the program here. OK, so you have thousands of synapses on each neuron. And that means you have-- to put it technically-- a shitload of synapses in your brain. OK? Another important point, the brain runs on a mere 20 watts. And if you're not impressed with that, reflect on the fact that IBM's Watson runs on 20,000 watts. So one of the cool things about the human brain is not just all the awesome stuff that we can do that still no computer can do, that I talked about last time, but also how incredibly energetically efficiently we do it with our human brains. So most of this course is going to talk about the cortex. That's all the stuff on the outside of the brain. That's that sheet wrapping around the outside of the brain, that folded outer surface. It's approximately the size and area of a large pizza. But there are lots of other important bits too. And I'm going to just do whirlwind tour of those other bits now. OK, so you can think of the brain as composed of four major kinds of components. Deep down in the bottom of the brain, you have the brain stem, where the spinal cord comes in here. And the rest of the brain is up there. And the brain stem is right down here. And the cerebellum, this little cauliflower like thing that sits out right back there. And in the middle of the brain, you have the limbic system with a whole bunch of subcortical regions. And we'll talk about a few of those in a moment. 
And you have white matter, all the cables and connections that go from one part of the brain to another part. This is an actual dissected human brain. And all those kind of weird fibrous things are bundles of axons connecting remote parts of the brain to each other. You can see them in gross dissection. OK? And of course, you have the cortex. OK, so these are just four major things to think about. And before we spend the rest of the course on that, we're going to do just a teeny little bit on the other major bits. OK, and I'm going fast. So just stop me if any of this isn't clear. All right, so the reason we're doing this in part is that, with a dissection of a brain, some of the main things you see are those subcortical structures, right? And so even though the course is going to focus on the cortex, each little different bit of the cortex to the naked eye looks like any other bit of the cortex. It's the subcortical stuff that looks different, right? So that's why we're doing this. OK, bare basics on the brain stem, you can think of it as a bunch of relays in here, different centers that connect information coming up from the spinal cord and send it through into the cerebellum. So it's, in many ways, the most primitive part of the brain. That means it's shared with animals that branched off from us very far back in mammalian evolution. But it's also essential to life. OK? So you can get by with most of your cortex gone. Like you may not have a lot of fun. You may not really know what's going on. But you will stay alive. But you can't get by without your brain stem, right? It controls all kinds of basic crucial bodily functions, like breathing, consciousness, temperature regulation, et cetera. So it's not interesting cognitively. But it's crucial for life. Cerebellum, this beautiful thing here, it's basically involved in motor coordination. But from there on out, there's a huge debate about its possible role in cognition. 
And so there's lots of brain-imaging studies where people find that the cerebellum is engaged in all kinds of things from aspects of perception up through aspects of language understanding. You can find activations in brain-imaging studies. Nonetheless, the best guess is that you actually don't need a cerebellum for any of this. So if anybody's interested, I'm going to actually try to remember to put it up as an optional reading on the site. There's a recent article in The Atlantic or The New Yorker about a kid who had no cerebellum. And he learned to walk late and slow. Nobody knew what his problem was. But he learned to do pretty much everything. Like he's pretty much fine. His motor coordination isn't great, but he's fine. Yeah? AUDIENCE: How would you define the consciousness in this context? NANCY KANWISHER: Oh, that's a good question. And it's a big question. And it's a question that nobody knows how to answer, not just me. So Christof Koch, who does more work on the neural basis of consciousness than just about anybody, has been going around saying, for about 15 years, we must not get stuck on a premature definition of consciousness because we don't know what that thing is that we're trying to understand. So I'll hide behind Christof's parry of that question and say we'll talk about it later in the course. But there are many different ways of defining it from the difference between being awake versus asleep, which is some of the functions that go on here, the difference between being knocked out and completely unconscious under general anesthesia, which is different from being asleep. Those kind of states of consciousness are regulated, in part, in here, yeah. OK, so you can get by without a cerebellum. But it's not recommended. 
Moving right along, all those subcortical bits, we're just going to talk about three of the most important ones, the thalamus, this big guy right smack in the middle of the brain, very large structure, the hippocampus, and the amygdala. OK, let's talk about the thalamus. Think about the thalamus as a Grand Central Station of the brain, OK, with all of these connections going to all those parts of cortex coming in and out of the thalamus like that. OK? So one of the key things about the thalamus is that most of the incoming sensory information goes by way of the thalamus en route to the cortex. OK? So if you start with your ear, there's sensory endings in your ear that we'll talk about later in the term. And they send neurons into this, the thalamus here, this yellow thing, through a bunch of different stages. They make a stop in the thalamus. And then they come up here to this green patch, which is auditory cortex. OK? Similarly, somatosensory endings, touch sensors in your skin that enable you to feel when you're being touched come in through the skin. And they make a stop in the thalamus. And then they go up to somatosensory cortex up there. OK? Similarly, visual signals that come in from your eyes make a stop in the thalamus and then go up to visual cortex. OK, what's the name of the structure in the thalamus that those axons make a synapse in? Coming up from the eyes, you make a synapse here. And you go up to visual cortex. AUDIENCE: LGN. NANCY KANWISHER: LGN, perfect. What does it stand for? AUDIENCE: Lateral geniculate nucleus. NANCY KANWISHER: Perfect. OK, you should know that. This is review from 900, 901. OK, yes? Sorry. OK, which sensory modality does not go through the thalamus en route to cortex between the sensory nerve endings and the cortex? Sorry? AUDIENCE: Olfactory. NANCY KANWISHER: Yes. Yes. You guys are on the ball. Yes, the olfactory system is the one sensory modality that doesn't make a stop in the thalamus. You can sort of see that here.
From the nose, it goes straight up into olfactory cortex right there. All right, so the standard view of the thalamus is this kind of relay station where all the external sensory information comes in there, makes a stop, and then goes up to cortex. OK? That's my thalamus act. Boom. Like that, right? OK. But, increasingly, there's evidence that the thalamus is much more than a relay station. And why would you bother with a relay anyway? Kind of doesn't mean anything. Kind of means like we don't know what's going on here because you wouldn't just make a synapse for no reason, right? OK, and so the first thing to note is that there are lots of connections that go back down the other way. From primary visual cortex, right here in me, right here in this guy in red, there are 10 times as many connections that go backwards down to the thalamus as go forwards. That's mind blowing, right? Information comes from the eyes up into the brain. What the hell are those things doing going backwards, OK? Well, they're doing all kinds of interesting things. So that's the first indication that the thalamus isn't just relaying stuff in a stupid, passive way. And the second whole line of work, which many people are working on, but I think some of the most awesome work on this topic is done by our own Mike Halassa in this department. And he does these incredible studies that you can do in mice with these spectacular methods that we can't use in humans, where he can really take apart the circuit in magnificent detail. And he's showing that the thalamus is involved in all kinds of high-level cognitive computations in mice. It's really stunning work. When the mice have to switch from doing one task to another, the thalamus plays a key role in gating the flow of information from one cortical region to another, OK? All right, moving along, the hippocampus, which you guys all learned about.
The number one gripe in this department is that we learn about H.M. in every course. So that's going to happen here. But it's going to last about 20 seconds. So here goes. That's a normal slice of the brain like this. Here's the hippocampus on either side. It's like a whole curled up deal right there and right there. And here is H.M.'s brain, the famous H.M., who had surgery to remove his hippocampus on both sides, and completely lost his episodic memory for anything that happened after his surgery. OK? You all remember that, right? If anybody hasn't heard of H.M., send me an email. And I'll give you some background reading. OK, so very loosely, the hippocampus is involved both in this kind of long-term episodic memory that H.M. lost. And it also plays a key role in navigation, which we'll talk about in great detail in a few weeks. And I just want to say that some cases are even more extreme than H.M. So there's a case of Lonni Sue Johnson. And I am trying to get you guys a video. And I didn't get it in time. But I'll show it to you later in the term if you're interested. Lonni Sue Johnson had a viral infection that went up into her brain. She was an extremely accomplished person. She did illustrations on the cover of The New Yorker. She was a pilot. She had her own farm in which she raised lots of stuff, a very smart, interesting, multitalented woman, who had this terrible tragedy of getting viral encephalitis at I don't know what age, but middle age. And she now does not remember a single event in her life. She's smart. She's funny. Her personality is totally intact. She can answer questions. She can paint. She can do all kinds of things. But she does not remember a single event in her life. That's pretty astonishing. Reflect on what it means to have the sense of self if you don't remember anything in your life. Yeah? AUDIENCE: Can she remember her name? NANCY KANWISHER: That's a good question. I'm not sure. She might know her-- yes, she does know her name.
Actually, it is evident in this video. But the video, well, so she doesn't remember. At one point in this video, she's asked, were you ever married? And she's lovely and sweet and gentle and kind of low key. And she's like, you know, just don't remember. I might have been. I might have been. She was married for 10 years. So that's the hippocampus. Important. You don't want to lose that one. Yeah? AUDIENCE: About H.M., if the hippocampus is used in long-term memory, why is it that it being removed caused him to not form memories? NANCY KANWISHER: Well, so long-term memory means-- it's a vague term. It means the formation and retrieval of memories that are going to last a long time. So in H.M.'s case, he can access a lot of the memories from before his injury. In Lonni Sue's case, she can't do even that. OK? All right, the amygdala, OK, amygdala is a Greek word that means almond. Because the amygdala is the size and shape of an almond. And so just for fun, we're passing around some almonds, my favorite kind. Have some almonds and pass them around. All right, OK, so the amygdala is involved in experiencing and recognizing emotions, especially fear. The simple statement that you should remember about what the amygdala does is just remember the four F's. You guys all know about the four F's, fighting, fleeing, feeding, and mating. OK, patient SM lost her amygdala on both sides. OK? She cannot experience fear. She doesn't recognize fear on facial expressions of other people. And she doesn't experience fear herself. OK? And so that's the striking piece of evidence on what the amygdala does. Her face recognition is normal, recognizing identities. Her IQ is normal. She's overly trusting of other people. OK? OK, so that's all you need to know about the amygdala for now. OK, let's talk about white matter, just brief review. Here's a kind of tunnel through a piece of cortex. OK, so my brain cortex is wrapping around like that. 
If we took a piece like this, just took a segment out like that, this is the outside of the brain up there. Cortex runs like this. And gray matter is the stuff on the outer surface that's full of cell bodies, OK? White matter is the axons, the processes that come out of those cell bodies and travel elsewhere in the brain. OK? Everybody clear on that? OK, so we got gray matter up here and white matter down there, mostly myelinated axons that have that layer of fat to make them conduct fast. And so you'll see bundles of white matter in the dissection. And so here's an actual photograph of the slice through a brain. So all that white stuff up there is white matter. OK, and so you might say, well, that's just a big bunch of wires. Who cares about that? That's a good question. But actually, the wires are pretty damn interesting and pretty fundamental. And so I'll just give you a few reasons. And you don't need to memorize every one of these. I'm trying to give you a gist of why we might care about this. And then there will be a whole other lecture on networks and connectivity later in the course. Well, first of all, white matter is 45% of the human brain, OK? So it takes up a lot of space, all those wires connecting one bit to another bit. And I would say we cannot possibly understand the cortex and how it works or any little piece of it without knowing the connectivity of each piece to each other bit of the cortex, right? Imagine trying to understand a computer or a circuit without being able to see the connections between the bits. Like it would drive you crazy. That's the situation we're in now in human cognitive neuroscience. It, frankly, drives me insane. But that's where we are. Next thing, the long-range connectivity: each little bit of cortex, some little bit right there in my brain, is connected to some bunch of other remote regions in my brain. And that particular set of connections is distinctive for that patch of cortex.
So you can think of it as a connectivity fingerprint of a patch of cortex. OK, so one of the ways that the different bits differ from each other is by way of their connectivity fingerprints. And I'm going to skip the rest of these because we're going to get back to them later. And I'm going to run out of time. And I'm going to assign the TAs to sound the gong at 12:15. OK? Good. All right, now we're up to the cortex. This is really, laughably, shallow. But whatever, that's what we're doing here. So here's this cortex. And as I mentioned, it's a whole big sheet. And the different bits look really similar if you just look at them or slice them up. So how are we going to figure out how this thing is organized? Well, OK, now we're up here talking about cortex. All right, let's start with the easy parts, which you've already seen. You've already seen this up here. These colored bits, visual cortex, auditory cortex, somatosensory cortex, gustatory taste cortex, those bits are like the easy parts of cortex. Those are called primary sensory regions. There's also motor cortex right in front of sensory cortex. So those are the primary regions. They're primary in the sense of this is the first place that sensory information lands up at the cortex coming up from the senses, right? OK, and all of that input is wired through what structure? AUDIENCE: Thalamus. NANCY KANWISHER: Yes. Thank you. So how are these regions organized? Well, they have maps. Every one of these regions has a map. And each of them has a map of a different thing. So let's start with visual cortex, and we're going to talk about the map that lives in visual cortex. But the prior condition for understanding that map is to understand the concept of receptive field, which you should know. So I'm going to whip through it quickly. OK, so here is how you map the receptive field as a property of an individual cell in a brain. OK? 
So the classic way in animal neuroscience is you place an electrode in the brain next to a neuron in monkey visual cortex. OK? So here's this monkey. He's got an electrode right in his brain right next to a neuron in visual cortex. And every time that neuron fires, you get a spike. You hear a spike. OK, now you train the monkey to stare at a fixation spot without moving its eyes. OK, I can do this with humans without training you. I can just tell you, look at the tip of my nose. OK, so keep your eyes on the tip of my nose. I can see if you're looking elsewhere. So look at the tip of my nose. OK? OK, so you train a monkey to do that. That takes a few months. And then they can do that. And then while recording from neurons in his brain, you put stimuli over here, put a flash over there or a flash over here or a flash over here or a flash over here. OK, you can stop looking at my nose. It's not all that fabulous a nose, I realize. OK, so a receptive field is the place in the visual world that makes a given neuron fire. OK? So if there's a neuron in your brain that responds to a flash here but not a flash here or here or here or here, the receptive field of that neuron is right there. Everybody got that idea? OK, so in visual cortex, neurons have restricted receptive fields. They don't respond to anything anywhere in the visual field. They respond to a particular place in space. OK, if that's confusing at all, ask a question. Because it will come up again and again. All right, so that's what the rest of this slide says, what I just said. Blah, blah, blah. It doesn't matter. That's a receptive field. Different visual neurons have different receptive fields for different parts of space. Now here comes the important idea. In visual cortex, two neurons that are next to each other in visual cortex have nearby receptive fields. OK? So that's the concept of retinotopy or the map in visual cortex. 
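The mapping procedure just described can be sketched as a toy simulation (all numbers are made up; the "neuron" and its Gaussian sensitivity profile are hypothetical stand-ins, not the actual recordings from the lecture): flash a stimulus at each location on a grid, count the responses, and the location that drives firing hardest is the neuron's receptive field.

```python
import math

def simulate_neuron(flash_x, flash_y, rf_center=(3, 2)):
    """Toy visual neuron: firing rate is highest when the flash lands on
    its (hypothetical) receptive-field center and falls off with distance."""
    dx = flash_x - rf_center[0]
    dy = flash_y - rf_center[1]
    # Gaussian sensitivity profile plus a small spontaneous rate
    return 1.0 + 50.0 * math.exp(-(dx * dx + dy * dy) / 2.0)

# "Record" while flashing at every position on a 6x6 grid of locations
rates = {(x, y): simulate_neuron(x, y) for x in range(6) for y in range(6)}

# The receptive field is wherever the flashes drive the neuron hardest
rf_location = max(rates, key=rates.get)
print(rf_location)  # (3, 2)
```

The key property the sketch captures is the restricted receptive field: the simulated cell barely responds to flashes far from its preferred location, exactly the behavior the electrode-and-fixation experiment reveals.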
So you basically have a map of the visual world in your visual cortex because there's this systematic layout just like you have in your retina. In your retina, visual information comes in. And because of optics, different parts of your retina respond to different parts of the image. But that information is propagated back through the LGN up to primary visual cortex, where you still have a map of visual space. OK? So that map is called retinotopic in visual cortex because it's organized like the retina. And so here's a particularly kind of gruesome but very literal depiction of this property of retinotopy in a monkey brain. This is an experiment done very long ago by Roger Tootell. And what he did was he used a method called deoxyglucose. Deoxyglucose is a molecule that's a whole lot like glucose. But it's got one little change in the molecule, which means it gets stuck in the metabolic chain. And so it gets taken up by cells that want to take up glucose. And then it gets stuck in there and can't be broken down. So it builds up in cells that are metabolically active. OK? So you can put a little radioactive tracer on deoxyglucose, inject it into a person or an animal. And what happens is it builds up with this radioactive tag on all the cells that were active. Make sense? OK, so Tootell did an experiment where he had the monkey fixate on a spot. And he presented this stimulus here. So the monkey's fixating right there. And this stimulus is flashing on and off. He injects the radioactive deoxyglucose into the monkey while the monkey's looking at this. And then, I'm sorry to say, he killed the monkey, rolled out visual cortex into a sheet. And there it is. And you can see the bullseye pattern that the monkey was looking at across the surface of visual cortex. Does everybody get that? OK, so that shows you very literally what a retinotopic map is in the brain. It's just like the map of the visual world in the retina.
But there it is up in the back of the brain. And humans have this too. OK? And so this can be shown in humans with functional MRI. We'll talk more later about the methods of functional MRI. But here's a very high-resolution functional MRI experiment done by some people over at MGH Charlestown. By the way, when I have names on slides, it's just because, in science, we don't get paid that much. And so our credit for our cool data is kind of all we have. And so I can't stand to talk about other people's cool experiments without giving them credit. I do not expect you to learn the names. It's just my little personal tic that I need to have their name there to give them credit, even though you don't know who they are. OK. OK, so what this guy John Polimeni did was show human subjects this stimulus here. They were fixating right there. And the stimulus is flickering with the dots kind of dancing around. And then he looked at visual cortex on the surface of the back of the brain, and he sees an M there. It's the same stimulus. It's just flipped upside down, which is not deep or interesting. The cortex has to be oriented one way or another. The brain doesn't care whether you turn it around, right? And your map of visual space is upside down in the back of the head. And you see that M. Does everybody get how that also shows retinotopic properties in the brain in human visual cortex? OK. All right, so the key idea of retinotopy is that adjacent parts of the visual field are mapped to adjacent parts of the cortex. All right, OK, a little bit of terminology just because people are fast and loose with these things. I've already referred to V1 and primary visual cortex. It's also sometimes called striate cortex. It's all the same thing. It's the part of the visual cortex where the information first comes up from the LGN right back here. So in me, it's right there. Most of it is in the space between the two hemispheres. But a little bit sticks out on the side.
So in this person, that yellowy orange stuff, that's primary visual cortex, which is the same as V1 and striate cortex. OK? That's just terminology. All right, just as we have maps for visual space, we have maps for touch space. And so you've probably seen this diagram here of the map of touch space going across somatosensory cortex like this. So this is a picture of a slice like that, showing you which parts of the body are mapped onto which parts of cortex. And you can see that particularly important parts of the body get bigger bits of cortex. Yeah? OK, just as we have visual maps and touch maps, we have auditory maps in auditory cortex, which is right on the top of the temporal lobe right in here. And what's mapped out in auditory cortex is auditory frequency, high versus low frequencies of sound. And so you see that here's a piece of auditory cortex in one subject, showing you regions that respond to high frequencies, low frequencies, high frequencies. Here it is in another subject, high, low, high, another subject, high, low, high. OK, so the point of all of this is that primary sensory cortex has maps. Everybody clear on this? The different sensory modalities map different dimensions. OK, so what about the rest of cortex? Like you can see, most of the cortex is not primary sensory cortex. Is the rest of cortex just mush? Or are there separate bits like primary sensory areas? And if so, do those other bits have maps? And if so, what are those maps of? OK? We just took you from 100 years ago to the cutting edge: the field is asking this question in lots of different ways right now. OK? OK, let's back up and ask, what counts as a cortical area anyway? I just posited that these primary sensory regions count as distinct things. They're like the things, right? They're separate things in the brain. OK? If for no other reason than that they get direct input from the thalamus, right? OK, but let's back up and ask, what exactly is a cortical area?
And we're going to consider this question by considering the three key criteria for what counts as a cortical area. OK, the first one is that that region of cortex is distinct from its neighbors in function. Neurons there fire in response to something different from the neurons in the neighboring region. OK, that's very vague right now. But we'll illustrate that. The next one is-- I mentioned this before-- each distinct region of cortex has a different set of connections to other parts of the brain. It has a distinct connectivity fingerprint. OK? And the third thing is, for at least some regions of the cortex, they're physically different. If you slice them up and stain them and look at them really carefully, they might look a little different than other bits of the cortex. OK? So those are three of the key criteria that have been used to say, this bit of cortex, it's a thing, right? It's distinct, right? OK, so let's look at the classic example beyond those primary regions. Those are the most classic regions. Those are the primary regions we've already talked about. Those are the ones nobody would fight you on. This one is next in line. Nobody would fight you if you say, visual area MT, that's an area. Well, they might. But most people wouldn't. OK, and then from there on out, it's all fighting all the time. OK, so let's talk about visual area MT. It's a little patch of the cortex in a monkey brain. This is a side view of a monkey brain. And in this human brain, it's that little patch right there. OK, so this region meets all the criteria to be a distinct visual area. So how do we know this? Well, we know this from lots and lots of different methods. So I'm going to whip through a few of those to give you a gist of how we can find evidence that that region is distinct in function, connectivity, and the physical stuff, sometimes called cytoarchitecture. OK? All right, function, how would we know that region has a different function?
Well, one way, the classic way is to record from individual neurons in monkey brains. So if you stick an electrode into monkey visual cortex while the monkey is looking at the stimulus that I'll show you in a second, you'll hear the responses of an individual neuron. Each click is a spike from that neuron in response to the stimulus. So let's play this thing, except it's not making any sound. Chris, can you help me? Oh, right. Duh. That part, OK, see when the bar of light moves this way, it makes a lot of firing and not when it moves the other way? Let's watch it for a second. Watch the bar move again. See? It responds less when it's moving in a different direction. Everybody got that? What is this area right there called? Yeah, this area right here in the middle. AUDIENCE: [INAUDIBLE] NANCY KANWISHER: Exactly. That's the receptive field. That's the part of visual space that makes this neuron fire. OK, this neuron also has a property called direction selectivity. It's sensitive to motion, as you see. But it's also specific to specific directions of motion. Everybody see that? OK, so that's a direction-selective neuron in monkey area MT. And here's a way of showing, with data, what you guys just saw. This is a map of different directions in polar coordinates. And this shows you how much-- this is a single cell being described here. This is the direction selectivity of that cell, showing you that when the stimulus moves in this direction, you get a lot of firing. When it moves in this direction, you get less firing. And can everybody see how this plot shows you the direction selectivity of that cell? Make sense? Right. OK, so that shows you what you just saw in the movie. So this is one way to establish the function of visual area MT: stick electrodes in there and record directly from them when a monkey looks at different kinds of stimuli. And you see direction selectivity when you do that.
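That polar plot of direction selectivity can be mimicked with a standard cosine-tuning model (a textbook idealization, not the actual MT data from the slide; the preferred direction and rates here are invented):

```python
import math

def firing_rate(stim_dir_deg, pref_dir_deg=90.0, peak=60.0, baseline=5.0):
    """Idealized direction-selective cell: maximal firing for motion in the
    preferred direction, falling off as a rectified cosine, never dropping
    below the spontaneous baseline rate."""
    diff = math.radians(stim_dir_deg - pref_dir_deg)
    return baseline + peak * max(0.0, math.cos(diff))

# Probe the cell with motion in 8 directions, like the polar plot
rates = {d: firing_rate(d) for d in range(0, 360, 45)}
preferred = max(rates, key=rates.get)
print(preferred)  # 90 -- the direction that drives this cell hardest
```

Plotting those eight rates in polar coordinates would reproduce the lobe-shaped tuning curve shown for the single MT cell: a big response at the preferred direction, and only baseline firing for motion in the opposite direction.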
OK, further, if you actually do this systematically, moving across next door bits of monkey area MT, what you find is that, as we said before, nearby bits of cortex respond to similar things, in this case, to similar directions of motion. So here's a little diagram. As you move across the cortex, you see a systematic change in the direction selectivity of neurons as you move across the cortex. So in MT, we have a map of direction preference, just as we had a map of spatial location in primary visual cortex. Make sense? OK, now because those neurons are clustered like that-- I forget what my next point was. No. Never mind. We'll get that in a second. OK, what about humans? OK, so here's a monkey brain. Here's a neuron in a monkey brain. What about humans? Can we record from single neurons in humans? What do you think? Do we ever get to do that? Yeah? AUDIENCE: Like neurosurgeons. NANCY KANWISHER: Yeah. Yeah. Neurosurgeons, very occasionally, enable us to record from individual neurons in human brains. It's the most awesome data ever. Of course, we only do it when the neurosurgeons have decided, for clinical reasons, to put electrodes in human brains. They need to do this to map out epilepsy before surgery. And sometimes those patients are super nice and say, yes, I'll look at your stimuli or listen to your stimuli while you record from my neurons. And then we get the most awesome data ever. But it's very, very rare. I don't know of any data where people have reported individual neurons in area MT in humans. Yeah? AUDIENCE: So how powerful should an fMRI be to be able to record such information? NANCY KANWISHER: Oh, we're getting there. OK, so given that we, very rarely, get to record from individual neurons in humans and we want to know more generally whether there is an MT in humans, what do we do? We pop subjects in an MRI scanner. And we show them moving dots or stationary dots. And we scan them with functional MRI.
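The logic of that scan can be sketched as a toy analysis (the signal numbers are invented, and real fMRI analysis involves far more than this): alternate blocks of moving and stationary dots, and ask whether the mean signal in a candidate region is higher during the moving blocks.

```python
import statistics

# Toy block design: 8 alternating blocks of moving vs. stationary dots,
# with one made-up mean MR signal value per block from the candidate region
block_order = ["moving", "stationary"] * 4
block_signal = [102.1, 100.0, 102.3, 99.8, 101.9, 100.1, 102.2, 100.2]

signal = {"moving": [], "stationary": []}
for condition, value in zip(block_order, block_signal):
    signal[condition].append(value)

# A region that responds more to moving than to stationary dots -- the fMRI
# signature used to find an MT-like area -- shows a positive contrast
contrast = statistics.mean(signal["moving"]) - statistics.mean(signal["stationary"])
print(contrast > 0)  # True
```

The timecourse described in the lecture is exactly this pattern: signal rises during moving-dot blocks and drops when the stimulus switches to stationary dots, so the moving-minus-stationary contrast comes out positive for that region.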
We'll go through the details of how this works more in future lectures. But what you see, basically, is this is a slice through the brain like this. And you see this region right here responds more to the moving dots. This is the response. This is time here. This is when the moving dots are on-- high response. And then when it switches to stationary dots, the response drops. OK, so with functional MRI, you can also find visual area MT by the higher response to moving than stationary dots. Does that make sense, more or less? I mean, I'm not giving you any of the details. But for now, they don't really matter. OK, so that's cool. But does that tell us that neurons in human MT are specific for the direction of motion? Yes? AUDIENCE: Are the moving dots moving to a specific location? NANCY KANWISHER: They're moving in all the directions you see here. No, it doesn't. It tells us it's sensitive to the presence of motion but not the direction of motion. OK? So if we want to really know, is human MT like monkey MT or is this really human MT, we want to know, are the neurons in there not just responsive to motion but are neurons specific for particular directions of motion, OK? So how would we do that? OK, well, there's lots of ways of doing that. But actually, one of the charming things is you can do that without an MRI scanner. That is, it won't tell you whether it's MT you're looking at. But we can ask the question of whether your brains have neurons that are tuned for particular directions. So for this demo, I want you to fixate right in the center. And do not move your eyes from that dot. And I'm going to keep talking for a while, while you keep fixating right on that dot. And so what I'm going to show you is something called an aftereffect. This is also known as the psychophysicist's electrode. Psychophysicists are people who just measure behavior. And from behavior, they can infer how individual neurons work. And that is about as awesome as it gets.
That's much more impressive than just recording from the damn neuron. Inferring from very indirect data how the neuron works from behavior, now, that is pretty-- oops. OK, sorry. Look directly at my face. You see anything? I didn't see it stop. OK, we're going to-- oh, here we go. Oh, right. OK, just fixate on the center again. Sorry. I forgot this guy was going to stop. So keep looking at the center. And then when it stops in a little bit, then keep your eyes right on that dot. And you can see what happens. AUDIENCE: [INAUDIBLE] NANCY KANWISHER: Oh, that's right. Good point. Yes, right now, it's alternating. Nothing's going to happen. But that's OK. We're going to have the whole experience. Keep fixating on the dot. It's good the TAs are on the ball. OK, fixate on the dot. Anybody see anything? Not really. That's OK. You're not supposed to. That's the control condition. It was alternating directions. OK? So I think it's going to start moving again. I'm not sure. Let's go back. Let's just start it again. OK, I'm sorry I blew it the first time. But let's just get this right. OK, fixate on the center and just keep your eyes right on that center. So this one, it's not alternating. And it's going to do this for around 30 seconds. And so the whole point of this is a way with behavior to ask the question of whether you have neurons in your brain tuned to specific directions of motion. And something as low-tech and simple as an aftereffect can tell you that. Keep looking. Did you guys see anything? What did you see? What happened? AUDIENCE: It wasn't moving exactly [INAUDIBLE] NANCY KANWISHER: Uh huh. Well, it actually should-- well, now it's doing something else. But it should shrink at the end. Did you guys see it shrink? OK, so that's an after effect. And the simple version of the story is that you are tiring out your neurons that are sensitive to outward motion while you stare at all that outward motion. 
And after you kind of burn them out and exhaust them, then when you look at something stationary, it looks like it's going inward. OK? And the general idea is you have pools of neuron-- the easiest way to account for that is you have pools of neurons tuned for different directions. And that's why, if you tire out one batch, you have a net signal in the other direction. Does that make sense? This is all very relevant to your assignment, which is due tomorrow night at 6:00. This phenomenon was used in the scanner for that experiment. You can think about how you would use this phenomenon to ask whether there's direction selectivity, not just responses to motion, in human MT. Yeah? AUDIENCE: I'm just a little bit confused. So even when an image is completely still, like even if you're not detecting motion, those neurons are still firing? NANCY KANWISHER: That's a good question. But most likely, the simple cases-- this may have not worked beautifully, in part, because I screwed it up and didn't notice when it stopped. But if it works well, you should get a pretty powerful sense that after you see it expanding, then when it's still, it should seem to be contracting. So when that happens-- the reading assigned for today, due tomorrow night, tells you what happens in your brain during that time when you are looking at stationary stimuli but experiencing motion. So there's no motion in the stimulus. But there's motion in your percept. OK? So that's the question. All right? So read the paper and find out. Yeah? All right, so all of that tells us just that there are neurons someplace in your brain that are sensitive to the direction of motion. It doesn't tell us that they're in MT in particular. But the assigned reading will talk about that. OK? Right, a further bit of evidence is, remember I said how, in monkeys, next door bits in MT have similar direction selectivity.
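The pools-of-neurons account above can be written down as a minimal opponent model (the gains and baseline rate are hypothetical numbers, chosen only to show why tiring out one pool flips the net signal):

```python
def perceived_motion(stimulus_motion, outward_gain=1.0, inward_gain=1.0):
    """Two opposed pools of motion-tuned neurons; the percept is the
    difference of their responses. Each pool has a spontaneous baseline
    rate plus a response to motion in its own direction."""
    baseline = 10.0
    outward = outward_gain * (baseline + max(0.0, stimulus_motion))
    inward = inward_gain * (baseline + max(0.0, -stimulus_motion))
    return outward - inward  # > 0 looks outward, < 0 looks inward

# Before adapting, a stationary stimulus is balanced: no net motion signal
print(perceived_motion(0.0))  # 0.0

# Staring at outward motion "tires out" the outward pool (its gain drops),
# so the same stationary stimulus now gives a net inward signal: the aftereffect
print(perceived_motion(0.0, outward_gain=0.6))  # -4.0
```

That negative value is the model's version of the demo: nothing in the stimulus is moving, but because the two pools' spontaneous rates are no longer balanced, the readout reports inward motion.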
That means you can also inject an electrical signal in a little patch of MT and give the monkey a net percept of a direction of motion. OK? If all the neurons were scrambled around spatially, so that there was no clustering of neurons sensitive to, say, this direction of motion, then stimulation wouldn't do anything. But if you train a monkey to tell you what direction of motion he's seeing and you show him just random dots that aren't moving in any direction and you stimulate one little patch, he'll report the direction of motion of the neurons in that little patch. And that is much more powerful evidence that that region is not only responsive to motion but causally involved in your perception of motion. OK? I'm a little obsessed with this distinction between recording responses and establishing causality. So we'll go over this in more detail later. But I want you to start getting used to that idea. Another way to test the causal role of area MT in motion is with patients with brain damage in area MT. So there's one famous patient who had brain damage right there, which is right where MT usually is. And she could not see motion. And she reports all kinds of things like difficulty crossing the street, difficulty catching balls, difficulty pouring water into a cup, OK, just as you guys saw earlier. That's called akinetopsia, right? Kinetics, motion. A, not motion, right? Opsia, eyes. OK. All right, so I started with these criteria for what makes something a distinct area. And one piece of evidence is function. And I just gave you a whole bunch of different kinds of evidence for distinct function in visual area MT, that it's specifically involved in motion processing. And the two other criteria, which are getting short shrift, but I'll just toss them off. And we'll return to them. One is the distinct connectivity of that region. OK, so you may have seen this horrific wiring diagram of visual cortex in monkeys.
I think it comes up in like half the talks in classes in my field. This is the one down here. And so there's lots and lots of different visual areas. And there's a whole fancy wiring diagram. And smack in the middle of this diagram, that's visual area MT. And if you blow this up and stare at it, you'll see that MT has a particular set of connections to other visual regions in cortex. And its particular set of connections are different from the connections of any of those other regions. It's part of its connectivity fingerprint or signature. And that's another piece of evidence that it's a thing. OK? It's not just another like amorphous bit of cortex. It's a particular thing in the brain. And finally, you might wonder, is that bit of cortex physically different? Are the cells in there different? Are the layers of cortex different in any way? And you may remember, from probably 9.00, about Brodmann areas. Like this dude Korbinian Brodmann sliced up lots of dead brains, looked at them under a microscope, and argued that there were 52 different parts just from what it looked like if you slice them up under a microscope. OK? So we called those Brodmann areas. And area 17, the primary visual cortex, comes from Brodmann's terminology. And so he argued that there-- he thought these were distinct organs in the brain. And he even claimed that "the specific histological differentiation of the cortical areas proves irrefutably their specific functional differentiation." Well, it doesn't. But never mind. Kind of sounded good. Anyway, that was his idea. And these kinds of distinct, kind of cellular, physical, anatomical differences are very salient for primary cortical areas for vision and audition and touch and motor cortex. But they're much muckier for lots of other areas. One important exception, which is why we chose this, is area MT. And so I'll end in one minute. 
But just to tell you where this is going, this is a flattened piece of monkey cortex rolled out like with a baking roller. No. I don't know. Something like that. So here's monkey cortex. And there's V1 and V2. And it's a big mess. But that big dark blob, this bit of cortex is stained with something called cytochrome oxidase. And that indicates metabolic activity. MT neurons are very highly metabolically active. And so here's a map of visual cortex. And that exactly is area MT. So area MT actually is histologically or cytoarchitectonically different from its neighbors and fits all of the criteria for a cortical area. OK? I went one minute over. I realize I threw out a lot of terminology. I don't want you to memorize too much. So I made a list of the kinds of things that you should understand from this lecture, the things that I think are important. |
MIT_913_The_Human_Brain_Spring_2019 | 18_Language_I.txt | So this is the lineup for today. We're going to be talking about language today and on Wednesday. But I want to start with something that I gave very short shrift at the end of lecture last time, and I'm going to give it short shrift again, but in a slightly different way. You'll need this for the reading, which hopefully you've already started. Representational similarity analysis is subtle and rich and interesting, and it's taken me years of revisiting it to get its full force. So just keep going at it, and hopefully every time you'll get it a little better. So let me try another brief version of this. Representational similarity analysis is sort of like a generalized case of multiple voxel pattern analysis that applies to other kinds of methods, and it characterizes a bigger conceptual space. So to remind you, multiple voxel pattern analysis with functional MRI is this business where you split your data in half. So you have one set of scans where people are looking at, say, dogs, another set where they're looking at cats, and a whole other separate replication where they're looking at dogs and cats. You look at the pattern of response across voxels in each of those four conditions-- dog one, dog two, cat one, cat two-- and you ask if the pattern is more similar for the two different splits of the data in the same condition-- dog one, dog two and cat one, cat two, the diagonal here-- than in the two cases where they're different, dogs to cats. Everybody remember that? If you're having trouble with this, come see me or the TAs. OK. So now that's MVPA. And you can use that to ask, of a given region of interest in the brain, or the whole brain, if the pattern of response in that region can distinguish between class A and class B. That's what it's good for. So that's worth knowing. But it's kind of impoverished. It's binary. I mean, cats versus dogs-- OK, it's a dopey example I chose, but whatever you choose is just going to be two things. It only takes you so far in characterizing what's represented in that region. You can make it richer if you force it to generalize. So if these two are a smaller size and a different viewpoint from those, and it still works, then we've shown that there's generality-- train on one kind of condition, test on a slightly different version. That tests the invariance. That's richer and more interesting. But even so, it's limited. So representational similarity analysis is a bigger, richer way of characterizing representations by looking at the pattern of response across multiple conditions, not just two and their variations. So instead of something like this, we'd have something like this, with a whole bunch of different stimuli or conditions that we scan people on. And then we look at all the pairwise combinations. How similar is dog to cat? How similar is it to pig, or horse, or, you know, table or chair, or whatever? Right? So then we have all of these pairwise similarities, which gives us a bigger, richer idea of what's going on there. OK? And so now we don't have to choose a binary classification in there. We can look at that entire space. We can think of this whole space as our proxy for what is represented in that region of the brain. OK? So now that's cool. So everybody sort of get the gist of how this set of pairwise similarities in a region of the brain is a richer idea of what's going on in that region and what it cares about? Right? Everybody kind of got that? Sort of? OK. Now chunk that matrix as one thing. OK? That's a representation of what's represented in this part of the brain. But now we can take that unit, and we can do the same thing on a totally different kind of data. OK? So here's what we just did-- here's some region of the brain, voxels. We can do the same thing in behavior. Now we can say, OK, you rate for me: how similar is a dog to a cat, on a scale from one to ten? I don't know,
six or something. OK, how similar is a cat to a pig? Four, I don't know. Right? You can imagine you get some similarity space. You could just get people to rate them, and you could make a whole new matrix here. OK? Now you're characterizing your conceptual space over those same items behaviorally, by asking people how similar each thing seems. Here we're comparing similarity of patterns of responses across voxels; here we're doing it by asking how similar things seem to people behaviorally. Everybody get how that's a similar kind of enterprise? Or we could record from neurons in monkey brains and show them the same pictures, and just look at the response across, say, 100 neurons in the monkey brain to a dog and a cat and a pig and so forth. And then we could ask, how similar is the response across neurons in the monkey to each pair of stimuli, just as we did across each pair of stimuli across voxels? Everybody got that? So in each case, we're getting a matrix like this. Now we can do the totally cool-- oh, sorry, we're not quite there yet. We can also do that not just on functional MRI voxels in the whole brain or in one region-- we can make separate matrices. These are obviously all fake data; I didn't take the trouble to make different matrices for each, right? But we could make different matrices for different regions of interest in the brain. OK? One for each. Voxels here-- what's their pairwise set of similarities across those stimuli? Voxels over here-- what's their pairwise set of similarities? OK, now we can correlate these matrices to each other. So we can say, for example, we had a bunch of people do ratings and give us their behavioral similarities over these stimuli, and then we looked in some region of the brain and got the brain's similarity space in its responses across voxels. How similar are those to each other? OK? So it's like we've moved up a level. Each matrix is a set of correlations between each pair of stimuli. But then, once we have that set of correlations, we can take the whole matrix and correlate it to another matrix. This would be a way of asking, in some region of the brain, how well does the representation in this chunk of brain match people's subjective impression of that similarity space when you ask them about it? Everybody see how that's a way to ask that question? OK. We can also relate functional MRI voxels to neurophysiology, responses across neurons. We can ask, how similar is your FFA's-- let's not take the FFA-- your LO, that likes object shape. How similar is its shape space in your brain, measured with functional MRI, to shape space in this part of the monkey's brain, measured with neurophysiology? That's pretty cosmic, right? We're asking if the monkey sees the world the same way you do, in a sense, for this method, by using these matrices and asking how similar they are across species and methods. Yeah, you can do whatever you like. OK? So you can do garden-variety functional MRI, like we've been talking about in here-- just like the Haxby thing from 2001; that's when it all started, right? Just get a vector across voxels for one condition, a vector across voxels for the other condition, and correlate them. You can do that in responses across neurons. OK? But you can also do more exotic things. You can train a linear classifier on a bunch of voxels and ask how well it can decode the response to pig versus the response to dog, and you can put that number in that cell. OK? So you can do it different ways-- any measure of similarity. Or, very confusingly, there's an increasing trend to talk about dissimilarity, not similarity, by subtracting the r values from one. I find that annoying, but it's all over the literature. And who cares whether it's similarity or dissimilarity? It doesn't really matter. They're both ways of collecting a representational space. Yeah? AUDIENCE: Are there any debates about the method, since this is like a correlation of correlations? NANCY KANWISHER: Oh, a million.
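The pipeline described here-- build a dissimilarity matrix (1 minus r) over condition patterns, then "move up a level" and correlate whole matrices to each other-- can be sketched with toy data. Everything below is fabricated for illustration: random "voxel" and "neuron" patterns with a built-in animate/inanimate structure, and made-up condition names. Only the pipeline itself follows the lecture.

```python
import numpy as np

rng = np.random.default_rng(0)

conditions = ["dog", "cat", "pig", "horse", "table", "chair"]
animate = np.array([1, 1, 1, 1, 0, 0])  # toy structure shared by both measures

def fake_patterns(n_units, noise=0.5):
    """Toy responses: conditions in the same category share a base pattern."""
    base = {0: rng.normal(size=n_units), 1: rng.normal(size=n_units)}
    return np.stack([base[a] + noise * rng.normal(size=n_units) for a in animate])

def rdm(patterns):
    """Representational dissimilarity matrix: 1 minus Pearson r between rows."""
    return 1.0 - np.corrcoef(patterns)

brain_rdm = rdm(fake_patterns(50))    # stand-in for fMRI voxel patterns
monkey_rdm = rdm(fake_patterns(100))  # stand-in for monkey neuron recordings

# "Move up a level": correlate the two matrices themselves, using only the
# upper-triangle cells (the diagonal is zero by construction).
iu = np.triu_indices(len(conditions), k=1)
second_order_r = np.corrcoef(brain_rdm[iu], monkey_rdm[iu])[0, 1]
print(round(second_order_r, 2))  # high: the two similarity spaces share structure
```

The same `rdm` output could just as well be correlated with a behavioral-ratings matrix, or with a hypothesized model matrix, since the only requirement is that the two matrices are built over the same conditions.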
You're supposed to Fisher transform it and do all that garbage, and we're not discussing that in here. I'm just trying to give you the idea. I don't mean to be dismissive; I'm skipping over all of that stuff to just give you the gist of the idea. For purposes in this class, you could just eyeball that and say, oh, they're really-- no, they're not identical. I guess I did switch a few of them. OK, anyway, whatever. For purposes in this class, you could just eyeball them; mathematically, an r value. We're leaving out all the details. Yeah. OK. And of course, we can compare behavior in a person to physiology in a monkey, or behavior in a monkey to physiology in a monkey. And here's one thing you need for the reading-- I hope it didn't already stump you; it's in a tiny part of one of the figures. We can make up a hypothesis of what's represented here. We might say, hey, consider this patch of brain. Maybe it represents the animate-inanimate distinction. In the ideal case, that would mean all it knows is animals versus non-animals. OK? And so that would mean this should be the representational similarity space: if these are all the animals, they're all exactly the same as each other; all the non-animals are the same as each other; but any animal and any non-animal are different. So this is a hypothesized similarity space, our guess of what's represented in a region-- a model of what we think is represented in a region. And we can correlate that to any of these matrices to ask whether our hypothesis of what's in there is right. Does that make sense? OK. So why is this whole thing so totally cool? It enables us to compare representational spaces across regions of interest in the brain-- the FFA to the PPA, do they have similar representational spaces?-- across subject groups, this batch of subjects and that batch of subjects, without having to align voxels. We're not aligning voxels; we've left voxels behind. We're only using these matrices. OK? We can do it across species, across methods, and across hypothesized models of what we think is going on, like that. OK? So more generally, this probes representations in a richer way. We don't need to have just 10 or whatever I put there. If we keep subjects in the scanner long enough, or monkeys in the lab long enough, we can get hundreds of stimuli and really characterize a rich space. And we're looking at not just two discriminations, but lots. The key requirement for representational similarity analysis to be able to do all this cool stuff is the axes need to be the same. So the stimuli that you're getting the similarity of need to be the same in the person doing behavior, the person doing MRI, the monkey doing physiology, the model. If the axes are not the same, then there's no way to correlate the matrices. Make sense? OK. We'll keep coming at this again and again. You'll see it in the paper for tomorrow night, and we'll come at it again in class on Wednesday. OK, so that was all catch-up. So today we are going to talk about language. And let's start by reflecting on what an amazing thing language is. So right now, there's a miraculous thing going on. I'm taking some weird, abstract, hard-to-grasp-- even for me-- kind of ideas someplace in my head, God knows where, somewhere in there, and I'm trying to take those ideas and translate them into this bunch of noises coming out my mouth. That's already pretty astonishing. Like, what does that idea look like? Who the hell knows? How do you take an abstract idea and turn it into a string of sounds? That's wild. Nobody really knows pretty much a damn thing about how that works. Fascinating mystery, right? But then that bunch of noises is going through the air and producing, let's hope, pretty similar ideas in your head. Wow. OK, we do this all day, every day-- big deal. But it is astonishing. It's just astonishing that that works at all. OK, so that's the essence of language. That's why it's so cool. And
let's think about how we're going to think about this. So the first thing to note is language is universally human. All neurologically intact humans have language. There are about 7,000 languages in the world-- sadly, this number's shrinking all the time. They are all richly expressive, including sign languages. There are no kind of impoverished languages that don't capture the full richness of expressible human experience. They're all equally rich. Language is uniquely human. Yes, chimps and parrots can accomplish all kinds of cool things, especially if you train them extensively, but what they have is not anything really like language. And to give you a vivid sense of this, let's look at Chaser the border collie. And what I want you to think about as you look at this little video of Chaser the border collie is, what is the difference between your language abilities and Chaser's? Chaser's pretty damned impressive, but you are more impressive. So watch it and enjoy, and think about how it's different from what you do. [VIDEO PLAYBACK] --of us burst with pride if our dogs can respond to two or three commands. But what if we haven't begun to understand the possibilities of what the animal mind can really do? Our friend, astrophysicist Neil deGrasse Tyson, is host of NOVA scienceNOW, and he brings us big news from the frontier. Welcome Chaser, beloved six-year-old border collie of psychology professor John Pilley. Good girl. She was born to live in the Scottish mountains, chase and herd. Go, go! John has taught Chaser to tend an extremely large, if unconventional, herd of a thousand toys. And she knows the name of every single one of these, I hope. I find this hard to believe, so I test Chaser's memory with a random sampling. Chaser, find Inky. Well, she got one right. Find Seal. Whoa, and that one too. In fact, she got all nine right. But what about a new toy she's never seen or heard the name of? Chaser's never seen Darwin, hasn't even ever heard the name Darwin. So we're going to see if she picks out Darwin by inference. Find Darwin. I have to ask her again. OK, Chaser. Chaser. Chaser, find Darwin. [MUSIC PLAYING] She did it. Chaser's never seen that doll before, yet she settled on the one toy she didn't know by deduction. It's similar to the way children learn language. But how does Chaser's ability compare with other species? Besides us, chimps and bonobos are the animal kingdom's top linguists, capable of learning sign language, but very slowly. They can solve some sophisticated problems, but they don't always pay close attention to humans. Come here. My dog-- my dog wants me to be around. Whereas a bunch of people, they don't need me. They're basically like, hey, you got any food? Can I get any food off of you? They're not interested in making me happy. Since dogs do like to please, humans need to find a way to tap the potential in all of our houses. OK, put it in the tub. And dogs like Chaser are just waiting for us to discover all that they can do. Smart dog. [MUSIC PLAYING] And Neil deGrasse Tyson is here with the astonishing Chaser. Tell me, what have you learned about animal behavior and child behavior? Who would have thought that the animals are capable of this much display of intellect? I think we like thinking of humans as the top of some ladder and don't even imagine that other animals could even approximate what we do. All right, I think we all want to see the demo. Can we do a demo of this? Sure. Goose. OK, can I do this one? You can do this one. Chaser. Chaser, find ABC. ABC! You did it. We thank you, and we want everyone to know that it's a truly remarkable NOVA tonight, with Neil reporting on NOVA scienceNOW on PBS. And to you and your dogs at home, good night. [END PLAYBACK] NANCY KANWISHER: OK, she's a very good girl. And she knows a lot of nouns, right? A thousand nouns, apparently. But what can't she do that you guys can do? Is this language? Yeah? AUDIENCE: It's word identification. It's not language. NANCY KANWISHER: OK. AUDIENCE: Being able to use verbs together-- NANCY KANWISHER: OK, that's good-- verbs and nouns together. What else? Yeah? AUDIENCE: There's some
quantification of things. If there were, like, a bigger ABC and a smaller ABC type of thing, that distinction would have been possible. NANCY KANWISHER: Alex the parrot can do that one. I don't have a video of Alex, and I don't want to get too hung up on this, but some animals can do that kind of stuff. What else? Yeah? AUDIENCE: It's probably closer to, like, sound identification-- like how I can identify the sound of a train or the sound of a car. NANCY KANWISHER: So just some rudimentary thing, like visual form and sound. How about when she found Darwin? AUDIENCE: Sorry, it wasn't that case. Like he said, deduction-- it was just that it wasn't any of the other ones. NANCY KANWISHER: That's right. That's right. But that's pretty impressive, isn't it? Turns out kids use that rule too in learning language. There's a whole set of studies of how kids use rules to try to figure out what people are referring to when they learn novel words. And one of the things that kids use is, if there's a thing here that I don't know, when somebody's saying a sound here I don't know, that thing probably goes with the sound. Yeah? AUDIENCE: Yeah, I would say, like, I took 9.85 last semester, and we talked about, like, an exact experiment where kids were able to learn the words of toys that were, like, not English words-- they were, like, "dax" and stuff. But then when they were given, like, a new object, they would be able to identify it as the different-- NANCY KANWISHER: Exactly. Mutual-- it's called mutual exclusivity. And that's exactly what Chaser's showing here. OK, so pretty impressive, but not fully language, right? More like memorizing a bunch of nouns, plus mutual exclusivity, plus some other stuff, maybe. She certainly can't understand who did what to whom and why. Right? This is not even in the ballpark. This is kind of the essence of what we talk to each other about, this kind of stuff-- all kinds of complicated relationships between different concepts that we communicate in language. So animals-- not just ones taught English, but animals in their natural environments-- communicate in rich and detailed
ways with each other, but usually in each case about a very restricted domain-- you know, what kind of danger is around, what kind of food source is around, those basic kinds of narrow things that are of survival value. Those are the things that animal communication systems usually deal with. In contrast, human languages are open-ended and compositional. Right? Compositional means that we combine words to say new things, things no human being has ever said before. OK? So that you don't see in animals. OK. So what is language, cognitively? That is, what do you have to know to know a language? OK, a bunch of basic things. One is phonology, the sounds of language. We've talked about this a bit in the case of speech perception-- just hearing the difference between a "ba" and a "pa," or seeing the equivalent gesture. American Sign Language is a fully expressive natural language, and there the phonemes are different pieces of hand movements rather than sounds, but they function as phonemes all the same. OK? We talked about a region of the brain that responds very specifically to speech sounds in humans, OK, moving up into the language system. That's just the input system. And by the way, we also talked about the visual word form area, a very recent addition to the input system in language. But that's only a few thousand years old. It's really phonology that's the native form of language, that's been around for tens if not hundreds of thousands of years in human evolution. So, semantics. We need to know what words mean-- that's lexical semantics-- but we also need to know how meaning arises when words go together. OK? And related to how words go together, we need to know about the syntax of a language-- that is, the structure or grammar of a language. All right? And so each language has a set of rules about how you string words together in that language. And usually central to that-- not the only thing, but a central part of that-- is word order. And that whole set of rules for how you string together words following word order rules determines the meaning of the string of words. For example, "shark bites man" is different than "man bites shark." OK? And that just comes out of the syntax-- we know that in English, in this kind of construction, the first word is going to be the agent, the one who's doing the thing, and the third word is going to be the patient, the one who's receiving the doing. OK? And that's just built into your language system; you know that implicitly. OK? There's also the pragmatics of language-- that is, how we understand what somebody actually means when they say something to us, which isn't always just a function of the actual string of words coming out their mouth. OK? So if somebody says, "it will be awesome if you pass the salt," it's not all that awesome to have the salt. It really means, "please pass the salt." Right? The pragmatics of the situation tells you the actual intent. Yeah. OK? And so to do pragmatics involves thinking about the other person's intent-- what are they thinking, what do they want, what's going on in their head-- and using all that background knowledge to constrain what they mean by this particular utterance. OK? So that's just a survey of the main pieces of what we mean by language. But for the next two lectures, we're going to focus on the core, which is syntax and semantics, this stuff in here. And I will sloppily use the word "language" to refer to this stuff, not all the other stuff. And we'll focus really on sentence understanding. OK? So what do we want to know about sentence understanding? Well, the first thing we want to know is, is it even a thing? Right? Is language a thing separate from the rest of thought? OK? Second thing we want to know is, if it is at least something of a thing, does language itself have component structure within it? Are there different parts of the language system that maybe do different things? And if so, what is represented and computed in each of those parts? And third, how do we represent
meaning in the brain? OK? So these are the things we'll address over the next two lectures. And let's start with this question, which will probably take up the bulk of this lecture: is language distinct from the rest of thought? OK? Another way of putting this, a more familiar way, is to ask, what is the relationship between language and thought? Or even more pointedly, could you think without language? Probably every one of you has wondered about that at some point. So take like two or three minutes, talk to your neighbors about this, see if you can figure out whether you can think without language, and then let's pool your insights. Talk. OK. If you guys all nailed it, I'm sure you solved the whole thing, right? People have been talking about this for probably millennia. So what do you guys think? What were some of your reflections on this question? Come on, you guys. Yes? AUDIENCE: I said that, like, I think that you could think without language, because we talked previously about how, like, babies are capable of very complex thought. And he was, like, arguing that there's also this thing that maybe it's kind of like their own language that we don't understand. But I don't think-- NANCY KANWISHER: Not really. If you take three-month-old babies, not really. So, perfect. Absolutely. Babies can think, right? You take 9.85, you'll learn more. They can really think about all kinds of stuff. It's really amazing how much they understand. And at, you know, three to six months, there's little or no language. So there's a beautiful case of thinking without language. Yeah, David? AUDIENCE: On the other side, right? Like, if you don't give a name to something, if you don't give a word to something, then it's hard to really know it. Like, maybe there are 20 different types of the color green, and if you don't decide to call one of them olive or another one khaki green or something like that, then you can't see the difference. NANCY KANWISHER: Yeah, well, I don't know if you'd ever think of the difference. OK, let's think about this. Do you think you could see the difference? Suppose I held up, you know, an olive patch and a khaki patch to you, and for whatever reason you had been raised with deprivation of the words "olive" and "khaki"-- AUDIENCE: But somehow, it's not about just the perception question. It's about remembering. NANCY KANWISHER: Yeah, bingo. Bingo. So that's roughly what the literature shows. Anya, help me out here-- I forgot to look this up. The literature, I think, shows that perceptually you can discriminate them just fine; it doesn't make a damn bit of difference if you have words for it. But if you have to remember it-- sorry? AUDIENCE: Faster. NANCY KANWISHER: Faster, OK. But accuracy, or d-prime, I don't think is different. Maybe a little bit. OK, oops, caught-- I meant to look this up; I knew this was going to come up. OK, write me an email to look this up and help me find the relevant stuff. Anyway, it doesn't make a huge difference perceptually, but it does if you have to remember it for later. Yeah? AUDIENCE: I was actually going to say, because I'm actually reproducing the experiment that found that difference-- NANCY KANWISHER: There was a difference in-- wait, in perception or memory? AUDIENCE: So they found that-- I believe it was-- because there's been a long history with this; they find one thing-- and that's partly why-- it was, like, a difference in reaction time. Interestingly enough, they found that if they introduced interference in the, like, linguistic system, then that difference went away. So that's evidence that the language is causing the difference. And that's in a perceptual discrimination. NANCY KANWISHER: OK. AUDIENCE: Well, behavioral. NANCY KANWISHER: Well, yeah, effects often are. Yeah? AUDIENCE: As well, I remember one of the first neuroscience talks that I went to in my college was a woman who had had a terrible stroke, and she suffered from aphasia, and week after week was relearning speaking and rebuilt all the language. And I remember the question that I asked is, you know, you
have this really terrible thing right you know did you like what did your inner voice sound like and she said well i didn't know you have one and we've walked into a thing and then she said well i must have thought into images and feelings and the interesting thing that you know i experienced when i was reading my talk was that you know the more english i learned the more my thoughts comes with grammar so i still you know could have these thoughts but they were formulated in this way than they were when i had the structured language okay that's great so we're going to learn more about all of that absolutely okay very good um so cool question not obvious let's see what the data say so first of all you guys uh talked about uh babies and how they can think but animals can think too maybe not fully as richly as we can but they can think in all kinds of subtle rich ways and animals don't have language and so that's another case animals and infants and i'm mentioning numerosity because these are things we happen to have mentioned in here remember the approximate number system animals are great at that very young infants are great at that when they don't have language at all also by the way people whose language do not have any number words whatsoever can do approximate numerosity so here's a cool study from ted gibson's lab a few years ago they went down into remote parts of the amazon to study this group of people the para ha here they are in their canoe they are a hunter-gatherer tribe of just a few hundred their language is as far as linguists can tell unrelated to anyone else and it has no number words so there's a whole there's a whole dispute about that but the current view is there really no number words at all not even for zero or one okay so how do they do at approximate magnitude well let's see so here is the testing session down in the amazon and this is the experimenter lining up a bunch of i think they're batteries and this guy's asked to match the number of 
balloons to the number of batteries and he has to do it aligned this way so he can't just put them one next to the other if you let him he'll put them one next to the other but this is designed to test it better and he puts down four balloons bingo very good okay what no number of words in his language okay what about um this case i hate people oh the plot is thickening with a lot of thread he laughs he thinks that's pretty funny but watch this valiant goes ahead really well okay so he i think he gave nine for ten or something like that anyway um if uh if if i had any of you guys do this task and i prevented you from counting by having you do verbal shadowing or something else to tie up your language system you would do exactly the same as this guy does okay so the approximate number system doesn't require language doesn't require number words in your language to get the concept and it doesn't require use of language to do the task sorry he actually saw a man put all of him he saw yeah yeah just like you should yeah just just that i mean that's the actual experiment being conducted right there okay okay okay so we've just argued that at least the approximate number system is present in animals who don't have a number of words infants who don't and people who don't have adults who don't have number words what about other aspects of thought and what can we learn from studying brain disorders as isabelle mentioned a moment ago very rich source okay so here's the question we're considering we're taking language and thought or cognition and we're asking whether they're totally separate in the mind and brain or whether they're totally the same thing or whether there's some relationship but they're somewhat different okay so that's a question what do we learn from brain disorders well let's start for with developmental disorders and there are unfortunately a large number of these for example there are language savants people with down syndrome williams syndrome turner 
syndrome. These are all developmental disorders in which people have very low IQs but, notably, in each of these cases, very good language. Perhaps the most striking is Williams syndrome. These kids are remarkable. They have very low IQs. They can't do the most basic spatial reasoning tasks. They can't cross the street safely. They can't live independently at all. And yet they're highly social, and their language is almost indistinguishable from any of yours. OK, not quite-- if you test them subtly, you can find some differences-- but it is rich and complex. And it's bizarre, because you'd think, if your thoughts are so impoverished because your IQ is low, how could you have rich language? But that's the weird thing about Williams syndrome. Their language is extremely rich, and in fact poetic and quite beautiful and expressive. So that's really surprising, and it suggests that you can have quite severely impaired cognition and very good language. So that's the first crack-- these things are more separate than you'd guess. Actually, I find this one more surprising than all the others. But on to cases of brain damage-- which was the first mental function localized in the brain? This is historically important. Way back in 1861, Paul Broca stood up in front of the Anthropology Society of Paris and announced that the left frontal lobe was the seat of speech. And this was on the basis of his patient Tan, who had a big, nasty lesion right there, in what became known as Broca's area. Tan was his name because, after that lesion, that was all he could say. So Broca-- this is back when the mainstream view was very much against localization of function in the brain. There were people like Franz Joseph Gall going around saying that different parts of the brain did very different things, but Gall was kind of a nut, and he was not taken seriously by the academic elite. Whereas Broca was a fancy member of the French academic societies, a muckety-muck. And when he
announced that the left frontal lobe is the seat of speech, everybody had to pay attention. So it was big stuff. Importantly, Broca noted that Tan wasn't globally impaired at thinking-- that Tan could do all kinds of things even though he could not speak. So he was already onto this critical idea way back in 1861. And he's just the most famous in that group-- there were a bunch of people in the decades before him who were reporting similar kinds of dissociations. OK, so what would it be like to have intact thought despite impaired language? Isabelle mentioned asking somebody who had a stroke. Here's another case. This is this guy here, Tom Lubbock, who died a few years ago from a brain tumor in his temporal lobe that destroyed most of his language-- but it destroyed it gradually. And this guy was a writer. He was an art critic for a major English paper. And as he started to lose language, he wrote about it, and he wrote about it very beautifully. He said, "My language to describe things in the world is very small, limited. My thoughts, when I look at the world, are vast, limitless, and normal, same as they ever were. My experience of the world is not made less by lack of language but is essentially unchanged." So that's a very powerful and surprising piece of writing. It's a little bit mysterious, because here's this guy writing beautifully and telling us his language is impaired-- so his idea of language impairment may not be mine. I wish I could write that well. Nonetheless, he's clearly reflecting on what is a very big loss of his previous language ability-- I'm sure it was very painstaking to write these sentences-- and he's still telling us that even though he's lost a lot of language, it has not changed his experience. OK, so that's just one subjective impression, but it argues against the extreme view that they're the same thing, right? It leaves a lot of slop, though. Yes? STUDENT: Because he had a lifetime of speaking and
learning about the world before? NANCY KANWISHER: Yes, a very important point, absolutely. So this is a case of somebody who had a lesion in midlife-- 40, 50, something like that. He had a whole lifetime of using language to learn and bootstrap all of cognition. So absolutely, we have to separate two different questions. Do you need language to become a normal, intelligent, functional human being-- do you need it throughout development? Or, once you've developed, do you still need it to think? And those are two very different questions. And in fact, absolutely, you need language to develop. If you reflect for a moment on all the things you know-- take a quick mental inventory, survey all the things you know; it's a lot of things, right?-- almost all of those you learned because somebody told you. Most of what we know, we learn from language. Maybe you read about it, but that's somebody telling you in a different way, right? OK, so language is crucial for the development of cognition and for learning, absolutely. But now we're asking a different question-- whether you need it, whether it's the same thing, in adulthood. So this guy is a little bit complicated, because he obviously still has a lot of language left. Let's consider cases of people who have essentially no language due to brain damage. This is known as global aphasia, and Rosemary Varley in England has been studying a group of three people-- I think she's got a few more, but here are her three main ones-- who have global aphasia. She's been studying them for a few years. And-- sorry, it doesn't show here at all; sorry about this lousy projector; it shows on my screen-- these are big, nasty lesions taking up a lot of the left hemisphere and basically knocking out essentially all the language regions in these three individuals. And here's their performance on a bunch of different language tasks. They have to look at a picture and name it. They have to understand reversible sentences-- that's like "boy kiss girl" versus "girl kiss
boy"-- they need to know who did the kissing and who got kissed. And a whole bunch of questions like that. And they are at chance on every one of these. So these are not just people who can't speak-- they're people who can't speak or understand language pretty much at all. It's as close as we can get to a case of a person who has no language ability. OK, so can these people think? Rosemary Varley has done paper after paper in which she finds clever ways to communicate tasks to these people, to find out what kind of thinking they're capable of. Here's one. You have to order this series of pictures. Look at it for a second, and you can figure out that it goes basically from right to left. So can people with global aphasia do this task? Yes, they're perfect at it-- no problem whatsoever. Now, you might dispute-- is that cause and effect? Is it knowledge of sequences? Are they different? I don't know, but anyway, it's a pretty rich task. Here's another task. Look at these pictures and tell which of them are things you know and which of them are things you have never seen before, that I drew. Takes a moment, but you can figure it out. The top three things are real things, and those three things are things I drew. So we can ask, does a person with global aphasia know the difference? Basically, do you have to be able to name things to know the difference between what's a real thing and what's not? Here's another task-- which of these is the plausible event? That's more complicated, because here we just need to know, is that a real thing that I know, whereas here we need to know who's doing what to whom and does it make sense. So it taps world knowledge-- figuring out who's doing what to whom-- which many people think is at the core of language. So how do people with global aphasia do? Perfectly at both of these things-- well, not perfectly, but the same as control subjects. Yeah? STUDENT: I'm just confused-- how do you get the question
across-- what they need to do? NANCY KANWISHER: I don't know exactly, but you do something like-- for example, you ever play charades? Like that. STUDENT: So could someone argue that the actions you're doing are some kind of form of language? NANCY KANWISHER: They're communication; they're not language. OK, so when we say language, we really mean language-- not necessarily noises coming out of the mouth, because American Sign Language counts. And we didn't have time to put that in this lecture, which is a damn shame, because it really does count in every way, and it's very interesting and uses similar neural structures and all that stuff. But language is different than communication. There are all kinds of ways of communicating. Yeah? STUDENT: How old are these patients, again? NANCY KANWISHER: I don't know exactly, but-- it's almost always strokes-- they're probably 40 to 60. STUDENT: So it developed-- it's not a sort of infant-- NANCY KANWISHER: No, no, no. These are all people who had brain damage in midlife or later in life. OK, so that's pretty impressive. So basically, these people with global aphasia are able to do every single task that Rosemary Varley has tested them on. I just showed you causality, nonverbal meaning. Here's a cool one-- remember reorientation? Which may well be on the final exam. To remind you, I did most of a lecture on this thing about reorientation. Remember, rats and infants-- if you hide food there and put them in this box, they later go 50-50 to the two corners, even though that wall should disambiguate which is exactly the correct corner. They should always go here. They have the knowledge that it's there, but they go 50-50.
And remember, I said that Liz Spelke has this interesting argument that the key thing you need to be able to solve that task is language, right? Because in fact, if you test adults and you tie up their language system, they behave like infants and rats, but if you don't tie up their language system, they can do the task-- which is pretty suggestive that language is the crux of the matter. However, the global aphasics do this test just fine. So now we have to go to Minyoung's hypothesis, which is that maybe the role of language in reorientation is learning about that whole spatial system during childhood-- which the global aphasics could do-- not maintaining the ability once you've gained it. All right. I won't give you all the data on this, but they can do arithmetic tasks, logic tasks, algebra tasks. They appreciate music. They can think about what other people are thinking. So all the kinds of high-level, abstract, quintessentially human abilities that we are impressed with ourselves for being able to do, these people can do without language. So language and thought are not the same thing. You can still think in lots of different ways even after you lose language. On the other hand, as has already been brought up, global aphasics had language during development. So saying that you don't need it as an adult is not the same as saying you don't need it during development. You absolutely do need it during development, because it's the key way we learn about the world. For example, there are studies from Rebecca Saxe's lab showing that deaf kids who learn language later-- for example, if they're born not to deaf parents but to hearing parents who don't cotton on to the fact that it's important for them to learn ASL early, and hence they don't get language until later-- those kids are not as good at understanding what other people are thinking, something that we usually learn about through language. OK. Further, even though I'm making a
big deal about how you can think without language, I'm not saying that language is irrelevant to thinking. Every time I write a grant proposal, I think, oh god, I have all these ideas in my head, and I have to waste weeks and weeks and weeks putting them all down on paper to try to get money to fund my habit. And then I get to, like, sentence three, and I suddenly realize, oh no, I haven't been thinking about this clearly at all. So this is my very informal introspection on the role of language in my own thinking-- even when I think there's a clear thought. The same thing happens when I go to prepare a lecture. It's like, oh yeah, I know this stuff, I've put together some slides-- and then it's slide two, and no, I don't really know this stuff. So there is some role for language in thinking, and I'll give you one example here. One of the many things that language can do is make information more salient. So right now, close your eyes. Everyone, close your eyes-- I mean it; I can see if they're open. While keeping your eyes closed, point south. You may not exactly know where south is, but make a good guess. Point-- use your whole arm, so everyone can see when they open their eyes. OK, keep pointing, but now you can open your eyes, and you can look around and see where everyone else is pointing. You guys are not bad-- not bad-- but we've got some over here a little turned around. It's roughly over there. Hang on-- wait a second-- well, I don't know-- yeah, it's over there, OK. So your vector average was closer to the true thing than a random vector, but not so hot. If your language forced you to keep track of this, you'd be better at it. And we know that from the case of the people of Pormpuraaw, these guys here who live in Australia-- Aboriginal people. They spend a lot of time going around in the remote outback of Australia, where they need to know where they are, and where who is
going where and when is really of the essence in their lives and in their social interactions. So when they run into each other, they don't say, "Hi, how are you?" Instead they say, "Which way are you going?" And a typical answer might be, "North-northwest in the middle distance. How about you?" They don't talk about things being left or right or behind them-- reference frames that have to do with the person's own body, which are, frankly, really stupid reference frames, because I can say this thing is to the left, and then I turn, and now it's not to the left anymore. Like, how stupid is that, right? These guys have a much better system. They would rather say, "Oh, you have a bug on your southeast leg." So these guys, people who speak this language, have to be aware of absolute compass directions all the time, just to speak. And so they're oriented all the time, unlike us. And in that sense, their language makes salient certain kinds of information. It's not that we can't think about direction; it's just that most of the time, we're not aware, because our language doesn't force us to think about it. OK, so, interim summary. We've been asking this question of whether thought is separate from, and possible without, language. Before you guys take off-- you wrote it on the board? This board right here? Awesome. OK, you guys need to tell me when it's time to take the quiz. You're going to have seven minutes, because there are seven questions, and so at 12:18--
let me know, and I will turn the board around. OK, 12:17, because it'll take me a minute to turn it around. All right, thank you. Take notes; tell me about that talk. OK, so here's the question we've been engaging: Is thought separate from, and possible without, language? And the neuropsych patient literature says yes, absolutely, they're totally separate-- global aphasics have many forms of thought without language. So given that, what would you predict from functional MRI? If I told you-- which is true-- that these are the brain regions that are active during language tasks, for example, when you understand the meaning of a sentence, what would you predict? Should they be activated only by language, not by non-linguistic tasks? What do you think? Take a moment to think about it. These are the regions that are engaged when you understand the meaning of a sentence. Would you expect them to be engaged, based on what I've just told you, when you do mental arithmetic, when you think about spatial orientations, when you appreciate music? No, right? If they're separate, they're separate. Those things should go on in different brain regions. Everybody have that intuition? No, you don't have that intuition? STUDENT: I mean, you think about things in terms of words, even as a mental crutch, even if you didn't have to. NANCY KANWISHER: OK, fair enough, fair enough. So it doesn't nail this case. It could well be that you have separate systems for all those other things but you still lean on the language system-- not necessarily, but you use it sometimes. In fact, there's evidence for that, which we won't get to today. But the initial thought is, you don't need to activate it, right? OK, well, here's the surprise. Up until recently, pretty much the whole brain imaging literature said that language overlaps with all of these things in the brain-- that the activations overlap in the brain, that they're all the same thing. That's been the received story for 20 years or so of brain imaging, and that just does not fit with the patient literature. So
we have a conundrum here. Just a few examples: Stanislas Dehaene says arithmetic recruits networks involved in word-association processes. People who study music say regions such as Broca's area and Wernicke's area, which have been considered specific to language, are also activated by certain aspects of music; thus the idea of language specificity has been called into question. And on and on-- there are a million of these; I just put a few of them up there. OK, so what's going on? How are we going to resolve this contradiction? On the one hand, the patient literature suggests that language is separate from the rest of thought, and on the other hand, most of the neuroimaging literature says that if you look at those language regions, you find them activated in all these other kinds of things. One hypothesis is David's-- that they're activated, but not essentially so. But there's another hypothesis, and that is that there's a methodological flaw in most of the prior research. What is that methodological flaw? It's an inappropriate use of something called a group analysis. I've alluded to this a few times briefly, but let me do it for real now. Let me first say, it's not that a group analysis with functional MRI is an evil thing that should never be done. Group analyses have uses. But particularly for the question of asking whether common regions of the brain are engaged in two different tasks, it is not a good method, for the following reason. So first, let's say what a group analysis with functional MRI is. And again, I'm going to be very sketchy with this, because this is not an actual hands-on methods class; I'm just trying to get you to understand the gist of the methods. You take a bunch of scanned brains, and you align them in a common space as best you can. You can't do it perfectly, because brains are anatomically different from one person to the next, but you do your best to align them. Then you do an analysis across those aligned brains, and you ask what is
consistent across this group of subjects. That's a very useful question to ask. If we want to know, overall, what are the brain regions that are consistently activated when you understand language across this whole group of subjects, that's a good use of a group analysis. You'll find that picture I just showed you before, with stuff going down the left temporal lobe and a bunch of left frontal lobe stuff, and that will be a very blurry picture of the regions that are most consistent across subjects. Yes? STUDENT: Do you align them anatomically, like sulcus to sulcus, or do you align them by function? NANCY KANWISHER: OK, so therein lies a universe of options. What I'm talking about now as a group analysis is aligning them anatomically-- and that's where the problem comes in, and where we're going to go from that is, you need to align them functionally. If you just align them anatomically, then the following can happen. You do a standard group analysis, and you say, for example, let's do a language task, an arithmetic task, and a music task. And let's suppose you find this: basically, the Broca's-area vicinity is activated in an overlapping fashion in all three. Each of those is based on an analysis of 12 or 20 subjects, aligned as best we can. That's basically what the literature shows-- lots of stuff like that. But here's the problem. You can get that result in a group analysis even if the actual data look like this in each individual subject: no overlap at all in any subject, but those regions are in slightly different locations, and so if you average across this, you get that. Everybody see the problem? So it's not that it's a bad idea to do a group analysis-- it's a nice initial blurry picture of the approximate consistent locations in the brain for a given task. The problem is when you say, oh, there's overlap, therefore it's the same thing, because you can get this result even if there's no overlap in any subject at all. So the whole literature did this for 20 years and made all this talk
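The averaging artifact being described here can be shown in a tiny simulation. This is purely illustrative-- the voxel counts, patch sizes, and threshold are all invented-- but it captures the logic: every simulated subject has strictly non-overlapping "language" and "arithmetic" patches along a 1-D strip of cortex, jittered in location across subjects, yet the thresholded group-average maps still overlap.

```python
# Toy demonstration of the group-analysis overlap artifact:
# per-subject activations never overlap, but the group average does.
import numpy as np

n_subjects, n_voxels = 20, 100
lang_group = np.zeros(n_voxels)
math_group = np.zeros(n_voxels)
overlap_in_any_subject = False

for s in range(n_subjects):
    center = 40 + (s % 10)              # anatomical jitter across subjects
    lang = np.zeros(n_voxels)
    math = np.zeros(n_voxels)
    lang[center:center + 5] = 1.0       # "language" patch: 5 voxels
    math[center + 5:center + 10] = 1.0  # "arithmetic" patch: adjacent, disjoint
    overlap_in_any_subject |= bool(np.any((lang > 0) & (math > 0)))
    lang_group += lang
    math_group += math

thresh = 0.2 * n_subjects               # voxel "survives" if >4/20 subjects active
both = (lang_group > thresh) & (math_group > thresh)
print(overlap_in_any_subject)  # False: no individual subject overlaps
print(int(both.sum()) > 0)     # True: the group-average maps still "overlap"
```

The fix sketched in the rest of the lecture is to stop averaging anatomically and instead identify the regions functionally within each subject.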
about how language is on top of everything else in the brain. And for a long time, I was sitting by the sidelines going, oh my god. And then eventually Ev Fedorenko came along, and she knew about language, and I said, let's figure it out-- maybe they're right, maybe that's true, or maybe it's like this. Let's find out. OK, so how do we do that? What you do is exactly what was suggested a moment ago: you align them not anatomically but functionally. That's the whole reason to use functional regions of interest. We've encountered this before, when I was carrying on about why we do functional localizers with the fusiform face area. This is the same deal. It's just that that insight started in the back of the head and hasn't reached the front of the head yet-- or it's about here. So some people get it here, and the farther forward you go, the fewer people realize this is an issue, which is really ridiculous, because it gets more and more important as you go this way: some stuff is actually aligned in the back, and nothing is aligned in the front. Anyway, what do you do? You do just what we did with the FFA and all the other regions. In each subject individually, you identify those language regions. You run some localizer; it's like, OK, I've got this and that and that. And then once you identify them, you can ask, OK, does that region in that subject show activation for arithmetic? No? That's next door? Right, et cetera. Everybody got this? This is really important-- I guess. Just because I'm obsessed with it-- maybe it's not; I honestly don't know if it's globally important or if it's just my personal obsession, but you need to know it for this course. We'll leave it at that. So this is standard in people who study vision, and it's less standard in people who work in other domains, but they're slowly cottoning on. OK, so how do we identify language regions in each subject individually? There are lots of possible ways to do this, but here's the way I'm going to show you-- the one that's
been used a bunch by Fedorenko and others. So we start by saying, OK, let's find candidate brain regions that respond to language-- which, I told you, by language I mean sentence understanding for present purposes. So if we want to look at sentence understanding, we've got to start with sentence understanding. If you look at the screen, you'll see some of the stimuli we use. The subject is lying in the scanner, and they see that. And then we can either give them a task or not-- we'll talk about that in a second. What are we going to compare it to? Well, there are lots and lots of different things we could compare it to that control for different things, but we started off with this-- if you read this here. The idea is, it's visually similar-- you can hear the sounds in your head; you can pronounce those things to yourself-- but there's really no syntax and no meaning. Not perfect, but a first pass. OK, so when you do that, you get activations that look like this. Here are four different subjects, and you can see they're very systematic-- see these three blobs in each subject, and a bunch of stuff in the temporal lobe like that in each subject. They're quite systematic, but absolutely not identical. All right. So now what do you do next? Well, we just made this up-- sentences versus non-word strings. Who says that's a good thing to do? So the next thing you do is, you've got to validate your localizer task, to make sure it isn't just trivial in some sense. The first question is, is it reliable? So here's session one-- three different subjects' activations. We'll just scan them again. There's a lot of talk about fancy statistics, blah blah blah-- just scan them again. Wow, look how similar-- these two little hot spots, this elongated one. I mean, it's remarkable-- extremely reliable within a subject, and yet somewhat different across subjects. So, check one: reliable. More interestingly, does it generalize across task and
presentation modality? Before, we just had people reading sentences, and I keep saying reading is not the native form of language. So let's replicate that reading-- and now we're adding a memory task, so at the end of each string, a little probe comes up, and you have to say whether that word or non-word was in the previous sequence-- and let's compare that to just listening to the sentences. Wow, look how similar. So that tells us that we're not studying reading or speech; we're studying language, after those things converge. Those regions don't care if you saw a word or heard the word. They just care if you're representing the meaning of a sentence. Everybody with me on why that's important? All right, check, check. Does it generalize across languages? Suppose you're bilingual and speak two different languages. Here are two subjects who speak both English and Spanish. Wow, look how similar. So it's really language in general, not English or Spanish or a particular language. Does it generalize across materials? So we could have reading sentences versus non-words, which we've been talking about here, with two different runs in one subject. Or we're going to have subjects listening to speech versus degraded speech, like this. Here's the speech case: "During my days of house arrest, it felt as though I were no longer part of the real world." Versus this-- [MUSIC]-- so, very degraded. You can't understand what's being said, but it has similar prosody and some similar structure. And the point is, you get very similar activations with those very different kinds of contrasts. So now we have really validated this thing. It checks out in all the ways it should: it doesn't care about modality, it does care about meaning, and it's highly reliable. So now we can put it to use. Now we can ask, what does each of those regions do? All right, so to do that, in each participant we find those regions with this localizer. Now, let me just step back a second. There's
nothing magic about this localizer per se. When you want to study something, you use common sense-- you try something, you validate it. It may turn out later that, oh, the thing we thought we were identifying language with-- this localizer-- has got this other stuff in it, and then maybe you refine your localizer and do something different. So it's not that this is the only possible way; it was just a sensible approach. OK, so you use this to find those regions. Here they are in these four subjects. And now you have to figure out some way to say that this thing corresponds to that, to that, to that. There's a whole bunch of math that was invented to do that, but you can basically see it with your eyeballs that those guys roughly correspond and those guys roughly correspond-- the math is just a way to do that. And then once you've found that region, you can measure its response in a whole bunch of new conditions and ask what it does. And in particular-- so this is different from a group analysis, where you don't identify those regions; you just choose regions anatomically. So if we just aligned them and said, OK, that's a region-- well, we don't have much of the language stuff there: not much there, a lot there, not much there. That's not great. Then we take another one, and we define this. OK, this is a problem: no language stuff here, lots of language stuff there, none, lots. Not good. Everybody see how that's a problem? I guess I'm flogging this; we can move on now. But the main problems with group analyses are these. You might fail to detect neural activity that's actually there, because it doesn't align well enough across subjects, and so it doesn't reach threshold-- it's not consistent. But for present purposes, the more relevant problem is, you might fail to distinguish between two different functions, because they variably coexist within that region or not. OK, so we're not doing that. For present purposes, instead, we're going to
now go back to the conundrum of why the patient studies suggest that language is distinct from the rest of thought, but the past functional MRI studies suggest that language overlaps with other functions in the brain. And we're going to consider the hypothesis that if you study individual brains and localize those regions individually in each subject, then the story might be different. And it is. So here's the study that Fedorenko and I did a few years ago. We came up with seven different tasks. I won't bore you with all the details; it doesn't really matter. We just had lots of stuff-- arithmetic, spatial working memory, various cognitive control tasks, working memory tasks, music, all kinds of stuff-- focusing on things that other people had said overlap with language in the brain. And the first thing is, you've got to make sure those other tasks actually produce activations, because it's easy to make up a task and have it not do much, and then that's not very interesting. So yes, each one of those tasks produces lots of activation. Look at all that red stuff-- looks like a bunch of pizzas, right? OK, so they produce good activations. Now the question is, do those activations overlap with the language regions? Let's consider two of them. This is basically Wernicke's area and Broca's area, two well-known language regions, identified individually in each subject. And now, averaging the response over all the conditions, here's the response when subjects read sentences and non-word strings. That's how we defined those regions-- but this is in data that wasn't actually used to define those regions; we held out some data just to cross-validate it. Now the question is, how do those regions respond to all of these other things? They don't, pretty much at all. So notice what's happened here. The prior literature shows massive overlap between language and all these other things. In our data, when you identify those language regions in each
subject individually and measure the magnitude of response to those other things, they don't respond. So this shows stunning specificity of the language regions, consistent with the picture that comes from the patient literature, from studies of brain damage. Language really is separate in the brain from all of these things. Everybody get that picture? And the reason the literature had it wrong is they were mushing brains together, blurring the hell out of their data, and drawing wrong conclusions. OK, I'm speeding up because I don't want to run out of time. So we started with these questions. Is language distinct from the rest of thought? I'm saying yes. Language may be necessary to learn to think-- and it is, indeed-- but the evidence from the neurological patients is pretty powerful: global aphasics with pretty much no language can think in myriad sophisticated ways. And when you do your functional MRI studies right, you find that the language regions in the brain are in fact not active during non-linguistic thinking. Make sense? Questions? Wow, I finished on time. |
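The subject-specific functional-ROI logic the lecture just walked through-- define each subject's language voxels from a localizer contrast in one run, then measure responses to other conditions in held-out data-- can be sketched in a few lines. This is not the actual Fedorenko analysis pipeline; the voxel counts, amplitudes, and noise level are all invented to make the logic visible.

```python
# Toy sketch of a functional-ROI (fROI) analysis with held-out data.
import numpy as np

def define_froi(localizer_contrast, n_top=10):
    """Indices of the voxels most responsive to the localizer contrast
    (here, sentences > non-word strings)."""
    return np.argsort(localizer_contrast)[-n_top:]

def roi_response(condition_map, roi_voxels):
    """Mean response of one condition inside a subject's fROI."""
    return float(condition_map[roi_voxels].mean())

rng = np.random.default_rng(1)
n_voxels = 200

def simulate_run(lang_amp, math_amp):
    """One simulated run: language voxels 50-59, arithmetic voxels 70-79,
    plus Gaussian noise. Purely illustrative numbers."""
    data = rng.normal(0.0, 0.1, n_voxels)
    data[50:60] += lang_amp
    data[70:80] += math_amp
    return data

localizer = simulate_run(2.0, 0.0)          # run 1: sentences minus non-words
sentences_heldout = simulate_run(2.0, 0.0)  # run 2: held out, not used to define ROI
arithmetic_heldout = simulate_run(0.0, 2.0) # run 2: a non-linguistic task

roi = define_froi(localizer)
print(roi_response(sentences_heldout, roi))   # high (near the language amplitude)
print(roi_response(arithmetic_heldout, roi))  # near zero: no response to arithmetic
```

Because the ROI is picked in each subject's own data and the response is measured in held-out runs, the selectivity estimate is not circular, and the anatomical jitter that fools group averaging never enters the analysis.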
MIT 9.13 The Human Brain, Spring 2019. Lecture 15: Hearing and Speech.

[SQUEAKING] [RUSTLING] [CLICKING] NANCY KANWISHER: We are turning from our various other topics to talk about hearing today. And let's start by thinking about all the cool stuff that you can do just by listening. So just by listening, you can identify the scene that you're in and what's going on in it, like for example, this. [AUDIO PLAYBACK] [END PLAYBACK] OK, so you know what kind of room you're in and roughly what's going on, just from that little bit of sound. You can localize events and people and objects. So close your eyes, everyone. Keep them closed. And if you just listen to me talking, it's really very vivid, isn't it, exactly how obvious it is where I am. And I will refrain from the temptation of coming up and speaking in somebody's ears because it's just too creepy. OK, you can open your eyes. It's very vivid. Just from listening, you know where the sound source is. You can recognize sound sources, so for example, sounds like this-- [GLASS BREAKING] You know what happened there. It's a whole vivid event just unfolded there in a whole second and a half, or a random series of sounds like this. [AUDIO PLAYBACK] - It's supposed to either rain or snow. [RANDOM SOUNDS] - Hannah is good at compromising. [RANDOM SOUNDS] [LAUGHTER] [END PLAYBACK] NANCY KANWISHER: Anyway, every one of those sounds you immediately recognize. You know exactly what it is. And that's environmental sounds, things that happen outdoors, speech, what is being said, voices, who is saying it. If you don't know the person, if they're male or female, young or old, much like faces-- if you know them, you'll recognize them pretty fast. You can selectively attend to one sound among others. Like if you had a little, hidden earphone that I didn't see, and you wanted to listen to your favorite podcast, you could listen to that occasionally when I was getting boring. And then you could turn back and listen to me.
And you could just selectively choose which of those different audio inputs to listen to. And we'll talk more in a moment about this classic problem in hearing, which is known as the "cocktail party effect." I guess it was named in the '50s when cocktail parties were big. And it consists in the fact that when there are multiple sound sources, such as many people talking in a room, you can tune in one channel and then tune in another channel. And you can just selectively attend to one of many sound sources, even though those sound sources are massively overlapping on top of each other in the input. And it's a big computational challenge, as we'll talk about shortly, to do that. You can enjoy music. And you can determine what things are made of. So close your eyes and I'm going to drop things on the table. Don't look. I'm going to do various things and you're going to identify them. So let's see-- don't open your eyes. See if you can tell what's being dropped on the table, or at least what it's made of. Close your eyes. That's cheating. Wood, exactly. Very good. OK, what is this? Keep your eyes closed. What is this made of? - Plastic. NANCY KANWISHER: Yeah, good. Keep your eyes closed. What is this made of? STUDENT: [INAUDIBLE] NANCY KANWISHER: Yeah. OK, keep your eyes closed. What's this made of? STUDENT: [INAUDIBLE]. NANCY KANWISHER: Awesome. OK, you can open your eyes. Perfect. You guys are awesome. I just dropped these objects that I found from my kitchen this morning and you guys could tell what they're made of. That's amazing. OK, all of this that you guys just did happens from the simplest possible signal. We'll talk about what that signal is exactly in a moment, but it's just sound compression coming through the air. And it tells you all this rich stuff about your environment. So the question is, how do we do that? And the first question is, how do we start to think about how hearing works, how you're able to do all of that? 
And you guys know we start with computational theory-- considering what the inputs are, what the outputs are, the physics of sound, what would be involved if we tried to code up a machine to take those audio input and deliver the output that you guys all just delivered with no trouble whatsoever. What cues are in the stimulus? What are the key computational challenges? And what makes those aspects of hearing challenging? And then after we do all that stuff at the level of computational theory, we can, of course, study hearing in other ways, like studying it behaviorally. What can people do and not do? What's hard? What's less hard? And we can measure neural responses. So we'll talk about all of that. But let's start with a little more on what sound is. So sound is just a single univariate signal coming into the ears. We'll say more about that in a second, but it's really, really simple. And from that, you get all this rich experience. And so the question is, what goes on in that magic box in the middle to enable you to extract this kind of information from this really simple signal? So let's start with what is sound. Sound is just a set of longitudinal compressions and decompressions of the air coming from the source into your ear. So these waves travel from the source to the ear in little waves of compression where the air is just compressed, and rarefaction where the air is spread out. And just to give you a sense of how physical sound is, there's a silly video here. It's a speaker in a sink with a bunch of paint. And you can just see that the movement of the speaker-- normally, it makes those compressions and rarefactions of air, but if you stick paint on it. It's going to shove the paint up in the air, too, just to show you how physical it is. 
There's something called Schlieren photography, which is totally cool, and which is a way to visualize those compressions of the air to show you what's-- [VIDEO PLAYBACK] [INTERPOSING VOICES] - --use it to study aerodynamic flow. And sound-- well, that's just another change in air density, a traveling compression wave. So Schlieren visualization, along with a high-speed camera, can be used to see it as well. Here's a book landing on a table, the end of a towel being snapped, a firecracker, an AK-47, and of course, a clap. [END PLAYBACK] NANCY KANWISHER: OK, so just compressions of air traveling from the source to your ears-- that's all it is. So natural sounds happen at lots of different frequencies. And one of the ways we describe sounds is by looking at those frequencies. So there's an awesome website that is here on your slides. You can play with it offline. But meanwhile, we're going to play with it a little bit right now because it is so cool. So what we're going to do is we're going to look at spectrograms of different sounds. Let's start with a person whistling. [WHISTLING] OK, so frequency is on this axis, higher frequencies up here, lower frequencies there. And it's going by in time. [WHISTLING] So whistling is unusual in that it's pretty much a single frequency at a time. Many natural sounds are not like that. So you see not single, but a small, narrow band of frequencies at a time. OK, that's enough. [WHISTLING] Stop. All right. OK. [TROMBONE PLAYING] OK, so you see how with the trombone, there were many different bands of frequencies. In contrast-- this is me talking, by the way. We'll talk about that in a second. But with the whistling, you saw just a single band at a time. With the trombone, it has all of these harmonics, these parallel lines of multiples of frequencies. Those are called "pitched sounds." Sounds that have a pitch where you could sing back the tune have those bands of frequencies like that. And so that's what you see with the trombone. 
You see a little bit of this with natural speech here. You can see sets of bands, but mostly, you see vertical stripes. That's because I'm talking fast and mostly what's coming out is consonants. If I slowed down and stretched out the vowels, you would see more of those harmonics. Fun and games. OK, so that's what sound looks like. So everybody has to have a sense of this is showing you the energy at each frequency over time in response to natural speech. We'll play with this a little bit more later in the lecture. So we did all that. We'll do some of that other stuff later. So now that we have some sense of what sound is and what that input is, how are we going to think about how to extract information from it? What we want to do is think about how is it? Why is it challenging to get to that from this? There are several reasons that's challenging. First is invariance problems, much like we've discussed in the domain of vision and other domains already in this class. And so the way to think about that here is that a given sound source sounds really different in different situations. So if we have different people saying the same word, that will look very different on those spectrograms. The stimulus is actually different, even though we want to just know what word is being said. And conversely, if we have the same person saying two different words, that will look really different. And even if we want to know just who's speaking, we have to deal with the invariance of generalizing across those very different ways, very different sounds that they produce when they say different things. So those are kind of flips of each other. To recognize voices, we want invariance of the voice with respect to the words. To recognize words, we want invariance for the words independent of the voice. And those are all tied up together. So we need to appreciate the sameness of those stimuli across those changes. 
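The spectrogram demos above, with frequency on the vertical axis, time on the horizontal, and energy as color, can be reproduced with a short-time Fourier transform. Here is a minimal NumPy sketch; the 200 Hz fundamental, window length, and hop size are illustrative choices, not values from the lecture:

```python
import numpy as np

def spectrogram(signal, win=1024, hop=256):
    """Short-time Fourier transform magnitudes: one row per time frame,
    one column per frequency bin from 0 up to half the sample rate."""
    window = np.hanning(win)
    frames = [signal[i:i + win] * window
              for i in range(0, len(signal) - win, hop)]
    return np.abs(np.fft.rfft(frames, axis=1))

sr = 16000
t = np.arange(sr) / sr  # one second of samples

# A "pitched" sound like a vowel or a trombone: a 200 Hz fundamental
# plus harmonics at integer multiples (400, 600, 800 Hz), each weaker.
pitched = sum(np.sin(2 * np.pi * 200 * k * t) / k for k in range(1, 5))

spec = spectrogram(pitched)
freqs = np.fft.rfftfreq(1024, 1 / sr)

# The strongest frequency bin should sit near the 200 Hz fundamental.
print(freqs[np.argmax(spec.mean(axis=0))])
```

Displaying `spec.T` as an image gives the familiar picture: for this pitched input, horizontal bands at 200, 400, 600, and 800 Hz, the stacked harmonics described above.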
Here's another reason that hearing is challenging-- I mentioned this briefly-- in normal situations-- it's pretty quiet in the room. There's some background noise, but not a whole lot of other noise, so it's mostly just me making the noise in here. But in many situations, there are multiple sound sources. For example, listen to this. [AUDIO PLAYBACK] [INTERPOSING VOICES] - All right, Debbie Whittaker, Sterling James, wrapping things up. [END PLAYBACK] NANCY KANWISHER: OK, little segment of radio, there's music, and a person speaking both at once. And you had no problem hearing what the person was saying and knowing something about the gender and age of that person. You recognize the voice, the content of the speech, even though the music is right on top of it. So the music might be like this and the speech like that. And what you hear is this, with those things right on top of each other. So you need to go backwards to hear these things, even though that's all you get. Everybody see how that's a big challenge? If you had to write the code to take this and recover that, best of luck to you. Yeah, question? STUDENT: How does intensity or volume come into this picture again? NANCY KANWISHER: It's not really well depicted on these diagrams. This is just showing you the entire source. So the intensity I showed before essentially takes this and does a Fourier analysis of it so that it gives you the energy at each of those frequencies. So you could just do a Fourier analysis on this and you get a spectrogram. So the listener's usually interested in individual sources even though they're superimposed on other sources. And that's a real problem. So this is the input. They get added together, and the brain has to pull them apart. So this is a classic, ill-posed problem. That means just given this, we have no way to go backwards to that if that's all we have, because there's multiple possible solutions. It's like saying, "x plus y equals 9, now solve for x and y." 
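The "x plus y equals 9" point can be made concrete in a few lines: given only the summed waveform, more than one decomposition into sources is mathematically consistent with it. A toy sketch, with pure tones standing in for the speech and the music:

```python
import numpy as np

sr = 8000
t = np.arange(sr) / sr

# Decomposition 1: a "speech" tone and a "music" tone.
speech_a = np.sin(2 * np.pi * 220 * t)
music_a = np.sin(2 * np.pi * 330 * t)

# Decomposition 2: different sources...
speech_b = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 330 * t)
music_b = 0.5 * np.sin(2 * np.pi * 330 * t)

# ...yet the ear receives only the sum, and both decompositions
# produce exactly the same mixture:
mixture_1 = speech_a + music_a
mixture_2 = speech_b + music_b
print(np.allclose(mixture_1, mixture_2))  # True
```

That is why extra assumptions about natural sounds are needed: the mixture alone does not pick out the true decomposition.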
And whenever we're in that situation of an ill-posed problem with multiple possible solutions, only one of which is right in any situation in the world, the usual answer is that we need to bring in some other assumptions or world knowledge or something, to constrain that problem and narrow that large, usually infinite, space of possible answers down to the one correct one. So this is a classic problem that people have talked about in audition for many decades. Josh McDermott in this department does a lot of work on it. And you can solve it in part by knowledge of natural sounds, which I won't talk about in detail here. One more challenge for solving problems in audition comes from the fact that real world sounds, including the sound of my voice right now, have reverb. So "reverb" means-- this is an aerial view. That's a person, kind of hard to see in an aerial view. And that's a sound source. And some of the sound comes straight from the sound source to the person's ears. But a lot of the sound goes and ricochets off the walls, god knows how many times, before it hits the ears. And all of those different paths of sound are all kind of superimposed at the ears. And they arrive at different times, making a hell of a mess of the input sound. So instead of that nice, clean, straightforward input, you have the input plus a slightly delayed input, a more delayed input, another delayed input, all superimposed on top of each other. That's reverb. Is that clear, what the problem is? So now we have this really messed-up signal that we're trying to go backwards and understand what the input is. So I'll give you an example. This is a recording of what's known as "dry speech." That means speech with no reverb. Sorry, question? STUDENT: I'm just having a little trouble understanding why reverb poses a problem. The stimulus isn't changing, it's just delayed over time. NANCY KANWISHER: Yeah, OK. Let's do a vision example. This is a little crazy, but let me just try this. 
Suppose we had a photograph of my face and you have to recognize it. OK, fine. Various visual algorithms can do that. But now suppose we took that photograph and we moved it over 10%, and we superimposed it and added them together, and then we moved it over again and added them together, and moved it over again and added them together. Pretty soon you have a blurry mess. And those things are all on top of each other, just as two people talking at once are on top of each other. And so you have a real problem going backwards. Does that make sense? OK. OK, so here's dry speech with no reverb. [AUDIO PLAYBACK] - They ate the lemon pie. Father forgot the bread. [END PLAYBACK] NANCY KANWISHER: OK, here's the same speech but with lots of reverb. [AUDIO PLAYBACK] - They ate the lemon pie. Father forgot the bread. [END PLAYBACK] NANCY KANWISHER: OK, now you can still hear it because your auditory system knows how to solve this problem. But look what happens to the spectrogram. Here-- this is time this way, frequency this way, and the dark bits are where the energy is, where the power is. In the dry speech, you see all these nice, vertical things, and here you see a blurry mess. Nonetheless, you can hear it fine. And further, what else could you tell from the reverb? STUDENT: The size of the room. NANCY KANWISHER: Yeah, it's in a cathedral or something, right? So it's not just that it causes a problem. Reverb also tells us something about the location we're in, if we know how to extract it, which you guys' visual-- auditory systems do. You can see I'm a vision scientist. So how to study this? There's a very beautiful paper that Josh McDermott published a few years ago. And I'm going to try to give you the gist of the paper without all the technical details, because I think it's just brilliant. So they wanted to characterize what exactly is reverb. And reverb is going to vary for different sounds. You heard the reverb in that cathedral-like space.
That's very different from the reverb in this room, which also happens. It's harder to hear because it's less obvious. But you can tell a lot about the space you're in because the reverb properties are different. The distance to the walls are different. The reflective properties are different. And so there's information there. So you can characterize the nature of the reverb in any one location by making an instantaneous, brief click sound in that environment and recording what happens after that. And then you can collect all the reverberant reflections of that sound off the walls. So what they did is they went around to lots of natural locations and they played a click like this. [CLICK] That's it, just a click. And then they recorded. So this is the initial click, but this is what you record in a single location, all this stuff. And those are all the reverberant reflections of that sound off the walls-- make sense? For one location. So then they did that in a whole bunch of locations. And the idea is that here is a description of the basic problems, just the same thing I said before, but slightly more detailed. So a sound source would be something like this. This looks like a person speaking, with those nice, harmonic, parallel bands, like you saw when I was speaking. Maybe it's a trombone. So that's time. That's the source. That's what you want to know. This is now the impulse response function for the location where that sound is being played, determined by doing that click and recording. I showed you just you do a Fourier analysis of that black curve in the previous slide and you get something like this. And that shows you all the echoes that happen in that sound in that location. And there are different time delays, and different intensities, and frequency dependence. What comes to your ear is basically this times that. So you're given this and you have to go backwards and solve for that. Everybody see the problem? 
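In signal-processing terms, "this times that" is a convolution: the sound at the ear is the source convolved with the room's impulse response. That is also why the click measurement works: a click approximates a unit impulse, and convolving an impulse with the room returns the impulse response itself. A toy sketch, with made-up delays and gains:

```python
import numpy as np

# Toy impulse response: the direct path plus two wall reflections,
# each later and weaker (the delays and gains here are made up).
impulse_response = np.zeros(100)
impulse_response[0] = 1.0   # direct sound
impulse_response[30] = 0.6  # first reflection
impulse_response[75] = 0.3  # second reflection

def room(source, ir):
    """What reaches the ear: the source convolved with the room's IR,
    i.e. delayed, attenuated copies of the source superimposed."""
    return np.convolve(source, ir)

# A click is (approximately) a unit impulse, so recording the room's
# response to a click recovers the impulse response itself:
click = np.array([1.0])
recorded = room(click, impulse_response)
print(np.allclose(recorded, impulse_response))  # True

# Any richer source arrives smeared by the same reflections, which is
# the "blurry mess" seen on the reverberant spectrogram:
source = np.random.randn(500)
at_ear = room(source, impulse_response)
```

Going backwards, from `at_ear` alone to both `source` and `impulse_response`, is the ill-posed deconvolution problem the lecture describes.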
So what McDermott and Traer showed is that-- just to state the problem a little more clearly-- you're interested in the source and/or the environment. You might want to know what kind of room am I, if somebody is dragging you around blindfolded. You might want to know if you're outside, or inside, or in a cathedral, or a closet, or what. And now this should seem very analogous to the problem of color vision. Remember the problem of color vision? We want to know the color of this object right here. So this little, purple patch here, we want to know that, but all we have is the light coming to our eyes from that patch. And the light coming to our eyes from the patch is a function not just of the property of the object, but whatever light happens to be coming onto it and then reflecting to our eyes. And so in color vision, we have one set of tricks to try to solve that problem and recover the actual properties of the object, even though it's totally confounded in the input with the properties of the incident light. This is extremely analogous. We're trying to solve for what is the sound source. And we have to deal with this problem that is completely confounded with the reverberation of the room it's in. Does everybody see that analogy? They're both classic ill-posed problems in perception. So here's another way of putting it-- we're given that and we want to solve for at least one of these, ideally both of those. And you can't do that with just this. So we need to make assumptions about the room. And what Traer and McDermott showed is that first, they measured those impulse response functions in natural environments to characterize reverb in different environments. And they found that there's some systematic properties of reverb having to do with the decay function as a function of frequency. And those systematic properties are preserved across different environments. 
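One way to picture that regularity: a synthetic impulse response can be built as decaying carriers in several frequency bands, with higher bands decaying faster. The band frequencies and decay times below are invented for illustration, not Traer and McDermott's measured values:

```python
import numpy as np

sr = 8000
t = np.arange(sr) / sr  # a one-second impulse response

def band(center_hz, decay_s, rng):
    """One frequency band of a synthetic IR: a carrier at the band's
    center frequency with random phase, decaying exponentially."""
    carrier = np.sin(2 * np.pi * center_hz * t + rng.uniform(0, 2 * np.pi))
    return carrier * np.exp(-t / decay_s)

rng = np.random.default_rng(0)

# Higher bands decay faster, mimicking the regularity measured in
# real-world reverb (these decay times are invented for illustration).
ir = band(250, 0.40, rng) + band(1000, 0.20, rng) + band(3000, 0.08, rng)

# The tail carries far less energy than the onset:
early = np.sum(ir[: sr // 10] ** 2)   # first 100 ms
late = np.sum(ir[-sr // 10:] ** 2)    # last 100 ms
print(early > late)  # True
```

The paper's psychophysical point is that listeners tolerate reverb built from this kind of physically plausible decay, but not reverb with made-up, non-physical decay profiles.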
And then they showed that your auditory system knows about the way reverb works, in the sense that if you make up a different, non-physical reverb property and you play it to people, it sounds weird, number one. And two, they can't recover the sound source. And what that means is, built into your auditory system is knowledge of the physics of sound, and in particular about the particulars of the decay function of reverb, such that you can use that knowledge of how reverb works in general to undo this problem, and constrain this problem, and solve for the sound source. I didn't give you all the details. But I want you to get the gist. Do you get the kind of idea? OK. But as I said, that's only true for reverb that has the reverb properties of real-world sound. If you make up fake reverb, it doesn't work. And people can't solve this problem. That tells you they're using their knowledge. Doesn't tell us whether that knowledge is built in innately, or whether they learned it, or what. All right, good. So in other words, we solve the ill-posed problem of recovering the sound source despite reverb by building a knowledge of the physics of the world into our auditory system and using it to constrain the problem. So we just said, why is this computationally challenging? Invariance problems, appreciating the sameness of a voice across different words, appreciating the sameness of a word across different voices. Separating multiple sound sources that come in simultaneously and are just massively superimposed on the input-- the cocktail party problem, also ill-posed-- and the reverb problem. So everybody see how these are three really big challenges for audition? Yeah. STUDENT: So was brain imaging as well a part of the [INAUDIBLE]?? NANCY KANWISHER: Nope. One could do that and ask questions about where that's solved in the brain. But the beauty of that study is that in a way, who cares where it's solved? 
I mean, it's kind of interesting, but it's such a beautiful story already just from-- actually, a big part of their study was measuring reverb. Nobody had done it before. They sent people out with speakers, and recording devices, and little random timers on their iPhones. And at random times-- how did this go-- oh, yeah, people had to mark the location they were in using their iPhone GPS and then-- that's right-- they didn't send people out with recording devices. It's too hard. And so then they sampled what kind of places do people hang out in. And then they went back with their impulse sound source and the recording device, and they measured that impulse response function in lots and lots of different natural locations in order to characterize what is the nature of reverb in the world. Nobody had done that before. So that's why I tell you this, is that to me, it's just one of the most beautiful examples of computational theory-- no measurement in the brain. A big part of the study was just characterizing the physics of sound, and then some psychophysics to say actually, do people use that knowledge of how reverb works in the world? And yes, they do. So I've been talking about hearing in general, but let's talk about one of the most interesting examples of hearing, the one you're doing right now-- speech perception. So what do speech sounds look like? You saw a few of them briefly before. Here are a few spectra. So just to remind you, each one of these things has time going along the x-axis, frequency here. And the color shows you the intensity of energy at that frequency band. So this is a person saying, "hot," and "hat," and "hit," and "head." That's the same person saying these four things, a person with a high-pitched voice. And here's a person with a slightly lower-pitched voice saying the same things. So what do we notice here? Well, first of all, we see that vowels have regularly spaced harmonics. That's the red stripes. This is a vowel sound right there.
See those perfectly regularly spaced harmonics? That makes a pitchy sound, so voices are pitchy. You may not think that there's a pitch to my voice right now because I'm talking, not singing, but there is a pitch. And you use that, actually, in the intonation of speech, as you guys read about in the assigned reading for yesterday. So each of these things with the stacked harmonics is a vowel sound. It's got a pitch and it lasts over a chunk of time. And the consonants are these kind of muckier things that happen before and after. And consonants don't have pitch. They don't have harmonics. They have kind of muck. So there are certain band-- people who study speech spend a lot of time staring at these things and characterizing them. And they like to talk about bands of frequency, of power. And so this band down here that's present in all of these speech sounds here is called a "formant." It's just a chunk of the frequency spectrum that you hear with speech. So that's a formant. And some of those frequency bands or formants are particularly diagnostic for different vowels. So if you look in this range here, only in that mid-range here, only for "hat" and a little bit for "F" sound do you get an energy in that frequency band, not for "hot" or "hit." And that's true both for the high-pitched voice and the low-pitched voice. This frequency band here is really diagnostic to which of those vowels you're hearing. So we're going to play with that spectrogram again a little bit more, although I now have learned avoidance. So this is me speaking again, as you saw before. So I'm going to say an A, an E, an I-- look how different that one is, O, and U. And there's lots of other vowels. Do you see how that energy moves around for the different vowels? Now as I said before, if I do a long vowel like this, it makes a big, long bunch of harmonics. But a lot of the time, they're just these vertical lines. The vertical lines are consonants, t, p, k, r. 
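That vowel-versus-consonant contrast, long harmonic stacks versus brief broadband bursts, can be sketched synthetically: sum harmonics of a fundamental and weight each one by its distance from a formant peak. The fundamental and formant frequencies here are hypothetical, roughly vowel-like values, not measurements:

```python
import numpy as np

sr = 16000
t = np.arange(sr // 4) / sr  # a 250 ms vowel
f0 = 120.0  # fundamental frequency: the pitch of the voice

# Hypothetical formant peaks, loosely in the range of an "ah" vowel.
formants = (720.0, 1200.0)

def formant_gain(freq, bw=150.0):
    """Weight a harmonic by how close it sits to a formant peak."""
    return sum(np.exp(-((freq - f) / bw) ** 2) for f in formants)

# The vowel is a stack of harmonics at multiples of f0, with the ones
# near the formants amplified: the "regularly spaced harmonics" plus
# formant bands seen on the spectrogram.
vowel = sum(formant_gain(k * f0) * np.sin(2 * np.pi * k * f0 * t)
            for k in range(1, 40))

spectrum = np.abs(np.fft.rfft(vowel))
freqs = np.fft.rfftfreq(len(vowel), 1 / sr)
print(freqs[np.argmax(spectrum)])  # the strongest harmonic hugs a formant
```

A consonant, by contrast, would be a short burst of broadband noise with no such harmonic stack, which is why it shows up as a vertical stripe.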
If I don't say a vowel, you just see a vertical line. It's not quite a vertical line. They are different from each other in ways you can tell. So the consonants are those bands of energy that go vertically. And the vowels are the big, long harmonic structures that stretch between them. Now, I'm not sure you'll be able to do this. I'm going to need a volunteer in a second, and I'm going to pick on [? Iadun, ?] because he's most accessible right there. So come on up here. You know it won't be horrible or embarrassing. So you can stand here for a second. I'm going to say "ba's" and "pa's" and I'll tell you in a moment what to say. I'm not sure this is going to work. I tried it before. We're going to look at two different formants when I say "ba." Actually, I'm going to do it rising-- ba, pa, ba, pa. So there's two different formants, here and here, with both of those. I'm going to do it again. And there's just a tiny, little difference between a ba and a pa. And it has to do with the interval between the consonant, which is the first vertical thing, and the vowel, which is the horizontal stuff. So let's see if we can see it again. Here we go. Ba, pa-- do you see how the pa starts earlier there and the ba is slightly delayed? I'll show you diagrams that show you more clearly. OK, great. Don't go away. So we're going to do the cocktail party thing with the recording devices here. What is this? This is just some boring administrative thing you can just read. I actually brought it to crumple and make a crumpling sound, but we'll do that afterwards. Right now, you will read from that and I will recite something boring. And we'll just do it simultaneously. So just focus on what you're doing. And everybody watch. You can see my voice here. And let's see what happens when we're both talking at once. OK, here we go. Four score and seven years ago-- oh, geez, I forget how it goes after that, so I'll just have to make up some other random garbage. 
[INTERPOSING VOICES] STUDENT: --outstanding-- NANCY KANWISHER: OK. STUDENT: --review the student's course-- [INTERPOSING VOICES] NANCY KANWISHER: That's great. That's great. Don't go away. I don't know if you could tell that it got muckier when we were both talking. Maybe it's mucky enough with me talking fast to begin with. Let's try a few other things. Let's have me say words and you say words. And let's see how different they look. OK, so I'm going to say "mousetrap." STUDENT: Mousetrap. NANCY KANWISHER: You can see some similarity there, can't you? Let's do it again. Mousetrap. STUDENT: Mousetrap. NANCY KANWISHER: OK, that's good. It's funny, I see more low-frequency band here. I'm sure your voice is lower than mine. Pitch, interestingly, isn't just about how low the energy goes. It's an interesting, complicated property of the lowest common denominator of that whole frequency stack. So I'm not going to do pitch. It's complicated. What else do we want to do? Let's try some ba's and pa's. But let's stick them on the fronts of words. Maybe that'll work better-- pat, bat. STUDENT: Pat, bat. NANCY KANWISHER: Oh, I could see the commonality there. Could you guys see that? Let's do it again. Pat, bat. STUDENT: Pat, bat. NANCY KANWISHER: Well, yours look more similar. All right. Anyway, thank you. That's good. That's all I need, just to show you how hard this is, and how there's variability across speakers saying the same thing, and very, very subtle differences between sounds that sound totally different to us. So back to lecture. So you saw the harmonics, those red stripes, during the vowels. You noticed that I showed the consonants and the ba's and pa's. So here's a diagram. I'm sorry, this is very abstracted away from those spectrograms, which are messy, as you can see. The idea is that a consonant vowel sound, a single syllable like ba or pa-- this is time this way-- has this big, long formant which is a band of energy that's the vowel, the ah sound. 
And it's these transitions that happen just before that that make the difference for different consonants. And in particular, the difference between a ba and a pa-- this is a ba, that's a pa-- the difference we were looking for that didn't show up that clearly, but you can try it at home, maybe you can get it clearer than I just got it now-- has to do with that transition onto the first formant. So with a ba, the transitions happen in parallel. And with a pa, this transition happens before that lower formant. So that tiny, little-- it's a 65 millisecond delay in the case of pa that you don't have in the case of ba, is how you tell that difference. It's very, very subtle. So there's lots of different kinds of phonemes. We've been talking about vowels and consonants. Each vowel or consonant sound is called a "phoneme" if a distinction in that sound makes the difference between two different words in your language. And that means that what counts as a phoneme in one language may not be a phoneme in another language, because it won't make a distinction between different words. Many of the phonemes are shared across languages, but not all. We've talked about R and L that aren't distinguished in Japan, and two different D sounds that sound the same to me that are distinguished in Hindi, and lots of others. And so those are just variations across natural languages on which of those phonemes, which of those sounds, are used to discriminate different words, and hence count as phonemes in that language. So there's some particularly awesome phonemes that use a particular kind of consonant known as a click consonant. And these are common in some Southern African languages. And a year ago, I was traveling in Mozambique, which was just hit by a devastating flood. It's really awful. But anyway, I was there visiting a game park seeing all kinds of animals. And I met this guy, Test. And he's amazing. 
I mean, his knowledge of the natural history was mind blowing, but he also speaks, I think, six different languages fluently, one of which is Xhosa, or as he would say, [SPEAKING Xhosa] or something like that. You'll hear him say it in a moment. And so he was illustrating click languages. And I'll play this for you in a second. And he says there's a sentence in Xhosa which is a little bit crazy, but has all the different clicks. And it means, basically, "the skunk was rolling and accidentally got cut by the throat." Doesn't mean a whole lot, but listen to Test saying the sentence, first in English and then in Xhosa. [AUDIO PLAYBACK] - The phrase in English, it says skunk was rolling and accidentally got cut by the throat. In Xhosa, or in east Xhosa, [SPEAKING Xhosa]. [END PLAYBACK] NANCY KANWISHER: Isn't that awesome? I think we just have to crank it up a little bit and hear him again. [AUDIO PLAYBACK] - The phrase in English, it says the skunk was rolling and accidentally get cut by the throat. In Xhosa or in east Xhosa, [SPEAKING Xhosa]. [END PLAYBACK] NANCY KANWISHER: OK, for the most part, we don't have click consonants in English that count as phonemes in the sense of distinguishing different words. But we do have click consonants that we use in other domains. Anybody know what we use click consonants for? There's at least two. Know any click consonants? STUDENT: [INAUDIBLE] NANCY KANWISHER: Yeah, what? STUDENT: [INAUDIBLE] That's-- NANCY KANWISHER: Like what? STUDENT: [INAUDIBLE] NANCY KANWISHER: Yes, but that's a regular consonant. It's actually not a click. It's just a regular consonant. Well, one is when you go, tsk, tsk, tsk, the scolding sound. It's not a phoneme. It's not a word, but it has a very particular meaning. Another one is how you get a horse to giddy up. (CLICKS) So those are the click consonants we have in English. They're not phonemes, but we have them, and he's got a whole lot more. That was just for fun. 
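Back to the ba/pa distinction from a few slides ago: it reduces to voice onset time, the delay between the consonant burst and the onset of the vowel's voicing, which the lecture puts at about 65 milliseconds for pa. A toy classifier; the 30 ms category boundary is an illustrative assumption, not a value from the lecture:

```python
def classify_stop(vot_ms, boundary_ms=30.0):
    """Toy voice-onset-time classifier. Short VOT: voicing starts with
    the burst, heard as "ba". Long VOT (the ~65 ms delay mentioned in
    the lecture): heard as "pa". The 30 ms boundary is illustrative."""
    return "ba" if vot_ms < boundary_ms else "pa"

print(classify_stop(0))   # ba: voicing and burst in parallel
print(classify_stop(65))  # pa: voicing lags the burst
```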
So why is speech perception challenging? Well, one-- and it's the essence of it-- is that a given speech sound is highly variable. One way it's variable is that when you speak at different rates, all the frequencies go up and down and haywire, making them very different across different talking rates. Another is the context. So a given phoneme, like a ba, or a pa sound, or a vowel, sounds totally different depending on what phonemes come before and after it. They're not little punctate, one at a time things. They all overlap and affect each other in a big mess. And the third is one we've already mentioned, which is the big differences across speakers in the language. So you have to recognize a ba sound even though it sounds quite different when spoken by different speakers. So all of these things make it very computationally challenging to understand speech. Here's an illustration of that talker variability. So what's shown here is not a whole spectrogram, but just the intensity of the first formant and the second formant, those bands of energy that I showed you in the spectrogram. And so each dot here is a different person pronouncing a vowel. And each color-- this is one vowel here in green, in that green ellipse, with lots of different people saying that vowel. Here's another vowel up here in red, with lots of different people saying that vowel. And what you see is they're really overlapping. So that means you can't just go from the energy at those two formants, a point in that space, and know what the vowel is. What if you were right there? Well, then it could be any of four different vowels. So that's the problem of talker variability illustrated with vowels. Does that make sense? I think I just said all of this, blah, blah, blah-- another classic ill-posed problem in perception. You're given a point in this space. How do you tell which vowel it is? So one way we solve that is that we learn each other's voices. 
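This ill-posed classification problem can be made concrete with a toy sketch (my own illustration, not anything from the studies discussed; the formant values below are just rough textbook-style averages): guess a vowel from its first two formants by picking the nearest vowel center. A point in an overlap region will simply land on whichever centroid happens to be closest, which is exactly why talker variability is a problem.

```python
import numpy as np

def classify_vowel(f1, f2, centroids):
    """Nearest-centroid guess at a vowel from its first two formants.
    A toy model of the ill-posed problem: because real talkers' vowel
    clouds overlap, the nearest centroid is often the wrong answer."""
    names = list(centroids)
    pts = np.array([centroids[n] for n in names], dtype=float)
    d = np.linalg.norm(pts - np.array([f1, f2], dtype=float), axis=1)
    return names[int(np.argmin(d))]

# Rough (F1, F2) centers in Hz -- illustrative numbers only.
centroids = {"i": (270, 2290), "a": (730, 1090), "u": (300, 870)}
guess = classify_vowel(280, 2200, centroids)   # a point near the /i/ cloud
```

A real system can't stop here, of course; as described above, listeners also use what they know about the particular talker to pull these clouds apart.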
And we know how a given person pronounces a given set of vowels or words. And we use that to constrain what they're saying. Have you ever noticed, especially if you meet somebody new-- well, actually, you just experienced this with Test. When he first speaks, his English is beautiful, but he's from Zimbabwe and he has kind of a Zimbabwe, British-type accent. And at first it's hard to understand what he's saying. Did you all experience that briefly? I mean, that's why I put the text on the slide, so you would get used to his English and understand it. If I hadn't, you probably wouldn't have understood that sentence he spoke first. That's because we don't know his voice yet. But did you notice, after even just a few words, you start to like tune right in and you can understand him? So learning about an individual's voice helps you pull apart the properties of the voice, and unconfound them from the sound so you can understand what that person is saying. So that's part of how we solve this ill-posed problem. And so evidence that we do that is that if you have people listen to voices they don't know or voices that are changing from word to word, it's much harder to understand speech. So you imagine you took the sentence I'm saying right now, and you spliced in a different person saying each word. Actually, I should make that demo. One of you guys send me an email-- make that demo of a different person speaking each word in a sentence. It'd be really hard to understand. Because you wouldn't have been able to figure out, this is a property of the voice, and kind of separate that from everything else. Because the damn voice will be changing on each word. It'll be a mess. So that's one problem. So it turns out that the opposite is true, as well. And that is, your ability to recognize somebody's voice is a function of what you know about that language. So you can recognize voices better in a language you know than a language you don't because you're doing the opposite. 
You're using knowledge of the language and its speech properties that you already know to constrain the problem of figuring out who is this person's voice. So does everybody get this? These two things are affecting each other-- the speaker and what's being said. And because they're so confounded, massively confounded in the stimulus, to solve that, the more you know about the speaker, the better you can understand what's being said. And the more you know about the language and its properties, the more you can recognize the voice. Each one is a source of information about one of those two confounded variables. And so people have shown that psychophysically. And I think I have time to do this. Here's a kind of cool corollary of this, and that is, it's commonly thought that dyslexia is most fundamentally a problem of auditory speech perception, not a visual problem. There may also be a bit of a visual problem, but it's thought that at core, it's a problem of auditory speech perception. So if that's true, then you might think that this ability to use knowledge of the language and its sounds to constrain voice recognition would be reduced in people with dyslexia, because they are less good at processing speech sounds. And it turns out that's true. So here's a beautiful study from the Gabrieli Lab a few years ago. So first look at the bars in blue. So this is accuracy at voice recognition, which person is speaking. And this is native English speakers who don't speak Chinese. They are much more accurate recognizing who's speaking when they're speaking English than when they're speaking Chinese. So that's kind of cool. That shows you the way in which you use knowledge of the language to constrain recognition of the voice. But now look what happens in the dyslexics-- no effect, exactly as they predicted. 
Given that the dyslexics have a problem with speech perception, they're apparently not able to use that knowledge of the phonemes of the language to constrain the problem of voice recognition. They're just as bad at voice recognition-- I'm sorry, they're no better at voice recognition in their native language than in a foreign language. They can't use that knowledge to constrain voice recognition. Does that make sense? Yeah, I love that study. So we haven't done any brain stuff so far. We were just thinking about the problem of hearing and speech perception, and what we know from behavior. And we've learned a lot already, but we'll learn more by looking at the brain, and the meat, and all of that. So let's start with the ear. Again, remember, compressions of air come into the ear. They travel through the ear canal. They hit the tympanic membrane. They go through a whole series of transducers, these three little ear bones here that connect to this snail-shaped thing, which is called the "cochlea." Cochlea is really important. You should remember that word. It's the place where you transduce incoming sound into neural impulses, way in there. And the cochlea is really cool. It's this, as I said, a snail-shaped thing. And there are nerve endings all the way along this thing. And because of the physics of the cochlea, there are different resonant frequencies at different parts of this snail. So basically, here are some low-frequency sound waves. This is the cochlea stretched out with the base and the apex. This is the base. That's the apex. And what you see is the low frequencies have transduced some energy at the base of the cochlea, and also at the apex. But midway range frequencies and high frequencies do nothing at the apex. This business, there's only physical fluctuations happening up here for low frequency sounds. So there's little nerve endings here that detect those fluctuations up there and send those signals up into the brain through the auditory nerve. 
And so in the middle, here or something, you have sensitivity to mid-range frequencies, not high or low. And at the base, it's sensitive more to high frequencies than mid or low. So everybody get that? So basically, the cochlea is doing a Fourier transform on the acoustic signal. It's taking these compressions of air, and it's just saying, let's separate those out into different frequencies, just with this physical device. It's like a physical Fourier transform that's saying, let's just physically separate the energy at each frequency range along the length of the cochlea. Does that make sense? And then once you get different parts of the cochlea that are sensitive to different frequencies oscillating to different degrees, then you stick some nerve cells there to pick up those oscillations, go up the auditory nerve, and travel into the brain. Everybody have a gist of how this works? So that's cool. But now, let's go up to the brain. So now, this is a view like this. And so here are the cochleae-- I guess that's the plural-- on each side-- ears, ear canal, cochleae. And the first thing to know, which is important, is that the path between the cochlea and the first step up in the cortex is much more complicated in hearing than it is in vision. Look at all these nuclei deep down in the basement of the brain. In contrast, in vision, how many synapses do you have to make between the retina and primary visual cortex? Sorry. One synapse. Right? STUDENT: Well, I was thinking-- NANCY KANWISHER: Yeah, two, that's right, so retinal ganglion cells send their axons straight into the LGN in the thalamus, make a synapse. And then those LGN neurons go straight up to primary visual cortex, just one stop on the way. Look at all the stops on the way here. So audition is a really different beast from vision in many ways. 
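The "physical Fourier transform" idea can be sketched in a few lines (an editorial toy illustration, not a cochlear model; the function name and numbers are made up for the example): decompose a sound into its frequency components and see where the energy lands, the way energy lands at a particular place along the basilar membrane.

```python
import numpy as np

def frequency_energy(signal, sample_rate):
    """Return (frequencies, energy) for a sound -- a crude stand-in for
    what the cochlea does physically: separating out the energy at each
    frequency along its length."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return freqs, np.abs(spectrum) ** 2

# A pure 440 Hz tone should put essentially all its energy at 440 Hz --
# i.e., excite one "place" along the simulated frequency axis.
sr = 8000
t = np.arange(sr) / sr                 # one second of signal
tone = np.sin(2 * np.pi * 440 * t)
freqs, energy = frequency_energy(tone, sr)
peak_freq = freqs[np.argmax(energy)]
```

The nerve endings along the cochlea then just report how much each "place" is oscillating, which is what travels up the auditory nerve.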
Next time, we'll talk about how audition-- not these parts of it, but after you get up to the cortex-- audition, we in my lab and a few other labs are really starting to suspect, is profoundly different in humans from any non-human animal. And I think that's for very interesting reasons, but this part is pretty similar in animals, just getting information up to the cortex. And audition is already very different from vision just in the number of relays going up to the brain. So those structures down there do all kinds of awesome things. And last year, I talked at great length about how we detect the locations of sounds. It's absolutely beautiful work, and elegant, and fun, but I decided that was a little too much behavior. We should get on to the brain. But I recommend 9.35 if you want to learn more about audition-- awesome course. Did you take it? Really awesome course. Yeah, exactly. And so you'll learn more about all that stuff. So instead, we will just skip all that and go straight up to cortex. So the first place that auditory information hits the cortex coming up from the cochleae is primary auditory cortex, just like the first place visual information hits the cortex coming up from the eyes is primary visual cortex. So you can see in here that in a cross-sectional view like that, this is primary auditory cortex. It's in that sulcus right there. That's kind of a drag, because when we get occasional opportunities to test patients who have grids of electrodes on the surface of their brain, the grids don't usually go in there and we can't see primary auditory cortex. Although there are new methods where they stick depth electrodes, which is surprisingly, apparently, better on the patients. And right now your TA, Dana [? Bobinger, ?] is over at Children's Hospital recording from a 19-year-old who has bad epilepsy and who has depth electrodes in his brain. And he's listening to all kinds of sounds. And she's recorded his neural activity with depth electrodes. 
And so we are hopeful, one, that we can find some information that will be relevant to the neurosurgeons-- I don't know about that-- but two, that we'll get some information from those deep structures that you can't usually see when you have just grids sitting on the surface. So back to functional MRI-- so this is primary auditory cortex. It's quite stylized. Let me remind you where you are. This is an inflated view of the right hemisphere-- back of the head, front of the head, temporal lobe, all funny looking because it's been mathematically unfolded so you can see stuff in the sulcus where I just showed you. Primary auditory cortex is in the sulcus. But we've inflated it so you can see it. And so this is primary auditory cortex, this whole thing here. And it shows you a property we've talked about before. It's got a map, but the map in primary auditory cortex is not a map of space like it is in the retina for visual information. It's a map of frequency. And that makes sense because the input transducer is a cochlea, which already physically creates a map of frequency. And so that gets traveled through all those intermediate stages down in the basement, and it comes up to the brain, and makes a map of frequency space. So what this means, actually-- so here's sensitivity to different frequencies. And so the classic structure of primary auditory cortex in humans is high, low, high-- high frequencies, low frequencies, high frequencies, in that V-shaped pattern. So this is the right hemisphere. This is the left hemisphere that's been mirror flipped so you can compare them directly. And you can see this highly stereotyped pattern of high, low, high. That's a tonotopic map. Everybody clear on what a tonotopic map is? And we've just discretized it into two chunks, but it's actually a gradient of high to low to high, which you can kind of see by those intermediate colors in there. Yeah. STUDENT: [INAUDIBLE] why does the [INAUDIBLE]?? 
NANCY KANWISHER: Yeah, everything in the brain rearranges everything in the input in multiple ways. So we didn't talk about this, but in visual cortex, you have-- I don't know what the latest count is, at least 10, probably more than that, separate retinotopic maps in different patches of cortex-- map, map, map, map, loads of them. And so there's all kinds of transformations. And so much less is known about the functional responses and functional organization of auditory cortex than visual cortex, especially in humans where we really don't know a lot, in fact. So there's no real answer to that, other than it's not that shocking, in a way, because you see that in vision and in other domains anyway, with multiple maps that differentially represent different parts of space. And so yeah, I didn't say this, but many of those dozen or so maps in visual cortex have differential representation of different parts of space. Some focus on the upper visual field, some on the lower visual field. And the whole question of is that really one thing or is it two-- this is all now getting into the kind of cutting-edge, ambiguous state that we don't know. All right, everybody clear on tonotopy, primary auditory cortex? OK, good. All right, the standard view from recording neurons in primary auditory cortex in animals-- monkeys, ferrets are big in auditory neuroscience, other animals-- is that the receptive fields of individual neurons in primary auditory cortex are linear filters in the following sense-- so here's a spectrogram of a sound. This is just a description of the stimulus. As usual, time, frequency. So it looks like it could be a speech sound with some vowels there. Or it might be something else. Who knows. So that's a sound. So now, imagine an electrode sitting next to a single neuron in primary auditory cortex in, say, a ferret listening to that sound, and characterizing what does that neuron respond to. 
Well, the typical finding is that neurons in primary auditory cortex are what's known as spectral temporal receptive fields, or STRFs to their friends. So what does that mean? Here's an example of the receptive field that is the response dependence of a given auditory cell, again, with time on this axis and frequency on that axis. So what kind of sound does that cell like? Can you see just by looking at this? What kind of sound? STUDENT: Increasing frequency. NANCY KANWISHER: Increasing frequency, yeah, something like that right. Here's one that also likes increasing frequency, but slower, shallower increasing frequency. Here's one that likes decreasing frequency. Now, you may be wondering what the stripes are. We didn't talk about this in visual cortex, but this is a common property, that it likes this particular set of frequencies here, but is inhibited by adjacent frequencies. So you also see something like that with orientation tuning in primary visual cortex. And so here, these ones are changing faster, both increasing and decreasing. So the idea is primary auditory cortex in animals, and presumably in humans, is full of a bunch of cells that are basically spectrotemporal filters like this. They are picking out changes in frequency over time that happen to different degrees, and at different rates, and in different frequency ranges. Does that make sense, more or less? Yes, [INAUDIBLE] STUDENT: I have a question. NANCY KANWISHER: Yeah. STUDENT: [INAUDIBLE] how would you tell that was [INAUDIBLE]?? NANCY KANWISHER: Yeah, how do they figure that out? I usually spend all this time talking about the design of the experiment. I just skipped straight to the answer here. Well, I don't know exactly what you do, but you probably-- I mean, this has been a whole thing that went on for decades for people to get at this. So I'm guessing that somehow, they got into that general space, and then they generated stimuli that make all these different sounds. 
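The "linear filter" description can be made concrete with a small sketch (my own toy code with made-up filter shapes, not real STRF estimation): an STRF is just a weighted template slid along the spectrogram in time, so a diagonal template responds to a rising frequency sweep but not a falling one.

```python
import numpy as np

def strf_response(spectrogram, strf):
    """Predicted response of a model neuron: slide the STRF along the
    spectrogram in time and take a weighted sum at each position.
    This is a plain linear filter, the core of the STRF idea."""
    n_freq, n_time = spectrogram.shape
    kf, kt = strf.shape
    assert kf == n_freq, "this toy STRF spans the full frequency axis"
    out = np.empty(n_time - kt + 1)
    for t in range(len(out)):
        out[t] = np.sum(strf * spectrogram[:, t:t + kt])
    return out

# Toy STRF that "likes" rising frequency: excitatory along the diagonal
# (rows = frequency bands, columns = time bins).
strf = np.eye(4)
rising = np.eye(4)          # an upward frequency sweep
falling = np.eye(4)[::-1]   # a downward frequency sweep
r_up = strf_response(rising, strf).max()
r_down = strf_response(falling, strf).max()
```

Real STRFs also have the inhibitory flanks mentioned above (negative weights at adjacent frequencies), but the response is computed the same way: multiply and sum.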
And they just run through them, and they find, for a given cell, you play all these different sounds. You go-- [MAKES SOUNDS],, et cetera. I'll spare you more imitations. You play all these different sounds to the animal and you record the response of that neuron. And you would find, for example, that it responds much more when you play that sound than any of the others. Does that make sense? STUDENT: No, it makes sense. NANCY KANWISHER: But how do they ever hit on that? STUDENT: No, what I was asking is that are they using separate [INAUDIBLE]?? NANCY KANWISHER: Oh, the red and the blue? How exactly they got-- rather than just the simple thing with just that-- how exactly they arrived on that, I'm not totally sure. I mean, there are mathematical reasons why it makes sense to have that whole thing rather than just a single stripe, that I think are beyond the scope of this lecture for the moment. But anyway, it wasn't just a totally arbitrary thing to try. Those are particularly useful kind of receptive fields for representing the input. So everybody sort of clear, approximately, what this idea is? So it's very low-level basic, just are the frequencies going up or down, and which range, and how fast? That's what primary auditory cortex does organized in this map, this tonotopic map. So think of primary auditory cortex as just this bank, this big set of linear filters for particular frequency changes over time. So that's all based on data from animals, from recording individual neurons. But we want to know about humans, not just because that's what this course is about, but we want to know about humans. I mean, ferrets are nice, but really! So is that true for humans. Well, Josh McDermott and Sam Norman-Haignere just published a paper a few months ago in which they addressed this question in a really interesting way. So here's the logic-- this is a little bit technical. I'm trying to give you the gist. I hope it works. Give it a try. 
So they generated synthetically, computationally, what they call "model-matched stimuli." So the idea is this-- the idea is if you present a natural sound-- like a dog barking, or a person speaking, or a toilet flushing, just some sound that you would hear in life-- and then what they do is they make a synthetic signal that matches that sound with respect to those STRFs I just showed you. That is, if you fed the original sound and you fed this synthetic sound into the STRFs, you'd get the same thing in the STRFs. So this is a way of saying, we're assuming that those STRFs are a good description of what goes on in A1, so let's test that by taking a big, fancy, real-world sound that has meaning and people know what it is, and let's make a control sound that matches the STRF properties. And let's see if we get the same response in the brain in that region. If that model is a good description of what that region does, then you should get a very similar response when you give the synthetic sound and the original sound that you recorded in the world. So they tested this on a STRF-like model, like this thing I just described before. And so just to show you what these sounds are like-- so here's an original sound just recorded in the world of somebody typing. [AUDIO PLAYBACK] [TYPING] [END PLAYBACK] OK, OK, OK, that's enough. I know it's riveting, but so then they run that through their STRF model. They get a STRF description and they generate a matched stimulus from their STRF description. And it sounds like this. [AUDIO PLAYBACK] [TYPING] Pretty good. It's kind of hard to tell them apart. Sorry, enough. [END PLAYBACK] All right. And you can see their spectrograms are really similar. So for a textury thing like typing, it really captures the essence of what's being heard. We're just telling you what these control stimuli sound like. Let's take another sound, a person walking in heels. And you can see all those verticals. Those are the clicks. 
Clicks have energy across lots of different frequencies. And that's what a vertical line means-- it means all those different-- remember, this is frequency on this axis. So a vertical line means energy at lots of different frequencies not organized in harmonics, so it's not pitchy. Here we go. [AUDIO PLAYBACK] [HEELS CLICKING] [END PLAYBACK] OK, here's the STRF version, the control stimulus. [AUDIO PLAYBACK] [CLICKING] [END PLAYBACK] So it captures some of it, but not all of it. It captures the sound of each click, but not the spacing between. So it's getting the local properties, but not all of the properties. Yeah. STUDENT: How did you say-- like just the [INAUDIBLE]? NANCY KANWISHER: How do they make it? I didn't tell you because it's complicated. They basically start with pink noise, or white noise, or some kind of noise. They run it through their STRF thing. They run the original sound through the STRF thing. They compare them. And they say, how are we going to adjust the noise to make it more like that? And they just iterate a lot, and they end up with these stimuli. And you can see just looking at it, they ended up with something that's pretty similar in terms of the spectrogram. Let's listen to a person speaking. Here's the original sound. [AUDIO PLAYBACK] - Is that art offers a time warp to the past, as well as insight. [END PLAYBACK] NANCY KANWISHER: OK, now I'm going to turn it off. Here's the synthetic version. [AUDIO PLAYBACK] [INAUDIBLE] [END PLAYBACK] OK, now we've lost something. So does everybody see how with keyboard typing, it really sounds the same, the synthetic version? With walking in heels, kind of, sort of, at least locally, but not globally, and with speech, we've just totally lost it. The stuff that you can capture with a STRF model does not capture the full richness of speech. There's something more in a speech stimulus than you can capture with that just simple STRF model. OK, let's listen to a violin. 
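That iterate-on-noise procedure can be sketched roughly like this (a drastic editorial simplification of the actual synthesis method: here the "model statistic" being matched is just the magnitude spectrum, standing in for the full set of STRF statistics, and the function name is made up):

```python
import numpy as np

def match_model_stats(target, n_iter=5, seed=0):
    """Iterative synthesis sketch: start from noise, then repeatedly
    impose the target's model statistics (here, only its magnitude
    spectrum) while keeping the noise's own phases. The spirit of
    model-matched synthesis: same statistics, different waveform."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(len(target))
    target_mag = np.abs(np.fft.rfft(target))
    for _ in range(n_iter):
        spec = np.fft.rfft(x)
        phases = np.exp(1j * np.angle(spec))      # keep current phases
        x = np.fft.irfft(target_mag * phases, n=len(target))
    return x

sr = 4000
t = np.arange(sr) / sr
target = np.sin(2 * np.pi * 300 * t)              # stand-in "natural" sound
synth = match_model_stats(target)
# How closely does the synthetic sound match the target's statistic?
err = np.abs(np.abs(np.fft.rfft(synth)) - np.abs(np.fft.rfft(target))).max()
```

For a texture like typing, matching statistics like these is enough to fool the ear; for speech and music, as the demos show, it clearly is not.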
[AUDIO PLAYBACK] [MUSIC PLAYING] [END PLAYBACK] OK, what does the STRF model do with that? [AUDIO PLAYBACK] [MUDDY MUSIC PLAYING] [END PLAYBACK] I love that. It sounds like a sea lion colony. Anyway, so what you see is the STRF model totally fails to capture speech and music, but it captures textury sounds like that. And it loses some of the broader temporal scale information. So that's the stimuli. Then you scan people listening to these sounds. Just pop them in the scanner and play those sounds. And so then what they do is they just ask. So this is, again, the white outline is primary auditory cortex where you have that frequency map, mapped in a separate experiment, and just plunk down on the brain here. We're zooming in on that part of the top of the temporal lobe. And so what's shown here is, for each voxel, they're showing the correlation of the response of that voxel to the original sound and the synthetic, STRF-y sound. And what you see is those correlations are really high in primary auditory cortex. In other words, primary auditory cortex responds pretty much the same to the original sound and the synthetic sound. It doesn't detect that difference. But as soon as you get outside of primary auditory cortex, you get something totally different. And so that was exactly the prediction-- the model that's being tested here is a model of how they thought primary auditory cortex worked-- a bank of linear filters. They test that model by generating a new set of stimuli that are matched for those linear filters, and they get pretty much the same response in primary auditory cortex. So check-- that's a good model of primary auditory cortex. But also, the blue shows you much lower correlation out here. It is not a good model of stuff outside of primary auditory cortex. Josh. 
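The voxelwise analysis boils down to a Pearson correlation per voxel across sounds, which might look roughly like this (toy data and made-up variable names, just to show the logic):

```python
import numpy as np

def voxelwise_correlation(resp_orig, resp_synth):
    """Pearson correlation, per voxel, between responses to original
    sounds and their model-matched versions (rows = voxels, columns =
    sounds). High r means the voxel doesn't 'see' the difference."""
    ro = resp_orig - resp_orig.mean(axis=1, keepdims=True)
    rs = resp_synth - resp_synth.mean(axis=1, keepdims=True)
    num = (ro * rs).sum(axis=1)
    den = np.sqrt((ro ** 2).sum(axis=1) * (rs ** 2).sum(axis=1))
    return num / den

# Two toy voxels: an "A1-like" one whose response barely changes for the
# synthetic versions, and a "non-primary" one that responds differently.
rng = np.random.default_rng(1)
orig = rng.standard_normal((2, 30))               # 2 voxels x 30 sounds
synth = orig.copy()
synth[0] += 0.05 * rng.standard_normal(30)        # A1-like: nearly identical
synth[1] = rng.standard_normal(30)                # non-primary: unrelated
r = voxelwise_correlation(orig, synth)
```

Mapped over the cortex, high-r voxels fill primary auditory cortex and low-r voxels fall outside it, which is the pattern described above.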
STUDENT: So isn't this kind of self-fulfilling, in the sense that I build my synthetic stimuli based on these kind of models, and then-- NANCY KANWISHER: It is, except the models were all based on animal work and this is human brains. So this is a way-- but that's exactly right. It's a way of saying all this work from animals precisely characterizing response properties of individual neurons, which you can do in animals and mostly not in humans, do we think that's true of human primary auditory cortex? And yes, it is. Does everybody get at least the gist of that? I realize I skipped over lots of details because I want you to get the general picture. Yeah. STUDENT: What are they trying to achieve by doing this type of [INAUDIBLE]? I mean, the hypothesis is that the human and the animal auditory cortex is the same? NANCY KANWISHER: Primary auditory cortex, yes. Yes. They're basically testing-- you derive that model from the animal work, then you design a test of it, which is making those synthetic stimuli. And I left this out because actually, I don't think they've done that, but presumably, if you test those stimuli with single units in ferrets, you get the same thing. You get very, very similar responses in primary auditory cortex to the original sound and the synthetic version of it based on the STRF model. STUDENT: It's predicated on the assumption that both of them are structurally the same. NANCY KANWISHER: Well, it's testing. It's asking that question. It's asking that question. Because I've occasionally in here lamented about how crappy our methods are in human cognitive neuroscience. I mean, they're fun. We can do something, but we hit a wall pretty fast. We want to see the actual neural code. We don't have spatial and temporal resolution at the same time. We pretty much only get that in animals. We can pretty much only do really careful causal tests in animals. We can pretty much only see connectivity in a precise way. 
And all these things we can do only in animals. And so we need to know if those animal models are good models for humans. And this is a way to test it. And it passed with flying colors. Make sense? So primary auditory cortex in humans seems to be much like it is in ferrets, a bank of linear filters with STRF-y properties. What about everything else? After all, you guys can hear the difference between the original version and the synthetic version of the woman talking and the violin. And if I played you all the other stimuli of real-world sounds, you could hear the differences in many of the other ones as well. So what are you doing? Well, there's lots of auditory cortex beyond primary auditory cortex that could represent that difference. And what this is suggesting is, whatever's going on out here is doing something really different with those sounds. It is not fooled. It does not think the synthetic thing is the same thing as the original thing. That's what the low correlation means. So I'll tell you about just one little patch of cortex out there. And that is-- again, this is just for reference. We've zoomed in again-- this is the little code for separate mapping of high, low, high, primary auditory cortex right there. And what the yellow bands are is selective responses to speech. So you compare a whole bunch of speech sounds to a whole bunch of non-speech sounds, and you get a band of activation right below primary auditory cortex. Yes. STUDENT: I thought the separation was low, high, medium [INAUDIBLE]. NANCY KANWISHER: High, low, high-- I probably said it backwards. That would be like me. But it's-- wait, wait. What the hell is it? I'm pretty sure it's high, low, high. Let's go back and look. I might have screwed it up on the slide or said it backwards, but I'm pretty sure it's high, low, high. STUDENT: So the low frequency is the [INAUDIBLE]. NANCY KANWISHER: Yeah, just like that's the code for frequency, right there. 
But ask me those questions because I'm very capable of getting things backwards, as you've probably already noticed. So there is a band of speech-selective cortex just outside of primary auditory cortex, in that region that we just saw responds differently to the original sound and the model-matched synthetic sound. So that's pretty cool. What do I mean by "speech-selective cortex?" What I mean is-- this is some of our data. I tried to find you someone else's data and I went down a 45-minute rabbit hole trying to find a nice slide. And I just couldn't find a good picture. I finally said, screw it, I'll show you my data, even though I'm trying to-- we're not the only ones who've shown this. We just have the best data. Other people had tested it with four, five, six conditions. We tested it with 165 sounds. So this is the magnitude of response in that yellow region to 165 different sounds, color coded by condition shown down here. And so what you see if you look at it is all the top sounds are light green and dark green. Speech-- notice, importantly, that the response is very similar to English speech and foreign speech which our subjects do not understand. So that tells us that this is not about language. This is not about the meaning of a sentence, or syntax, or any of that stuff. This is about phonemes, the difference between a ba and a pa, which you can do on a foreign language, even if there's a few phonemes that are different. You get most of them. Does everybody get the difference between speech and language? Amazingly, the senior author of the paper you read for last night does not understand that difference. He published a beautiful paper. Every time he comes here to speak, he talks about language, language, language, language. And I say, Eddie, have you ever presented a stimulus that's in a foreign language? He's, like, oh, no, that'd be really interesting. It's like, Eddie, until you do that, you don't know if you're studying language or speech. 
Oh, yeah, really interesting. And then he comes back four years later and he doesn't seem to know the difference between language and speech. I'm, like, hello. Anyway, he does beautiful experiments, but it's just-- it's a blind spot, or it's a misuse of a word. I don't know what it is, but it drives me nuts. Can you tell? Anyway, you guys get that difference even if Eddie doesn't. Let's look at some other things. How about all this light blue stuff? There's a lot of light blue stuff that's almost as high. Oh, that's music with people singing. That also has speech. The speech is slightly less intelligible because it's singing, and there's background instrumental music, so it's a little bit lower. Oh, what's next? We've got some light purple stuff and some dark purple stuff. This is non-speech vocalizations. That's stuff like laughing, and crying, and sighing-- pretty similar to speech but not speech. It's the next highest thing, but it's well down from the speech sounds. And then we have dogs barking, and geese, and stuff like that, that are yet further down. And then we have all kinds of other stuff down there-- sirens, and toilets, and stuff like that. Yeah. STUDENT: Is instrumental music perceived as speech? I mean, I can't make out the colors. NANCY KANWISHER: No. The instrumental music is way down in here. Yeah, it's a little hard to see. That stuff up there is non-speech vocalizations. It's not a perfect slide. So that's pretty strong evidence that that band of cortex is pretty selective for speech. Everybody get that? Yeah. STUDENT: So you're saying it's not like it doesn't process like the other one, so the violin stuff would still be that [INAUDIBLE] NANCY KANWISHER: Yeah, right. OK, good point. Remember when I first showed you the fusiform face area, I showed you that time where it's faces are like this, staring at dot is like that, looking at objects is like this. So I said, OK, there's a little bit of a response to things that aren't faces. 
It's just much more to faces. Now, you guys may not have noticed this because it went by kind of fast, but when I showed you intracranial data from the fusiform face area in that patient who got stimulated there, and saw the illusory faces, the intracranial data showed zero response to things that are not faces. So I think that that's because functional MRI is the best we have in spatial resolution in the human brain, except when we have intracranial data. But it's still blurry. It's blurry because there's blood flow and all of that. So I would guess the same thing here. In fact, I guess it isn't in the paper you read because he didn't have any non-speech sounds, but I will show you. Dana's recording them right now at Children's Hospital, and we have some other ones that I will show you next time, of intracranial electrodes. And they will be even more selective than that. But this is pretty good already. Yeah, Nava. STUDENT: What's the human non-vocal? NANCY KANWISHER: I didn't hear. What? STUDENT: The human non-vocal? NANCY KANWISHER: Oh, that's like clapping, and footsteps, and I forget what else, things where you hear it and you know that's a person, but it doesn't sound at all like speaking or speech. So if it was about the meaning, it could have been all about the meaning of people, could be something telling you there's a person there. Deal with it. But no, apparently not. So we're not the first ones to see this. We've just tested it with more conditions. So our evidence for selectivity is stronger than everyone else's. Given what I've told you today, can you think of a stronger way to test this? For example, suppose I was worried, maybe the frequency composition of the speech is different than the non-speech. After all, those are just recordings of natural sounds in the world that we went out and made, or mostly got off the web, someone else made. And maybe they differ in really low-level properties. 
And so how do we know that that's really speech selectivity, not just selectivity for certain frequencies or frequency changes? Yes. STUDENT: You could run it with the McDermott generate-- NANCY KANWISHER: Bingo, absolutely. Everybody get that? So then we'd know, because those are beautifully designed to match all those acoustic properties, match the spectrogram for all those lower level properties. And McDermott and Norman-Haigenere have done that. And this region does not respond strongly to the model-matched version, so it's not just the acoustic properties. Yeah. STUDENT: Can we also do something like [INAUDIBLE] play speech backwards? NANCY KANWISHER: Yes, people have done that, too. It's a little bit complicated, because speech backward sounds a lot like speech. It's kind of in the intermediate zone. So it balances many things, but one, it doesn't balance all the acoustic properties. So speech has certain onset properties. I forget how it goes, but if you play it backwards, there's lots of-- [MAKING SOUNDS] You've heard backward speech played, right? And so the STRF model would respond differently to forward and backward speech, whereas the STRF model responds the same to the original and the synthetic speech. Make sense? So there's a very speech-selective patch of cortex. And it's speech selective, not language selective. And of course, we want to know-- speech is lots of different things. It's what words you're saying. It's who's saying it. It's your intonation-- are you making a statement, or a question, or what are you emphasizing in the sentence? And it's lots of other things. And the paper you read asked that question. What's coded here about speech? And so I made a whole bunch of slides to explain what the paper said because I thought people would have trouble with it. And everyone nailed it, so I'm not even going to go through them. Maybe I'll just show one in closing. 
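The logic of that 165-sound figure can be sketched in a few lines: average the region's response within each condition, rank the conditions, and summarize with a simple selectivity contrast between speech and everything else. All of the numbers below are invented for illustration; they are not the actual data, only the qualitative pattern described above.

```python
import numpy as np

# Hypothetical condition-mean responses (made-up values mimicking the
# qualitative ordering described in the lecture, not real measurements).
responses = {
    "English speech": 1.90,
    "foreign speech": 1.85,
    "vocal music": 1.60,
    "non-speech vocalizations": 1.00,
    "animal vocalizations": 0.70,
    "instrumental music": 0.50,
    "environmental sounds": 0.40,
}

# Rank conditions by response, highest first.
for name, r in sorted(responses.items(), key=lambda kv: -kv[1]):
    print(f"{r:4.2f}  {name}")

# One common summary: a selectivity index contrasting the preferred
# category (speech) with the mean of all other categories.
speech = [responses["English speech"], responses["foreign speech"]]
other = [responses[k] for k in responses
         if k not in ("English speech", "foreign speech")]
si = (np.mean(speech) - np.mean(other)) / (np.mean(speech) + np.mean(other))
print(round(si, 2))  # about 0.38 with these made-up numbers
```

A value near 0 would mean no preference; the closer to 1, the more selective the region is for speech over the other sound categories.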
So one thing a few of you got wrong-- and I totally get why, it didn't matter-- is that here, this is one patient, and this is the bank of electrodes placed on the surface of the brain. The red bits are the bits where you could account for the neural responses in terms of any of those models-- intonation, speaker identity, sentence, or any of the interactions between those things. And so that just says that's where the action is, is those electrodes there. And that graph down here is from only three different-- each one is a single electrode, just so you get this. So this critical graph here, that shows electrode E1. That's one of those electrodes in one patient. An electrode is typically 2 millimeters on a side. It's probably listening to a few tens of thousands of neurons. So it's one or two orders of magnitude better than a voxel with functional MRI, but it's still averaging over lots of neurons, not a single neuron. STUDENT: The question [INAUDIBLE] averaging over [INAUDIBLE] but it's averaged over [INAUDIBLE]. NANCY KANWISHER: Yeah, that was the response of one electrode listening to male and female. I forget which is which. But other than that, you guys totally nailed it. And notice how precise, and specific, and fascinatingly separated the responses of those electrodes are, segregated for pitch contour, or speaker identity, or what sentence was being spoken. Those things seem to be segregated spatially in the brain at a fine grain. Whether you'd see it with functional MRI-- you might, might not. Many of you pointed out we might not have the resolution. Think about other methods you might use to look for that, even if we didn't have the resolution with a simple binary contrast. And it's 12:26 and I'm going to stop. I will see you guys on Wednesday, and we will talk about music.
MIT 9.13 The Human Brain, Spring 2019. Lecture 9: Navigation II.

[SQUEAKING] [RUSTLING] [CLICKING] NANCY KANWISHER: So we're talking about navigation-- how you know where you are and how you can get from here to wherever else you want to go. And last time we talked about just the general problems that arise in navigation, and we talked about the parahippocampal place area and other parts of the brain that are involved in navigation. So today we're going to continue that, but we're going to talk more about the actual populations of neurons in your head that are involved in doing this. And we'll talk about a particular aspect of the problem of navigation, which is called reorientation. That is what happens when you lose your bearings, and you need to figure out where you are again. Reset your internal map of where you are. And then we'll talk about the idea that this whole system for navigation, cool as it is and fascinating as navigation itself is, is even more interesting because there's increasing evidence that we use that same system for lots of other aspects of high level cognition that have nothing to do with space per se. OK that's-- and then we'll have a quiz, a short quiz. That's the agenda. Here we go. So the basic problems of navigation are, one, where am I? And two, how do I get from here to wherever else I want to go? And as I mentioned last time, we can break down each of these into a bunch of different components and facets of that question. So when we want to know where we are, that can involve recognizing a familiar location. So if you see a photograph or you were plunked down spontaneously in an environment someplace you know, you would visually recognize it, and that would be one way to know where you were. Like, this is my living room. Even if that location is unfamiliar and you're plunked down at random, you still have some idea of what kind of a place this is. Am I in a natural environment? An urban environment? Am I inside? Am I outside, et cetera?
And finally, you would have some sense of where you are with respect to the immediate bounding structures in your immediate environment. Like, for example, where you are in this room. As I'm talking to you right now, I'm aware that there's a wall behind me. That kind of immediate spatial location. In terms of questions that arise when we have to figure out how do we get from here to wherever else we want to go. If you can directly see or hear your destination, then you have the simplest possible kind of navigation strategy. You just go toward that thing. OK, that's called beaconing, and it's like the minimalist case. Works great if you can see or hear your destination. But when you can't, you need to know, where am I in my broader understanding of the layout of my environment and where is my goal. And for that, you need a mental map of your environment, and we'll talk more about that today. That's why it's in red. You also need to know your current heading in that environment. It's not enough to know in my map of the world, I am here with a dot. You needs to know which way you're facing in that map of the world in order to plan your navigation, and we'll talk about that too. We also need to know what routes are possible from here. So I may want to go over to Stata and get a cup of coffee. But I can't go this way. I've got to go around because I can't go through that glass. OK, and so finally, this whole magnificent system that enables us to process all this stuff works pretty impressively. But every once in a while, something will go wrong, and it will get the wrong signal, and then we're lost. And so then we need a way to regain our bearings, and we'll talk about that too. So last time I talked about a bunch of brain regions that are implicated in perceiving scenes and in navigation. We talked about the parahippocampal place area right here and this region over here, formerly known as TOS, now known as OPA. You don't need to remember all that. 
It's the bit that's out on the lateral surface that we can zap because it's out there. And both of those regions seem to be involved broadly in perceiving the shape of space around you. We also talked a bit about retrosplenial cortex, that region that's hiding in the sulcus here that you can see better when you mathematically unfold the sulcus, there it is. Responds more to scenes than objects. And that region seems to be involved in something like getting your bearings-- that is the location and orientation of where you are with respect to your cognitive map and environment. OK, so to make that a little more vivid, I gave you one description of a patient before, but here's from another study. Patients with damage to retrosplenial cortex-- so here's from a recent article-- in every case, the patient with this damage was able to recognize landmarks in their neighborhoods and retained a sense of familiarity. I know that place. That's the coffee shop five blocks from my house. But despite that, none of those patients were able to find their way in familiar environments, and all but one were unable to learn new routes. So they can recognize the visual form of a particular place, but they don't know how to relate that to their cognitive map of the world and therefore plan a route from there. OK, so the part that I only alluded to at the end-- yes, question? AUDIENCE: OK, is the retrosplenial cortex the home to the cognitive maps, or is it-- NANCY KANWISHER: Great question. We don't exactly know. The typical story is that the home of the cognitive map is the hippocampus, which we're about to talk about next for reasons I will tell you. But all of this is a very active area of research. It kills me every time I do these lectures. I look at my old notes, and I think here are these 10 other awesome studies, and then I try to fit them in, and then they just don't fit. 
So actually one question I want to ask you guys after this lecture is, should I in future, either later in this course or in future courses, allocate even more time, or do you guys feel like OK, enough already with navigation? But I just think it's the coolest system. So there's lots of work exactly trying to answer that kind of question. And I'll give you a current snapshot of the approximate state, but all of this is in flux and very much actively investigated. OK, so cognitive map, what do we mean by that? Just to remind you of this classic study from the 1940s in rats, where the rat, when they learned this route and then went up here and found their goal box, the rat immediately comes out and goes straight toward the goal. Telling you they've learned something much more interesting than the series of left and right turns to get to the goal. They must have done something much more like actually learned the layout of space and the relative position of that goal so they could come up with a new vector to get there when the original route was blocked. OK, and you guys can do this too. When your route is blocked, you come up with a novel route. And you do that by having some knowledge of your environment, something tantamount to that in your head, some version of that. And further, you know where you are in that map. Like right now, you know where you are. Now here's the cool thing-- specific neurons in your hippocampus right now are firing telling you that you are right there. So these neurons are called place cells, and this is what they do. OK, so I'll be a place cell, or rather what I'll do is I will act out the activity of a place cell by a series of clicks I will make as I walk around. So imagine there's an electrode in my hippocampus, and you are hearing the activity of a single neuron in my hippocampus as I walk around. And here's what it would do. You'd hear background firing. So it's going to go click, click, click, click, click. Noisy background firing.
Click, click, click, click, click, click, click, click, click, click. Click, click, click, click, click, click, click, click, click, click, click, click, click, click. Click, click, click, click, click, click, click, click, click, click, click, click, click, click, click. Click, click, click, click, click, click, click, click. Click, click, click, click, click, click, click, click. OK, I'm not going to go down there. Click, click, click, click, click, click, click, click. Click, click, click, click, click, click, click. Click, click, click, click, click, click, click, click. Click, click, click, click, click, click, click. Click, click, click, click, click, click, click, click, click, click, click, click, click, click, click. So that's one neuron that fires only when I'm right over there, that place. It's not where I'm facing over there. It's not what I'm looking at particularly. It's when I'm right there. OK, that's a place cell. And so there's lots of place cells in your hippocampus that do that, and they do it for different locations in your environment. And all of this was first worked out, of course, in rodents, who were running around who had electrodes in their hippocampus, but where those electrodes were connected with a loose tether so that the rodent could move around in their environment while recording from individual neurons in the hippocampus. OK, so that's the setup. And so I'm going to show you a movie of an aerial view of a rodent moving around-- a rat moving around in its environment. Can you see the little rat there? And what's happening is this video is tracing out the rat's path with the light gray. And every time that-- and it's recording from one neuron-- every time that neuron fires, it makes a red dot. So this is obviously sped up. But as the rat moves around in his environment, you see an accumulation like more firing when the rat is right there. It's not which direction the rat is going through when he goes there. 
Just basically whenever he passes through that in any direction, neurons fire more than anywhere else. And then if we take that and blur it, as scientists like to do to make nice idealized pictures, that is the place cell for that neuron. That is the place in space that that animal has to be to make that neuron fire. Yeah, question. AUDIENCE: Is it one dimension? I mean, can multiple places be mapped to the same neuron? NANCY KANWISHER: That's complicated. In an immediate environment like this, generally not. OK, I'll show you some examples in a moment. It's more complicated if you follow that cell when the animal moves to a new location. So let me say a few more things, and then if it's not clear, I'll take questions. Oops, we're going to see that again. OK, right so in answer to Sasha's question, here are a bunch of place cells from a rodent exploring the same environment. So you might say, well, there's a hotspot here and a little sub one there. But in general, most of these cells respond with a hotspot in a particular single location in this particular environment. OK, did you have a different question about that? AUDIENCE: Yes, so that all depends on the rat being conscious of the fact that it's in that place? NANCY KANWISHER: Uh-huh. Uh-huh. If the rat was anesthetized or if he was blindfolded and you passively moved him around in that space and he had no idea-- no way to tell where he was, that wouldn't work. However, if the rat knows the environment and then you do this in a darkened room where he's actively locomoting around, these things will still work pretty well because rats are very good at keeping track of where they are, even without visual cues if they know the environment. They'll have other cues like tactile cues, and they will know how far they went in each direction. Remember I talked briefly about the Tunisian ants doing dead reckoning. 
Keeping track of their vector and speed at each moment and integrating the whole thing to know where they are. That's called dead reckoning. Rats are pretty good at that too. Another question over here? Yeah. AUDIENCE: For the place cells, do they have a map as well [INAUDIBLE].. NANCY KANWISHER: We'll get there. Great question. We'll get there. I'll just give you the answer. No, they don't. It's too bad. They could have. They could have been all organized, but it's actually a little complicated. How would you organize them? What if you learned more stuff off of the edge of space? What if you had a whole other piece of your hippocampus? It would be inconvenient, so maybe that's why it doesn't work. Whereas with visual space, your retinotopic information always stays the same. We don't have to suddenly add a whole new part of retinotopic space, thereby screwing up our retinotopic maps in the brain. I'm just making that up as a possible reason. I don't know if that's why. Yeah, sorry, behind you David, tell me your name. AUDIENCE: Justice. NANCY KANWISHER: Yeah, right, hi. AUDIENCE: So I was wondering if you're in a smaller space comparatively or a bigger space, will areas of these specific place cells, what they're mapping to, will they also scale up right now? NANCY KANWISHER: That's a great question. I don't know the answer. My guess is they'll scale according to the space. So if my fake place cell fields that I just acted out over there is maybe five feet across, if I was then confined to a little space, you'd probably have smaller ones for that space, but I don't know. Let me say a little bit more about this. So just to cash this out, the place field is the location in space the animal has to be to make that hippocampal cell fire. OK, so let's distinguish that from a receptive field in visual cortex, which is a similar idea but a different one. 
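The procedure behind those rodent movies (plot a spike dot on the trajectory every time the neuron fires, then blur) can be sketched as an occupancy-normalized rate map: bin the arena, count spikes and visits per bin, and divide. Everything below (arena size, field center, firing rates) is made up for illustration, not taken from any real recording.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated trajectory: a random walk bounced around inside a 1 x 1 arena.
pos = np.array([0.5, 0.5])
path = np.empty((20000, 2))
for t in range(20000):
    pos = np.clip(pos + rng.normal(scale=0.03, size=2), 0.0, 1.0)
    path[t] = pos

# Hypothetical place cell: firing rate falls off as a Gaussian around
# an invented field center at (0.7, 0.3).
center = np.array([0.7, 0.3])
rate_hz = 20.0 * np.exp(-np.sum((path - center) ** 2, axis=1) / (2 * 0.05**2))
spikes = rng.poisson(rate_hz * 0.02)       # spike counts in 20 ms time bins

# Occupancy-normalized rate map on a 20 x 20 spatial grid.
nbins = 20
ij = np.minimum((path * nbins).astype(int), nbins - 1)
occ = np.zeros((nbins, nbins))             # time spent in each bin
spk = np.zeros((nbins, nbins))             # spikes fired in each bin
for (i, j), s in zip(ij, spikes):
    occ[i, j] += 1
    spk[i, j] += s
rate_map = spk / np.maximum(occ, 1)        # spikes per visit, per bin

peak = np.unravel_index(np.argmax(rate_map), rate_map.shape)
print(peak)  # the hottest bin should land near (14, 6), i.e. near (0.7, 0.3)
```

Dividing spikes by occupancy is the key step: it separates "the cell fires there" from "the animal merely spends a lot of time there."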
A receptive field and visual cortex is the location in the visual field where a stimulus has to be to make a visual neuron fire. Not where the animal itself has to be, where the stimulus has to be. So keep those ideas separate. They're related but different. OK, so what about we and rodents tend to go around mostly on a 2D plane. That is we have buildings and trees and stuff. We sometimes go up in the z-axis, but mostly we live in a 2D plane, but that's not true of all animals. So recall the bat that I mentioned last time. These amazing flyers and navigators who fly in 3D and complicated trajectories and yet have amazing abilities to keep track of where they are over 30 to 50 miles that they fly at night and even as they change their orientation. Well, it turns out that in the hippocampus of bats, there's a bunch of work where people have put remote-- what do you call these things-- recording devices on bats, where you can remotely record neural activity in the hippocampus as the bat flies around. And it turns out that bats have place cells too, and their place cells, as they also can do this in a lab environment where they're flying around and you keep track of their location with cameras. So you know exactly where they are in 3D space. And it turns out that place cells in bats are three dimensional because bats live in a three dimensional world. So whereas rodent-- these would be a bunch of schematized place cells for different hippocampal cells in a rodent, these are different place cells for different hippocampal cells in a bat. Make sense? Bats need this. They need to know-- not making sense? OK, so the bat is moving around in three dimensions. Its place field isn't just like the one I did there. I can't act this out because I can't fly, but that place cell might fire over in that location. But then if the bat flew directly above it, it wouldn't. So it's got three dimensions. OK. OK, so I said before that I had one of those, and I acted it out. 
But what's the evidence for that? The evidence in humans came way after the evidence in rodents. Because as you can imagine, it's harder to arrange to record from individual neurons in human hippocampus. Nonetheless, as I've mentioned a few times, there are occasional opportunities where a neurosurgeon has stuck an electrode in an interesting part of the brain for clinical reasons, and the patient and the neurosurgeon are nice enough to let scientists collect data. So I'm going to show you a really gross bloody picture. If that's going to bother you, just look away. OK, so this is neurosurgery. You take the skull off, you take the dura off. That's the direct surface of the brain. The neurosurgeons stick electrodes right on top of there. And in this case, they put them deep inside the brain. OK, the gross pictures are gone. We have just a nice clean X-ray here. So in these cases, this is a patient who's got an electrode sticking straight into the brain from the surface straight down into the hippocampus. OK, kind of horrifying, but sometimes clinically called for. Seizures very often start in the hippocampus, so this is a common place for clinicians to put electrodes. And so what would you do if you had a patient who was willing to do your short experiment while hanging out in the hospital waiting to have a seizure with electrodes in their hippocampus? Well, you'd have them play a little game in a virtual space in some kind of-- you don't even need VR. You can use a pretty cheesy little video game, and I'm sure this one was quite cheesy. This study was done back in 2003. So they had patients navigate through a space-- this is an aerial view of the space. The patients didn't see that. They saw this front view, and they navigated around with the joystick in that space. And there were three visually recognizable locations in that space, and they had to do things to go from one location to another. OK, details don't really matter.
So all the while, Ekstrom and colleagues are recording from individual neurons in this patient's hippocampus. OK, so here's an example of a place cell. So this is a diagram of the space I just showed you, with those three recognizable locations and other locations that the patient could virtually navigate through with the joystick. The red lines are the patient's trajectory as they moved around in that space. And the colors within each square are the average firing rate when the patient navigated through that location. And so this is the place field of that individual cell in this patient's brain as they went through this space. Because the firing rate there was around five hertz compared to three hertz for some other locations and mostly lower than that. OK, does that make sense? So just like the rodent experiment, but it's a person with a joystick looking at this space as they go through this virtual environment, and we're mapping out their place fields like that. OK, so that shows that humans have place fields in their hippocampus just as rodents and bats do. Yeah? AUDIENCE: Well, this is independent of landmarks? NANCY KANWISHER: That's a very complicated question. This patient had access to landmarks. They are seeing as they go through. So one could ask, for example, if you did it with your eyes closed and you had to go by dead reckoning remembering the left and right turns you had in a familiar environment, how well could these things go, they would go for at least a while. They'd probably go for longer in rodents because rodents are more accustomed to navigating in the dark. And they rely less on visual cues and more on other cues. But yeah, place cells aren't just visually responsive. So if we had, for example, if we set up a distinctive sound source in this corner of the room and a different-- like say somebody was singing quietly over here, and we tied a dog over there who was barking. 
And you walked around in this room with your eyes closed, you'd have a good way to keep track of your bearings as you moved around because you'd know that the singing was coming from here and the dog barking was coming from there. You wouldn't be seeing anything. Your eyes would be closed, but your place cells would work pretty well. OK, so whenever you have some basis for knowing where you are, no matter what modality is telling you that-- and usually it's many modalities-- those place cells will go. OK, so humans have these things too. So you can think of the place cell as the kind of "you are here" system that is the whole set of place cells. Any one place will only tell you are you in this particular location or not. But you have a whole array of them, then collectively, that whole representation across all of those neurons can tell you where you are in your familiar environment. OK, but if you want to not just know where you are but you want to go somewhere else, like there, you also need to know your current heading as we discussed last time. So it turns out that there is a whole other batch of cells that tell you what way you're heading. OK, these are called head direction cells, also first studied in rodents. And each head direction cell responds when that rodent is heading in a particular direction, not in another direction. OK, so for example, if we're mapping along the x-axis different heading directions. So the rodent is facing in different directions in his environment. You map up the whole 360 degrees, this would be the response of one cell as that rodent moves around. This one would be tuned to this particular direction. It would fire only when the rodent was facing this way, not when it was facing this way or this way or this way or this way. So does everybody get how where you are in space is different-- that's not a very good way to show this. Where you are in space is different from where you're aimed and headed in that location. 
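The tuning curves just described can be sketched as a bank of head-direction cells whose preferred directions evenly tile the full 360 degrees; reading the current heading back out of the population is then just a firing-rate-weighted vote. The cell count, tuning width, and peak firing rate below are all invented for illustration.

```python
import numpy as np

# A bank of 36 hypothetical head-direction cells, preferred directions
# evenly spaced around the circle.
n_cells = 36
preferred = np.linspace(0.0, 2 * np.pi, n_cells, endpoint=False)

def population_response(heading, kappa=4.0, peak_rate=30.0):
    # Von Mises (circular bell-shaped) tuning curve, evaluated for
    # every cell at once; heading is in radians.
    return peak_rate * np.exp(kappa * (np.cos(heading - preferred) - 1.0))

def decode_heading(rates):
    # Population vector: each cell votes for its preferred direction,
    # weighted by how hard it is firing.
    x = np.sum(rates * np.cos(preferred))
    y = np.sum(rates * np.sin(preferred))
    return np.arctan2(y, x) % (2 * np.pi)

rates = population_response(np.deg2rad(77.0))
print(np.rad2deg(decode_heading(rates)))  # recovers roughly 77 degrees
```

This is the sense in which the whole set of cells, taken together, works as a compass even though no single cell does.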
OK, two orthogonal axes relevant to your location. Yeah? AUDIENCE: So this isn't the angle of the head with respect to the body, right? It's the entire-- NANCY KANWISHER: I think I meant to look that up again because this question always arises. I think that there's some muck about that in the literature, which is why I never remember a clear answer. Usually in a rodent, they're essentially the same. Because rodents can turn their heads a little bit, but mostly they're going to keep it aimed the way they're moving. So I don't know. This is a long, complicated excuse that I forget what the answer to that is. But send me an email, and I'll look it up. I meant to before this lecture. I just ran out of time. OK, most of the time, they'll be the same. Actually, I'm pretty sure it's which way your body is facing. Because if I turn like this-- well, anyway, I'm not going that way. Yeah. AUDIENCE: Have you found at least 360 cells for each angle? NANCY KANWISHER: You mean, are there cells for each? Yes, yes, they pretty much evenly tile the 360 degrees around the animal. Yeah, so collectively, that whole set of cells, just as a collective set of place cells, is sufficient to tell the animal where it is. A collective set of head direction cells is sufficient to tell the animal which way it's oriented. OK, I think we just said-- all these things are in a structure called the-- well, first found in the structure called the subiculum, which is part of the hippocampus. But since then, they've been found in lots of different regions. You don't need to remember that. So they get input from lots of different kinds of information. There's many different ways to know which way we're oriented. For example, too bad we don't have a rotating chair. If we did, I would have done the following ridiculous thing. I would have sat one of you in it, and told you to close your eyes. And I would suddenly turn it. And the person in the chair would notice that.
That's your vestibular system that tells you if your body is being turned, even if you yourself don't decide to turn it. It will tell you if you get turned. That's another cue that provides input to the head direction cells, just as visual information does and potentially auditory information and lots of other kinds of information. So many different sources of information feed in to inform these head direction cells about the orientation of the animal. All right, so you can think of this as the brain's compass telling the organism what way they're facing. And lots of organisms have versions of this. In the fly, there's an amazing structure that was discovered just a couple of years ago, where there's a whole layout of this little neural structure-- I forget what it's called. But actually spatially in that structure, there's a little array of direction cells. So actually you can see it in a little spatial map of direction in that little structure in the fly. In humans and primates and rodents, it's not organized spatially like a literal map of direction. OK, so now we have where you are and which way you're facing. One pool of cells, place cells for where you are. Another pool of cells for which way you're heading. But those are just-- we're just getting going here. The coolest navigation-related cells are grid cells in entorhinal cortex. OK, so this is a slice of the brain like this, showing the hippocampus, this folded up thing right here. And entorhinal cortex is just right next door. OK, so in entorhinal cortex, these things were discovered around a dozen years ago, maybe 15 years ago. And I'm going to show you a video of a rodent moving around his environment mapping out activity, like we saw before. But now we're in entorhinal cortex, and this neuron is going to be a grid cell, and you'll see why as it moves around in its space. Maybe. Come on. Here we go. OK, so there's a rodent. He's moving around. That's the tether taking the neural activity.
The white dots are every time this one neuron fires, we're following one neuron this whole time. And the rodent is moving around, sped up video so you can see this happening. And at first, it looks completely random. But as the rodent keeps migrating around in his space there, you start to see that they're like blobs in there. It's not totally random. There are particular blobs that are clustered. And oh my god, those blobs are organized in a hexagonal grid. It's a hexagon. Isn't that awesome? That's a grid cell. And whoops, here we go. We don't need to see it again. So this is a picture of what you just saw, the trajectory of the animal and the hot spots in that array. And here's a smoothed mathy version of where the firing is significant in that space, both showing you hexagonal grid cells. OK, so this is at first glance a very weird thing. Why would it help to have essentially a place field that has multiple different places that make it fire? OK, and actually somebody asked earlier whether place cells have two hot spots. Place cells generally have one, but grid cells, as you see, have many organized in this grid. So the kind of circuitry and math of this whole system is mind blowing and super exciting, and the talk that I mentioned yesterday was on this topic. And many people are working on this, and they're working out like really deep interesting math about how you can take these cells, how they're arranged spatially in the brain at multiple scales and how you can use them to do path integration and keep track of how far an animal has gone along its trajectory. It's a little bit much for this course, but I'll just say the current thinking is what these cells enable us to do is to keep track of how far we've gone in each direction, and that's really crucial in navigation. We need to know where we are, not just by the landmarks we see.
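The hexagonal firing map in that video is often idealized in the modeling literature as the sum of three cosine gratings whose directions are 60 degrees apart; the peaks of the sum fall on a hexagonal lattice. This is the standard textbook construction, not anything fit to the lecture's data, and the grid spacing below is an arbitrary choice.

```python
import numpy as np

spacing = 0.3                                  # chosen distance between peaks
k = 4 * np.pi / (np.sqrt(3) * spacing)         # wave number giving that spacing
angles = np.deg2rad([0.0, 60.0, 120.0])        # three directions, 60 deg apart

# Evaluate the sum of the three gratings over a 1 x 1 arena.
xs = np.linspace(0.0, 1.0, 201)
X, Y = np.meshgrid(xs, xs)
rate = sum(np.cos(k * (X * np.cos(a) + Y * np.sin(a))) for a in angles)
rate = np.maximum(rate, 0.0)                   # firing rates can't be negative

# All three gratings peak together at the origin, and again at a
# hexagonal lattice of points separated by `spacing`.
print(rate[0, 0])  # 3.0, the maximum possible value
```

Plotting `rate` as an image reproduces the hexagonal array of hot spots seen in the smoothed grid-cell maps.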
We need to know how far we've gone in a given direction, and the thought is that that's the function that these grid cells primarily serve in navigation. And so that's especially important for dead reckoning, like integrating where you've gone according to your trajectories. OK, so you also need head direction cells at each point. So you can think of the head direction cells as telling you the orientation of your vector and the grid cells as telling you the magnitude of the vector, of how far you went. And then you take a whole bunch of those, and you integrate them, and you know where you've gone from your starting point. And lots of animals do all that math in their head. It's pretty complicated integrals, but they all do that. OK, so this is super awesome work. And fittingly, the 2014 Nobel Prize was awarded to the Mosers, a then husband-wife team, who discovered the grid cells, and also to John O'Keefe, who discovered place cells decades earlier. And it's a super exciting line of work, and it's continuing to be a very exciting one. OK, so so far we've talked about place cells in the hippocampus, direction cells in the subiculum and lots of other places, and grid cells in entorhinal cortex. And this is just a schematic diagram of where those locations are. The anatomy is complicated, and you don't need to know it. Know they're all in the hippocampus and its neighboring structures. That's good enough for here. Well, OK, know that the grid cells are in entorhinal cortex and the place cells are in hippocampus. That's worth knowing. OK, direction cells are kind of all over. OK, so that's cool, but there's one more cool kind of cell-- actually there's several more. The new one I never heard of was reported in this job talk yesterday, but we won't go there. We'll try to keep it simple. Another well-established one is called a border cell. So these are the place fields of three different neurons from an animal moving around in this space.
OK, so you see how these are very interesting kinds of place fields. They're not just a nice round blob. They stretch along a whole border of the animal's environment. OK, so does that make you think of anything? Does that ring any bells with other stuff we've talked about in here? I think we've talked a bunch about how the parahippocampal place area cares about the shape of space around you. Well, you might think that you'd really want to have awareness of where you are with respect to navigational barriers. It turns out border cells respond not just to walls. If you put a rodent in an environment where there's a cliff they can't go off, the border cells also respond to the edge of that cliff. OK, so any navigational barrier, basically telling you where you are with respect to navigational barriers. OK, all right. Blah, blah, blah. OK, so as I mentioned in the last lecture when we talked about the parahippocampal place area, the shape of space around you has this kind of privileged role in many aspects of navigation. OK, so now we're going to talk about this problem of reorienting or regaining your sense of direction once you've been disoriented. And so, again, I mentioned this before. But just to give you the intuition of what we're talking about here, you come up from the subway in Manhattan or any other environment that's rectilinear that you know, and you know which stop you're coming up at. So you kind of know where you are, but you come out and you don't know which way to head. You don't know which way is which. So that's a modern version of a classic problem that animals face in their environment. They may know where they are, but that doesn't tell them which way they're facing. So just to be really concrete about this, so here's an aerial view of a person. You're standing here. You have a cognitive map in your mind, and your place cells are telling you your location in that map. OK, so you know where you are in that map.
And you're looking down a street, so you know that you're oriented with respect to some external axis like this. But you don't know how your mental map should be aligned with that street. Are you facing like this, facing north in Manhattan, or are you facing south? All right, so that's the problem of reorientation: figuring out your particular orientation, not just your location but which way you're facing in a known environment. And we've all faced some version of this presumably at some point, and it's annoying. It takes a while to figure out. And then I don't know if anybody's had this experience. I've had it only in Manhattan because that's where this arises for me, but I'm sure there are other locations. Where you come up and you think you're going one way, and then all of a sudden, it's like your whole mental map goes kaboom. How many people have had that experience? It's very sudden and punctate. Yeah, it turns out that when that happens, all of your neurons flip together in unison. Like they're all in cahoots. They have one version of this. When you have that experience, it's because they're all flipping together, and I'll show you some data on that in a second. OK, all right. So there's a very evolutionarily old system for solving just this problem. And it's a wonderful little piece of the literature that I'm going to spend a couple of minutes on because it's so classic and so cool. And this started with work by Randy Gallistel in the 1980s. And so what he did was he studied this problem of reorientation-- that is, figuring out your orientation in a known environment once you've been disoriented. It's a very particular aspect of the problem of navigation. So he put rats in a rectangular environment, and he had them explore the environment. And then he hid some rat-relevant thing, like a little piece of food, say a chocolate chip in that corner. OK, rat sees that happen, rat is interested.
Take rat out of box before they get to go take the chocolate chip, and then you disorient the rat. You don't grab them by the tail and swing them around, but you do some slower version. You don't want to make them sick. You do some slower version of that so they've lost track of which way they're facing. OK, now you put them in a new box-- new box because you don't want the smell to still be there, new box, and you see which way the rat goes. And you find that the rat goes 50-50 to those two corners. What does that mean the rat has encoded? He doesn't go randomly to any corner. He goes to corners-- he knew it was in a corner. He doesn't go randomly to any corner. Yeah, Ben? Jack, I'm sorry. AUDIENCE: Or you can turn it around to the left. NANCY KANWISHER: Say again. AUDIENCE: Like it's specifically in one of these directions. So the left [INAUDIBLE]. NANCY KANWISHER: You've got to say a little more than that. What's to the left? What's different about those two corners than the other two? Yeah, Isabel? AUDIENCE: Well, if he's looking at the shape of the room-- i.e. these two longer walls and two shorter walls-- he recognizes that the space [INAUDIBLE] he has to go to what looks like the right [INAUDIBLE]. NANCY KANWISHER: Exactly. He has to have encoded the axis-- the fact that the room is longer on one axis than another. And he's essentially encoded that the chocolate chip was on the right side of the long wall or the left side of the short wall, and both of those corners are consistent with that. That's why he goes 50-50 to them. He can't go 100% of the time to the right corner because he has no information that would tell him that in this experiment. OK, everybody clear? So it tells you he learned where the thing is with respect to the shape of the room and its particular aspect ratio. OK, so now the plot thickens, and now they repeat the experiment. But this time, they make some very rat-salient asymmetry over here.
You make a color and a texture, and you make other things to make this wall very saliently different. So you would think the rat, motivated to find the chocolate chip, would now go 100% to that corner when we put them in the new box with the same landmark cue over there. But no, the rat goes 50-50 to the same two corners. And in control experiments, many control conditions, you can show-- and I'll show you one in a moment-- the rat absolutely knows about this wall. He's encoded the presence of that asymmetric wall, so he has the information that should enable him to break the symmetry, but he doesn't use it. That's weird. You should be surprised. OK, everybody get why that's weird? He could have solved this one perfectly this time. He has the information. He's not using that information. OK, so that's weird. But then Liz Spelke and her colleagues came along 10 years later and said, let's try this with infants. And so they did the infant version, where you put the infant in a room with a symmetrical-- in a rectangular room, and you hide the doors so the infant doesn't have any cues other than the shape of the room. 18 to 24-month-old infants, and then you hide a toy in a corner, and you see what the infant does. Actually what you do with the infant is you make this wall really salient in all kinds of ways. In one case, it was red velvet, and they first showed the-- and these are, I guess, toddlers. They first show them that when you knock on the red wall, music happens. Totally cool, riveting for a little kid. They totally get it. They know all about the music wall. Very salient to them. Nonetheless, you put them in this experiment, and they behave just like rodents. They go 50-50 to the two corners. Even though they notice the red music wall, and it could have solved the problem for them perfectly. And they were motivated, but they didn't use the information. Everybody get why that's kind of interesting and kind of surprising? 
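The geometry behind that 50-50 behavior is just the two-fold rotational symmetry of a rectangle: rotating the box 180 degrees about its center maps the box onto itself and maps each corner to its diagonal opposite, so shape alone cannot tell those two corners apart. A tiny sketch (the box dimensions and names are arbitrary, just for illustration):

```python
# A hypothetical 4 x 2 rectangular box (length x width).
L, W = 4.0, 2.0

def rotate_180(point):
    """Rotate a point 180 degrees about the center of the box.
    The rectangle maps onto itself under this rotation, so purely
    geometric cues cannot distinguish a corner from its image."""
    x, y = point
    return (L - x, W - y)

target = (0.0, 0.0)        # corner where the reward was hidden
twin = rotate_180(target)  # the geometrically identical corner
print(twin)                # (4.0, 2.0): the diagonally opposite corner
```

That's why the disoriented rat, the toddler, and the verbally shadowed adult split their searches between the target corner and its twin, unless they use a non-geometric cue like the striped or red wall, which, remarkably, they don't.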
OK, now you might say, OK, rodents, infants, they're dummies. We wouldn't do that, us smart adult humans. Would we? But oh yes you would under certain circumstances. If we tied up your language system-- and there's lots of ways of doing that. One way is called shadowing. So it's like simultaneous translation but you don't translate. Try this sometime. I do this occasionally when I'm bored in my car just because it's amusingly difficult. Turn on the radio, listen to somebody talking, and just repeat everything they say after they say it. I'm not even translating. It's still demanding. So you have to be listening and producing. OK, running thing. OK, so that's called verbal shadowing, and it's an established way to really tie up your language system and take it offline so you can't really use it. When you do this experiment on human adults, if they're verbally shadowing and their language system is tied up, they behave just like rodents and infants. That is, they use the shape of the space, but they don't use salient landmarks that could help them solve it perfectly. They go 50-50 to the two corners. They become rats and infants. We become rats and infants. OK, so Liz Spelke has spun a whole fascinating big theoretical story about what this really means. Well, let me just say a little bit more about this first before I do her whole big story. OK, yeah so the idea is-- so first of all, why would it make sense for rodents at least-- let's just consider the rats-- to use only the shape of space to reorient themselves when they're disoriented? At first glance, that seems really crazy. But if you think about rodents in natural environments, the idea is that actually in natural environments, features change. Snow comes and goes. Plants come and go. Odors change.
All those kinds of features of the environment can change, but the shape of the environment, like that there's a slope like this and a barrier here and a cliff there, those are more stable features of the environment. So it actually makes evolutionary sense for disoriented rodents at least to use the shape of space more than the features-- the colors and textures and odors of a space-- as landmarks to reorient themselves. Does that make sense? And so the idea is that rodents have through evolution evolved this system for reorienting themselves when they lose their bearings that relies only on the shape of space so restrictively that even if another cue becomes relevant and important, they don't use it. And the further idea is that we have some version of this system in our heads as well. And as smart adult humans, we learn all kinds of other strategies to get beyond this. We're not trapped with only being able to use this one system to solve it. We can use other systems-- possibly language to help us say things to ourselves, like it's on the left side of the short wall. That's what Spelke thinks. There is some version in your head of, it's on the left side of the short wall, and that's why adults can do this when their language system isn't tied up. I don't think that's exactly right, but it's a beautiful story, and there's some evidence for it. OK, anyway, part of the reason I go through this whole thing-- well, one, I think these experiments are cool, but it's also been the basis of a core idea in cognitive science, and that idea is called informational encapsulation. So think about it-- it's just lots of syllables for a pretty simple idea. That you have this system for reorientation, and it is designed to use the shape of space around you as the cue that you use to reorient yourself when you're disoriented. That system is hardwired to do just that.
And if some other part of your brain has information that could solve the problem, like the presence of a relevant feature that you could use, you don't have-- your reorientation system doesn't have access to that information. It's informationally encapsulated. It only has access to the particular inputs that are hardwired into it. And so 20 years ago, a lot of people went wild with this and said that all the brain regions that I've talked about and cognitive systems that we're considering in this course are informationally encapsulated. It's kind of an extreme idea that goes far beyond functional specificity to say the inputs are extremely restricted to each region, and that's probably not true. But there are some limitations on the information that each of these processors we're considering in this course has access to. And this is the classic evidence, behavioral evidence that some of those systems have very restricted inputs. Does that make sense, the idea of informational encapsulation? Not as an absolute truth about the brain, but as an idea that is interesting to consider individually for each of the systems we study. There's been pushback about the extremeness of this claim that infants and rodents only use the shape of space. There are circumstances where you can get them to use other information, but it's definitely true that the shape of space is the dominant cue for reorienting in rodents and in infants. All right, so when you're lost, as I've mentioned, there's two questions you need to answer-- where are you and which way you're oriented. This last stuff we were talking about is about the which-way-you're-oriented question. OK, and I just showed you some evidence for this general finding that the geometric cues, the shape of space, are the dominant cues you use to reorient yourself, to get your heading back when you're disoriented. But do we really know that those cues are different for place recognition and for heading direction?
So I've said, here are two different parts of the problem. But do they function differently? Do we really use different cues? Do we use the shape of space more for heading direction and maybe other cues for place recognition, for knowing where we are? OK, so I'm going to show you a very elegant behavioral experiment in mice that does this all at once in one experiment. So this is Josh Julian, a former lab tech in my lab. I get no credit for this whatsoever. I'm proud even though I shouldn't be proud. He was just an endogenously smart guy who went on and did an awesome experiment after he left my lab and went off to grad school, and here's his awesome experiment. OK, so he said, let's get mice to do both of these tasks. They have to know where they are and which way they're oriented. OK, we're going to do the same disorientation thing. Take them out, turn them around till they're disoriented. But these mice have to learn two different environments. OK, one environment has the vertical stripes on the short wall, on one of the short walls. The other environment has horizontal stripes on the short wall. So you do the same experiment. You bait one corner, and you see where the rodent goes. Does he go to the two opposite corners? Exactly the same experiment, but he has to remember which room is-- to solve the problem, he has to know-- he has to discover, rediscover the vertical stripes or the horizontal stripes and act accordingly. Because when he's in the vertical context, the thing gets hidden. He does this over repeated trials. The food gets hidden on the-- hang on, let me get this right. Long wall on the left. Do, do, do. Yeah, right. Yeah, so when the long wall is on the left of the rodent. OK, that corner, the long wall is on the left. Everybody oriented? Whereas when he's in the blue context, the reward is here, the long wall is on the right. OK, so he has to learn those two different environments and that the relevant shape cues are opposite in each.
OK, everybody got that? OK, now what you find is that the rodent can learn that just fine. OK, so this shows that when you put the rodent in the vertical context in a room like this, they go more to these two corners than those two corners. Whereas when you put him in a horizontal context with horizontal stripes, he goes more to those two corners than those two corners. That tells you the rodent has used the orientation of the stripes to figure out which room he's in and hence, which two corners are the right ones. Everybody got that? But here's the amazing thing-- even though in this experiment, the very same animals in the very same trials are using those stripes to figure out which room they're in, they don't use those stripes at all to break the symmetry and to go only to the correct corner, which they could do but don't. So once you've trained the rodents on these two things, that the reward is here in the vertical context and there in the horizontal context, you disorient them, you put them back in. You find that when you have vertical stripes, they go to these two corners-- I'm just repeating the data. When there are horizontal stripes, they go to those two corners. OK, they've learned that. But why do they go to those two corners? They learned the damn stripes. They used them to know which room they're in, but they don't use them to break the symmetry and decide which is the correct corner. OK, so this is like a microcosm of everything I've been saying so far all in one experiment. The rodents are noticing those feature cues, using them to figure out which room they're in, where they are, but failing to use those features, the orientation of the stripes, to figure out which of the two corners is the correct one. They're not even encoding food is near stripes. Like, duh, that should have been easy. All right, so this is a beautiful-- I mean, this is more evidence for informational encapsulation of this system.
Because it shows us on the very same trial, they used the stripe information to know which room. They failed to use it to figure out their orientation in that room. Is this sort of making sense? I realize it's kind of subtle. It's sort of simple and subtle at the same time. Yeah? AUDIENCE: So with cells that you showed us in the very first maps-- NANCY KANWISHER: What are they doing here? AUDIENCE: Yeah, exactly. NANCY KANWISHER: Great question. Let's look at that. That's what we're doing next. It's a great question. What are the damn place cells doing here? Great question. OK, let's say a little bit more, and then we'll think about what the place cells are doing. OK, so let me just restate, cash out the findings here. The mice are using the features to figure out which place they're in. Are they in this one or that one? But they are failing to use those features to figure out which is a correct corner. They're still 50-50 for the two corners, even though logically they have that information, and they could use it, and they should use it, they don't. So that means the mice are using features-- in this case, orientation-- for place recognition, but not for regaining their orientation within that place. I'm just repeating what I said before. Is that making sense? OK, so now David's question, what are the place cells doing here? Great question. Let's look. It's mice, so we can do that or Keinath et al. can do that and Josh Julian, my amazing former lab tech. So again, I get no credit whatsoever. So what do they do? They allow the mice to forage for crumbs in a box like this. OK, they disorient the mouse before each trial. Take them out, turn them around so he doesn't know which way he's facing. Put them in the box. And they find that place cells have a particular location in that box. Not surprising. That's what place cells do. So here are two different trials-- two different cells that were mapped out in a rodent doing this. This cell responds always in that corner. 
Another cell responds only in that corner. OK, these are just place cells like we described before doing what place cells do. But now, sometimes those place cells are off by 180 degrees, even though the stripes should resolve the ambiguity. OK, so those same cells on other trials respond to the opposite corner. So the place cells are doing just what the rodent is doing. The place cells are confused. Am I facing-- am I oriented like this, or am I oriented like that? The place cells don't know, and the rodent doesn't know. And the coolest thing about this experiment is that these things are linked. On the trials where the rodent goes to the wrong corner, the place cells are also in the wrong corner. OK, they systematically determine which way the animal will go. Oh, and also as I mentioned before, all those cells are in cahoots. They're all in sync going the same way. So when one of the cells rotates to the opposite corner, all the other ones rotate to the opposite corner. So it's as though somehow from trial to trial, the rodent thinks he's oriented in one way; he's actually 50-50 which way he's oriented. He's not using the feature cues, and his behavior, according to where he looks for the food, exactly follows the way he's oriented, and so do all of his place cells. OK, that whole system goes together. That tells you that those place cells are relevant behaviorally. They are the system that either directly determines or is tightly linked to the system that determines which way the animal thinks he's facing. OK, I realize this is a little bit complicated. Does it make sense to you that as we've been talking about with reorientation, even though the animal should know from the stripes the difference between that corner and this corner, he doesn't know behaviorally. He's looking for food right there, and yet he goes 50-50. Weird and stupid, right? Place cells do the same thing. And further, the place cells and the behavior go together. Yeah, Sasha?
AUDIENCE: So if you're reading information off of place cells, can you zap it? Can you [INAUDIBLE]? NANCY KANWISHER: Wouldn't that be nice? Turns out you can't, for the reason someone over here asked a long time ago. You, I think. And that's because they're all interleaved together. And if you zap just one, you're not going to have-- one cell, you're not going to have an effect. And if you zap a whole region, you get all of them and you get muck. So you can't do that manipulation unfortunately. You need some kind of topography to do the manipulation. OK, so I just said how all this-- I got ahead of myself-- how it relates to behavior, but just to go through that quickly. So what we've done here is they've trained the mouse on this classic reorientation task. They disorient the mouse before each trial while recording from hippocampal place cells. As before, a given cell flips 180 degrees from trial to trial, despite the fact that the stripes should disambiguate it and tell him which way he's oriented. And by the way, the head direction cells and the grid cells also flip in the same way, in cahoots with the place cells. But you can tell which corner the animal will go to by looking at where the place cells respond. And so when this place cell represents that location, the animal searches first there. And when it flips around, they search in the opposite corner. OK, so all of that just shows this really strong link between the place cells and behavior. So to recap, we've talked about four different kinds of cells involved in representing space and navigating around in it. Place cells that are like the "you are here," they respond when you're in a particular location. Direction cells that respond when you're heading in one direction, not in another direction. Border cells that fire when you're near a particular border in the environment. I have border cells going right now throughout this whole lecture. I've got a batch of border cells that are going.
Grid cells that do this amazing thing of firing when the animal is in multiple different locations, and those locations that make it fire are arranged in a hexagonal grid. Think of it as a kind of ruler telling the rodent how far he's gone in this space, and those grid cells are like the rulers. Yeah, right. Those are the four kinds we've talked about. So now, here's the cool thing. All this stuff-- navigation is awesome. We need it, it's important. All mobile animals need it for the reasons we've been talking about. But we can use this whole system for so much more than just navigation. Once you have this fancy system in your head to keep track of your location, to keep track of your direction, to keep track of where things are, how you're moving through that space, you can use that whole magnificent system in other ways. And in the last three or four years, there's just a huge number of studies that are really starting to take this very seriously, particularly the grid cells and thinking about how the grid cells-- I mean, probably the whole system, but people have been focusing on the grid cells and how they're used in multiple different situations. So here's one. OK, this is a cool study where what these guys did was they stuck a little device hanging around people's necks, the subjects' necks. It has a little camera aiming forward. It takes pictures at random intervals and records the person's GPS location. OK, so you send them off for a few months with this little device, and you do something to protect people's privacy. I don't know exactly how they maneuver that, but I'm sure they found a way. And so then they get this set of photographs taken from this person's front view of wherever they were over several months as they went wherever they went in their lives with a little GPS tag for each photograph. OK, so then what they do is they bring the subjects in and pop them in the scanner and show them some of those pictures.
And they asked people to relive the experience that they had when they were looking at that thing. Put this on me, it'd be my monitor like all the time, and I wouldn't know which experience to relive, but I guess these people had richer lives, let's hope. OK, so now what they do is they use multivoxel pattern analysis in the hippocampus while people are reliving those experiences in the scanner by looking at those images taken from their front-facing cameras. And then they asked, is the pattern of response in the hippocampus-- like some bunch of voxels, here is some pattern-- is it more similar for events the subject remembers that were nearby in space? OK, so you do this for me, it's like, yes, I occasionally go to the Stata cafeteria, and I occasionally go to the Koch Center cafeteria, and I spend a lot of time at home, and those two things are closer to each other than my home thing. Are the patterns more similar for nearby locations than for more distant locations? And they were. So this is the distance on a log scale between the locations where two different images were taken, and this is the similarity of the patterns that result from the subject looking at those two images in the hippocampus. Now some of you might be wondering, and in fact, I wonder this too. I think this is a cool study so I'm presenting it, but it doesn't make sense to me, because everything we know about the hippocampus is those place cells are pretty interleaved. So how you manage to get a pattern response reading out a systematic location out of the hippocampus is a mystery to me. So they can't be fully-- there must be some kind of structure in there to the layout of those cells to enable them to get this information. OK, does everybody get how it's telling you that the hippocampus is remembering and reliving some representation of the locations of where you had those experiences? Everybody get how this shows that? But then they asked another interesting question, and they said, oh, does it also represent time?
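The analysis just described boils down to correlating pattern similarity with (log) spatial distance across pairs of events. Here's a sketch of that computation; all the numbers are invented to illustrate the shape of the result, not the study's actual data:

```python
import math

def corrcoef(a, b):
    """Pearson correlation between two equal-length lists."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

# Hypothetical data: log distance (meters) between pairs of photo
# locations, and similarity of the hippocampal voxel patterns evoked
# by each pair of photos.
log_distance = [math.log10(d) for d in (10, 100, 1000, 10000)]
pattern_similarity = [0.8, 0.5, 0.3, 0.1]

r = corrcoef(log_distance, pattern_similarity)
print(round(r, 2))  # -0.99: nearby events evoke more similar patterns
```

The negative correlation is the finding: the farther apart two remembered events were in space, the less similar their hippocampal patterns. The same computation with temporal distance on the x-axis gives the time result described next.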
So we've been talking about space for the last two lectures, but now we're going straight off the deep end. And our first step, not even near the deep end yet, is, does it do not just space but time? So they can take all those photographs and say, OK, how far apart in time were these two photographs taken? And they can do the same graph, and yes, they get a relationship with time as well. The farther apart in time people saw those two images, the more different the patterns in the hippocampus. Isn't that cool? OK, so that's one example showing that the hippocampus holds some kind of large-scale representation of not just space but also time. And so there's a lot of work on how this gives structure to our memories for distances over the range of 100 meters and times between 15 hours and a month. I'm going to run out of time, so unless it's a clarification question, I'm going to keep going. Yeah, OK. AUDIENCE: Are different states confounded for time? NANCY KANWISHER: Yeah, yes. If you just did it like that-- so you have to do something to pick out time things that aren't confounded with space. You have this big sample of pictures, and you take a subset where you balance for it. Absolutely. They would have to do that. I can't actually remember, but they must have done that. Imperfect as peer review is, you'd never get through peer review if you didn't take care of that problem. OK, so that's the first thing. Here's another even more radical example. So people have shown that grid-like representations-- and I'm skipping over most of the details here to give you the gist because actually the details are a bit complicated. But they've shown that people seem to use their grid cell system when they are thinking about conceptual spaces, not just physical spaces. OK, so there's one classic experiment in which these guys taught subjects a conceptual space. They taught them about different kinds of birds, and these birds differed on two dimensions.
They could vary in neck length or in leg length. And these things were orthogonally varied, so they made some artificial birds that filled up that space. And so here are some of the birds. This one has short legs, and OK, here's one with short legs and a long neck. And here's one with a longer neck and shorter-- wait, let's see. Longer legs and shorter neck right there. OK, so you've got every possible combination. They didn't show people a space like that. They just taught them things about these different birds. They had to remember their names and various facts about them. And so the idea is that when people learn about those birds, they mentally construct a 2D space. Because, in fact, those birds were generated from a 2D space, varying neck length and leg length. And so then when they scan subjects, they found essentially a neural signature of a grid system representing that 2D space. So even though the grid system presumably evolved to enable us to navigate around in a 2D space and keep track of where we are in that 2D space, it seems like it's now getting co-opted and being used for all kinds of representations of 2D spaces, including extremely abstract, artificial, learned 2D spaces that you weren't even taught explicitly as a 2D space. You were just taught these birds. So that's pretty amazing. In another recent study, they had subjects do a role-playing game while in the scanner. And in the role-playing game, they're interacting with virtual characters. And those virtual characters had different kinds of social power and different affiliations with other individuals. So here's another-- the social space that was invented by the experimenters, and the subjects are playing this game, interacting with other virtual individuals who vary in social dominance and their affiliation to others. And they find place cell activity that seems to echo the position of another person in that social space. I mean, that's extremely abstract.
And yet again, parts of the navigation spatial system are being co-opted to do this. I'm not giving you the details on how all this is done. I'm just telling you that studies have shown that these systems are being co-opted for other uses. Here's another very charming non-spatial use-- well, sort of spatial use-- of place cells. So those bats, turns out, are extremely social organisms. They have very sophisticated social structures, and they care a lot about each other and who's related to whom and who's doing what to whom. And it turns out that there are social place cells in bats. That is, cells in this bat's brain, if I were a bat, that would be representing your location, Jack. OK, so not the usual thing where my place cells are just saying where am I. I'm watching you, and my place cells are telling me, where are you. Something social organisms care a lot about, including bats. So they have an observer bat here hanging upside down, and he's watching this bat fly over to there and back. And then in this experiment, he subsequently flies that same path. That's how we know that he's watching that path because he has to mimic the bat's path that he just observed. But while he's watching that bat fly on that path, what you see is here's a cell right here. Here is the path flown by the bat, like out and back. And this is when the bat is flying out and back himself, and this is when the other bat is flying out and back. That blurred a little bit. Here's the place field for self and the place field for other. They're not the same. A given cell doesn't represent the same location when it's me who's there and when it's the person or bat I'm watching who's there. But they're place fields in both cases. Social place cells. I'm going to keep going because otherwise I'm going to run out of time, but I'll hang around after. OK, so this whole system is used not just for representing social status. 
What kind of bird this is in this abstract bird space, but actually for making decisions, for thinking. OK, so as rats run in mazes, you can record-- we've shown this already-- you can record from multiple hippocampal place cells. And you can imagine that if we were recording from several different hippocampal cells at the same time, we could read out those cells and make a guess about where the rat is. It's just like MVPA but done across neurons. So we have a pretty good sense of where the rat is. So now we have a rat navigating around in this maze, and what I'm going to show you, the white circle is where the rat actually is. And the little color thing is telling you where the simultaneous readout from several place cells in that rat's hippocampus would predict where the rat is. Like can we tell where the rat is by looking at its place cells? OK, so right here they're in the same place. It makes sense. The rat is right there, and we're reading it out. OK, so far so good. But now what we're going to do is watch what that place cell location does as the rat moves around in his environment and makes decisions about where to go next. OK, so what we're going to see is the rat is going to come up to an intersection of the maze-- I think it's right here. And he's going to decide, am I going to go this way? Am I going to go that way? And as the rat stays there deciding which way to go-- the white circle stays there as he sits there thinking, huh, should I do this? Should I do that? You could call it neural deliberation-- we will see what his place cell activity shows you. So here we go. Rat starts there-- whoops, how do I play this here? OK, so rat is heading up there, and so are his place cells. He comes up to the intersection. He stays in one place, but look what his place cells are doing. Should I go over there? I'm just interpreting what this means, but it sure looks like neural deliberation to me. And that's what he decided. Everybody get what we just saw?
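The read-out just described-- guessing the rat's location from the simultaneous activity of many place cells, "like MVPA but done across neurons"-- can be sketched as a nearest-centroid decoder. Everything below is simulated toy data: the cell counts, tuning widths, and noise levels are invented for illustration, not taken from the actual recordings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 20 place cells, 10 maze locations.
# Each cell fires most at its preferred location (its "place field").
n_cells, n_locations = 20, 10
preferred = rng.integers(0, n_locations, size=n_cells)

def firing_rates(location):
    """Simulated population response: a Gaussian bump of firing
    around each cell's preferred location, plus measurement noise."""
    dist = np.abs(preferred - location)
    return np.exp(-dist**2 / 2.0) + 0.1 * rng.standard_normal(n_cells)

# "Training": estimate each location's mean population pattern
# by averaging many noisy visits to that location.
templates = np.array([
    np.mean([firing_rates(loc) for _ in range(50)], axis=0)
    for loc in range(n_locations)
])

def decode(pop_vector):
    """Nearest-centroid decode: which location's template best
    matches this population vector (highest correlation)?"""
    corrs = [np.corrcoef(pop_vector, t)[0, 1] for t in templates]
    return int(np.argmax(corrs))

# Decode a fresh response recorded while the rat sits at location 7.
print(decode(firing_rates(7)))
```

The same read-out, run while the animal pauses at a choice point, is what lets the experimenters see the decoded position sweep down arms of the maze the rat has not yet entered.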
While he's standing there, he's in one place, but he's clearly deciding where to go next. And while he's deciding, those place cells are essentially apparently running simulations of where he might go next. OK, so we started with this big long list of things you need to know to navigate around in the world. And the neural basis of all this is really not understood yet, but I've shown you what I think are a bunch of tantalizing snippets. Which leads to the idea that our best current guess about the neural locus of these things, which is very far from the actual understanding of how they work, is that the perception of the layout of space around us, the PPA and the occipital place area are very involved in that. Also in saying for an unfamiliar place what kind of place is this. I didn't show you those data, but you can, in fact, decode whether you're looking at a city or a beach by looking at the pattern of response in the PPA. We talked about the idea that the retrosplenial cortex may be involved in recognizing familiar locations-- that's a bit of a question mark. That your map of the world is represented in your hippocampus by way of place cells, which also say where you are in that world. That your heading direction in humans-- I didn't give you all the evidence for this, but in humans, there's quite a bit of evidence that retrosplenial cortex is very involved in heading direction. I guess I did give you evidence-- patients who have had damage there and can recognize places but not know how they're oriented there. That planning routes around boundaries in your environment involves the occipital place area and the parahippocampal place area and that this business of reorientation seems to particularly involve heading direction cells in humans, most likely in retrosplenial cortex. So you don't need to memorize all of this. I mean, I don't care that much about the locations.
What I want you guys to understand is what are these problems that are involved in navigation and what kinds of things can we learn with different kinds of behavioral and neural measures. And you may have noticed in the last couple of lectures that I presented lots of behavioral data, because actually so far, the richest insights about how the system actually works still come-- or many of the rich ones come from behavioral data. OK, quiz is in two minutes. Does anybody want to ask me a question before the quiz? Yeah? AUDIENCE: Do you know that the [INAUDIBLE]?? NANCY KANWISHER: Yeah, so you have to do lots of controls to work that out. And I didn't show you any of the details of the data. But yeah, these guys are pretty careful, and there are all these things-- there are many different ways in which people are watching hippocampal neurons and decoding trajectories from hippocampal neurons. You may have heard about replay, which is a big thing in this department. Tonegawa and Wilson labs study this, where you have a rodent moving around in one trajectory during the day, and then you record from those neurons at night, and you see replay of the trajectories that the rodent went through in the previous day. And so there you have to be really careful to say, OK, there's a lot of data and a lot of noise, and is this really more than the noise? And it is, but it takes a lot of statistical work to show that. Yeah? AUDIENCE: [INAUDIBLE] scenario be like a place that I know. With the neurons, the same neurons would fire if I go back to that environment, I assume, or like the place? NANCY KANWISHER: Yeah, yep. AUDIENCE: But then you can't really have that, or you can't reserve neurons for those [INAUDIBLE].. NANCY KANWISHER: It's a good question. How do we have enough neurons? Yeah, especially for some place we go every six months. Are they sitting around waiting for us to go back there? No, there's some recycling of neurons across very different locations. 
So within that location, they'll be consistent. But yes, you do recycle. So the same neuron will have one place field in this environment, and it may or may not have a place field in another environment. That's a good question.
MIT 9.13 The Human Brain, Spring 2019 -- Lecture 8: Navigation I
[SQUEAKING] [RUSTLING] [CLICKING] NANCY KANWISHER: Here's the agenda for today. As usual, a bunch of announcements in red. Assignment 4 was graded. There will be comments showing up online on Stellar soon for any of you who didn't get a near-perfect score on it. And I'll also be going over a little bit of it in a moment. And then once we do that, we're going to talk about navigation-- how we know where we are, and how to get from here to someplace else, which is much more awesome than it sounds at first, as you will see. OK. So quick review. OK. So what was the key point? Why did I assign the Haxby 2001 article for you guys to read? It presents this important challenge to the functional specificity of the face area and the place area. What was that challenge? What was Haxby's key point? Yes, Isabel. AUDIENCE: Well, he said it just has a preference for rectilinear features, and you're not seeing what it's actually scanning for. It's not truly detecting whether it's a face or not. NANCY KANWISHER: Yeah. He wasn't worrying about rectilinearity so much back then. But his point was that we shouldn't care just about the overall magnitude of response of a region. Like, OK, it's nice if the face area responds like this to faces, but isn't like that to objects. But even if it responds low and the same to cars and chairs, it might still have information to enable you to distinguish cars from chairs if the pattern of response across voxels in that region was stably different for cars and chairs. OK? That's really key. We'll go over it at a few more points. But that's essential, right? A lot of the details that I'm going to teach you that go by in class don't matter, but I really want you guys to understand this. And that's the nub of it. OK? So the idea is that selective-- his claim is that selective regions, like the face area, contain information about non-preferred stimuli.
That is, like, non-faces for the face area, or non-places for the place area. And because they contain information, those regions don't care only about their preferred category. So why does Kanwisher get off saying the FFA is only about faces and the PPA is only about places if we can see information about other things in those regions? OK. That's a really important critique. That's why we're spending time on it. OK? OK. Next, what kind of empirical data might be an answer to Haxby's challenge? I presented at least three different kinds of data that can address this and say, hey, wait a minute. You know, you have a point, but what kind of data could speak to that and respond to Haxby? We didn't actually talk about this explicitly in class, but think about it. Here's the claim he makes. What might we say, right? So that's empirically true. Like, you look in the FFA. Even in my own data, I can distinguish chairs from shoes a little teeny bit in the FFA. OK? So that empirical claim is true. Why might it nonetheless be the case that the face area is really only about face recognition? What other data have you heard in here that might make you think that? Yes, Ben. AUDIENCE: Because it's the presence-- NANCY KANWISHER: Speak up. AUDIENCE: Sorry. The presence of low-level features that are generally in faces, but can also be present on chairs and cars. NANCY KANWISHER: Absolutely. So yeah, put another way, even if you had a perfect coder for faces-- like take your best deep net for face recognition, VGG face-- it can distinguish chairs and shoes too, right? The features that you use to represent faces will slightly discriminate between other non-face objects. So the fact that we can see that information in itself isn't strong evidence that that region isn't selective for face perception. Absolutely. What else? Yeah, OK. AUDIENCE: Like transcranial magnetic stimulation?
When you stimulate the area and you look at a face, it affects it, but when you're looking at other objects, the effect is no longer there. NANCY KANWISHER: Exactly. And so what does that tell you about-- OK, so there's pattern information in there about other things beyond faces. But? Apparently it's not used, right? Now with every bit of evidence, you can always argue back. People would say, well, TMS, those effects are tiny. Maybe there isn't and we didn't have power to detect it, blah, blah, blah, blah. But at least, absolutely you're right. TMS argues against them. What else? Or at least is a way to argue against it-- and the Pitcher paper that I assigned and other papers that we've talked about in here provide some evidence that actually, at least the occipital face area really is only causally involved in face perception even if there's information in there about other things. What else? What other methods can address this? Yeah. AUDIENCE: That direct stimulation study, where when they stimulate it while you're looking at a face, you perceive the face differently. NANCY KANWISHER: Exactly. Exactly. So these are both causal tests, right? OK, there's information in there. But is it causally used in behavior? TMS suggests not. The little bit of direct intracranial stimulation data that I showed you also suggests the causal effects when you stimulate that region are specific to face perception, again suggesting that even if there's pattern information in there, it's not doing anything important because we can mess it up and nothing happens to the perception of things that aren't faces. Absolutely. What else? We talked about it very briefly a few weeks ago. Yeah. AUDIENCE: So if you remove the [INAUDIBLE], it just completely makes a person incapable of perceiving faces. That is causing-- NANCY KANWISHER: Yes, but the crucial way-- yes, but the crucial way to address Haxby would be what further aspect of that? Yes.
And by the way, we don't remove the area in humans, but occasionally, we find a human who had a lesion there due to a stroke and then we study them. AUDIENCE: So they're still able to do other categories. NANCY KANWISHER: Exactly. Exactly. So all three lines of evidence from studies of prosopagnosia, electrical stimulation directly on the brain, and TMS, all can provide evidence to various degrees. Again, one can quibble about each of these particular studies. But all of those suggest that even though there's information in the pattern, Haxby's right-- there's information in there about other things that aren't faces. The only causal effects when you mess with that region are on faces, not on other things. That suggests that pattern information is what they sometimes say in philosophical circles is "epiphenomenal." That is, it's just not related to behavior and perception. Make sense? OK, moving along, how can we then use Haxby's method to not just engage in this little fight about the FFA and how specific it is, but to harness this method and ask other interesting questions from functional MRI data. How can we use it to find out, for example, does the place area discriminate, say, beach scenes from city scenes? We want to know what's represented in there. How could we use this method to find out? Yes, Jimmy. AUDIENCE: If I do what Haxby kind of did, and train a decoder, and see if the decoder could differentiate between the city scenes and the beach scenes. NANCY KANWISHER: Exactly. Exactly. So we talked about decoding methods last time as a way to use machine learning to look at the pattern of response in a region of the brain, and train the decoder so it knows what the response looks like during viewing of beach scenes, train it so it knows what the response in that region looks like when you're looking at city scenes, and then take a new pattern, and say, is this more like the beach pattern or is it more like the city pattern?
And that's how you could decode from that region. Yes. AUDIENCE: That doesn't tell us as much, in the sense that it's not telling you-- I mean, we know that there is residual information nevertheless, and that it can be read out of any region considered at any time, always. NANCY KANWISHER: We have a true nihilist here. No, it's a good question. It's not the case that you can discriminate anything based on any region of the brain. So there are some constraints. There are some things you can find in some places and other things you can find in other places. And they're not uniformly distributed over the brain. However, the fact we just-- the point I just made about yes, there's discriminative information in the face area about non-faces but maybe it's not used, should raise a huge caveat about this whole method. How do we ever know? We see some discriminative information. How do we know whether it's actually used by the brain, part of the brain's own code for information, or just epiphenomenal garbage that's a byproduct of something else? It's a really important question about all of pattern analysis. We do it anyway because we're beggars. We can't be choosers in terms of methods with human cognitive neuroscience. And we want to know desperately what's represented in each region. So we do this. But whenever you see these lovely, "I can decode x from y," things, you should always be wondering. Who knows if the fact that you, the scientist, can decode it from that region means the brain itself is reading that information out of that region? Big, important question. All right, put another way-- so Jimmy mentioned just decoding in general and that's absolutely right. But to directly harness the Haxby version of this, what would we do? First, we would functionally localize the PPA by scanning subjects looking at scenes and objects, and find that region in each subject.
Then we would collect the pattern of response across voxels in the PPA while subjects were looking at, say, beach scenes. And so if this is the PPA, this is the pattern of response across voxels in that region when they're looking at beach scenes-- fake data, obviously, just to give you the idea. So we would split the data in half, even runs, odd runs. That would be like even runs. Then we get another pattern for odd runs. And then we get another pattern for when they're looking at city scenes with even runs, and another pattern when they're looking at city scenes in odd runs. So then, once we have those four patterns, what is the key prediction if using Haxby's correlation method? What is the key prediction if the PPA, if pattern of response in the PPA, can discriminate beach scenes from city scenes? What should we see from these patterns? What's the key prediction? Claire. Key prediction-- you have these four patterns in the PPA, and now you want to know is there information in there that enables you to discriminate beach scenes from city scenes? AUDIENCE: Is that like beach even and beach odd are more similar than beach even and city even? NANCY KANWISHER: Exactly. Exactly. Right. It sounds all complicated and it's easy to get confused. But the nub of the idea is really simple. It just says, look, the beach patterns are stable. We do beach a few times, we get the same pattern, more or less. We do city, we get a different pattern. And we keep doing city, we get the same pattern more or less. And the beach pattern and the city pattern are different. So that's the nub of the idea. And so you can implement it with decoding methods or the Haxby versions, just to ask whether the correlation between two beach patterns-- beach even, beach odd-- is higher than the correlation between one of the beach patterns and one of the city patterns. Just asking, are they stably similar within a category and stably different from another category? Does that make sense?
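The prediction Claire just stated can be written out in a few lines. The voxel patterns below are fake, like the ones on the slide-- a stable "beach" pattern and a stable "city" pattern, each measured twice (even and odd runs) with run-to-run noise-- and the Haxby-style test is simply that within-category correlations beat between-category ones. All the numbers here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_voxels = 100

# Fake "true" PPA patterns for each category, plus run-to-run noise.
beach_signal = rng.standard_normal(n_voxels)
city_signal = rng.standard_normal(n_voxels)
noise = lambda: 0.5 * rng.standard_normal(n_voxels)

beach_even, beach_odd = beach_signal + noise(), beach_signal + noise()
city_even, city_odd = city_signal + noise(), city_signal + noise()

def r(a, b):
    """Pearson correlation between two voxel patterns."""
    return np.corrcoef(a, b)[0, 1]

# Within-category: same category, independent halves of the data.
within = (r(beach_even, beach_odd) + r(city_even, city_odd)) / 2
# Between-category: different categories, independent halves.
between = (r(beach_even, city_odd) + r(city_even, beach_odd)) / 2

# Haxby-style criterion: the region carries discriminative
# information if within-category beats between-category.
print(within, between, within > between)
```

Note the split into even and odd runs: correlating a pattern with itself would trivially give 1, so the stability has to be measured across independent halves of the data.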
This is just a variant of this thing I showed you guys before. We just harnessed this to ask whether that region can discriminate. OK, and I just said all of this. If you still feel shaky on this, there's a few things you can do. A version of my little lecture on this method is here at my website. You can look at that. It's just like six minutes and it's basically what I did before. But if you want to go over it again, there it is. You can reread the Haxby paper, which I know is not super easy, but it's actually nicely written. And if you read it carefully, it explains the method pretty clearly. You can talk to me or a TA. And we'll get back to this question of whether we should do a whole MATLAB based problem set on this. OK, let's move on and talk about navigation. This is a Monarch butterfly. It weighs about half a gram. And yet, each fall the Monarch migrates over 2,000 miles from the USA and Canada down to Mexico. In fact, a single Monarch flies 50 miles in a single day. It's pretty amazing for this tiny, little, beautiful delicate thing. Even more amazing-- it flies to a very specific forest in Mexico that's just a few acres in size. And it arrives at that particular forest. Now, that's already amazing, but here's the part that is just totally mind blowing and that is-- and it flies back north in the spring-- and that is that this whole cycle takes four generations to complete. And that means that the Monarch that starts up in Canada and flies down to that forest in Mexico-- one Monarch does that-- is the great-great-grandkid of his ancestor that last went on that route. Put that in your head and smoke it. That's pretty amazing. Consider the female loggerhead turtle. She hatches at a beach, and goes out in the sea, and swims around in the sea for 20 years before she comes back 20 years later for the first time to the beach that she hatched at. Now, it's pretty amazing, but some mothers miss by 20 miles. 
They go to the wrong island or the wrong beach on the same island. And so you might think, OK, it's pretty good. It's not amazing. But here's the thing-- the wrong beach that those mothers go to is exactly the right beach had the Earth's magnetic field not shifted slightly over those 20 years. They're exactly precise, but they just don't compensate for the shift in the Earth's magnetic field. Here's a bat. This bat maintains its sense of direction even while it flies 30 to 50 miles in a single night in the dark catching food. And it maintains its sense of direction even though it's flying around in all different orientations in three dimensions, and even as it flips over and lands to perch on the surface of a cave. It doesn't get confused by being upside down. This is Cataglyphis, the Tunisian desert ant. These guys are amazing. They crawl around on the surface of the Tunisian desert where it's 140 degrees in the daytime, and they have to crawl around up there to forage for food. And then because it's so damn hot, as soon as they find food, they zoom back to their nest and go down in the nest where it's cooler. So here is a track of Cataglyphis starting at point A and foraging. He's meandering around looking for food going along this whole crazy path to point B. And then if he finds food at point B, boom-- straight line back exactly to the nest. Now we might ask, how does Cataglyphis keep track as he's doing all this stuff of where his heading is back to his nest? The first thing you might think of is things like what it looks like. Maybe there are landmarks, maybe there are odors. But no, he doesn't use any of those things. And we know that because when scientists who have set up this measurement device capture Cataglyphis after he goes out on this tortuous path and finds the feeding station, they capture him and move him across the desert-- on which they've drawn all these grid lines for the convenience of their experiment-- and they release him here.
And what does Cataglyphis do? He goes on the exactly correct vector-- no landmarks, no relevant odors, and yet he's obviously encoded the exact vector of how to get home. Think about what that entails and what's involved. AUDIENCE: The same vector with respect to north? NANCY KANWISHER: With respect to-- yes, well, with respect to absolute external direction, absolutely. So that's what I just said. So these feats of animal navigation are amazing. And animals have evolved ways to solve all these problems unique to their environment. They've evolved these abilities because they really have to be able to find food, and mates, and shelter. And this is not just esoterica in the natural world. MIT students, too, need to be able to find food, and mates, and shelter. So what is navigation, anyway? And what does it entail? Well, I'll argue over the next two lectures that there are two fundamental questions that organisms need to solve to be able to navigate. First one is, where am I? And the second one is, how do I get from here to there, A to B, wherever there is that you need to get? So we'll unpack this. There are many different facets of each. But so for example, if you see this image, you immediately know where you are, and you also know where to go if, for example, it starts raining. You might rush into lobby 7, or if you're hungry, you might turn around and go back to the Student Center. Same deal here-- if you see this, then you know where you are and where you would go to get to various things. Now, these judgments rely on the specific knowledge you guys have of those particular places. You recognize that exact place, and you have some kind of map in your head that we'll talk more about in a moment, that tells you where everything else is with respect to it. But even if you're in a place you don't know at all you can still extract some information. So suppose you miraculously found yourself-- boom-- here. 
I wouldn't mind, actually, but that's not in the cards for a while. So you're here. Even if you've just hiked around the corner, if you've never seen this place before, you have some kind of idea of what sort of place this is. Where would you pitch your tent? Where might you try to go to get out of this valley? If it was me, I wouldn't. I have friends who would go straight up there and try to drag me along, complaining. If it was me, I'd rather look for some other route. But you can tell all of that just by looking at this image-- where you can go from there, not just what kind of a place it is, but what are the possible routes you might take. So these fundamental problems that we solve in navigation, of knowing where am I and how do I get from here to there, include multiple components. In terms of where am I, the first piece is recognizing a specific place you know. So you might open your eyes and say, OK, this is my living room. I know this particular place. But as I just pointed out, even if the place is unfamiliar, we can get a sense of what kind of place this is. Am I in an urban environment, a natural environment, a living room, a bathroom? Where am I? A third aspect of where am I, a third way that we might answer that question, is something about the geometry of the environment we're in. So try this right now-- close your eyes. OK, now think about how far the wall is in front of you. Don't open your eyes, just think about how far away it is, how far away the left wall is and the right wall is. And how about the wall behind you? Don't open your eyes. How far back is the wall behind you from where you are right now? OK, you can open your eyes. It's not rocket science. I just wanted you to intuit that even though you're presumably riveted by this lecture, and thinking only about navigation, you sort of have a kind of situational awareness of the spatial layout of the space you're in. So you might have a sense of I'm in a space like this and I'm over here in it.
And we'll talk more about that exact kind of awareness of your position relative to the spatial layout of your immediate environment. It's something that's very important in navigation. And another part of that is you might think, how would I get out of here? If I'm seriously bored by the lecture or for any other reason I urgently need to get out of here, you probably know exactly where the doors are in this space. It's just one of those things that we keep track of. So those are aspects of where am I in this place. What are the things we need to know to know how we would get from here to someplace else? Well, the simplest way to navigate to another location, another goal, is called "beaconing." And this is a case where you can directly see or hear your target location. So you're sailing in the fog. You can't see a damn thing, but you hear the foghorn over there, and you know you're sailing to that point. So you just go toward the sound, nice and simple. You don't need any broader map of anything else. You just hear it and head toward it. Or if you see this, and your goal is to get to the green building, well, there's a green building and you just head that way. Now, you're going to have to go around a little bit to get around those obstacles, but you know where to head because you can see your target directly. These are cases where you don't need a broader, long-term knowledge of the whole environment. If you can see your target, you just go straight for it. So that's beaconing, simplest kind of A to B. And it requires no mental map, no kind of internal model of the whole world you're navigating in. But if you can't see the place you want to go, then you need some kind of mental map of the world. So what do we mean by a "mental map of the world?" Well, this idea was first articulated in a classic experiment way back in the 1940s.
So this was actually one of the original experiments that launched the cognitive revolution, when we emerged from the scourge of behaviorism to realize that it was actually OK, and indeed, of the essence, to talk about what's going on in the mind. And a really influential study that launched the cognitive revolution by Tolman was done on rats. And it went like this-- he trained rats. He put them down in this area, and they had to learn that there would be food out there at the goal. And so they just have to make the series of left and right turns to find the food. So you train them on that for a while till they're really good at it. And then he put the rats in this environment. Now, the environment is similar, except there's multiple paths, one that seems analogous to the old route. So what do the rats do in this situation? They run down here, they run into a wall, and they realize, OK, that's not going to work. No surprises yet. But then, the rats immediately come back out and they go straight out that way. What does that tell you? What did they learn? Did they learn a series of go straight, and then left, and then right, and then right, and then go for a long ways? No. That wouldn't work over here. They learned something much more interesting. Even though they were only being trained on this task here, they learned some much more interesting thing about the kind of vector average of all of those turns. Everybody get this? It's really simple but really deep. So from this, Tolman and others started talking about cognitive maps, whatever it is you have to have learned in a situation like this so you can abstract the general direction. We don't just learn specific routes as a series of stimulus and responses. So there must be some kind of map in your head to be able to do this, and rats have that, and so do you. So let's consider this question right now. Where am I? Where are you? To answer that question to yourself, there's something like this in your head.
And it probably doesn't look exactly like that in your head, but there's some version of this information that's in your head that you're using when you answer the question of where you are. And you have some way to say in that map of the world, I know not just what the MIT campus looks like and how it's arranged, but I know where I am in it. Now, if you want to know how to get somewhere else-- like suppose you're hungry and you want to go over to the Stata Cafeteria over there. What else do you need to know besides knowledge of the map of your environment and where you are in it? What else do you need to know? You know you have this map, you know where you are, and you know where your goal is. Now you have to plan how to get over there. What else do you need to know? Yeah. AUDIENCE: You have to know which parts are paths and which parts are buildings. NANCY KANWISHER: Yes, exactly-- where can you go in there? Actually, where can you physically get through? Actually, our vector is right over there, but you can't go that way because you can't go through that glass, even though you can see through it. So knowledge of physical barriers, and what's an actual path and what isn't is crucial. What else do you need to know? Suppose we had a robot in this room, sitting right here facing the front of a room like you guys, and we're programming the robot on how to get over there. What are other things we'd have to tell that robot to get it to plan how to get over to the Stata Cafeteria? Yeah. AUDIENCE: Things to watch out for, like cars and traffic. NANCY KANWISHER: Absolutely. We'd have to know about obstacles, like moving obstacles, not just fixed ones. Absolutely. What else? Yeah. AUDIENCE: Initial orientation. NANCY KANWISHER: Yes. He has to know which way he's headed. You're going to give this robot instructions on which way to go. It matters a whole lot if the robot is starting like this or starting like that. 
The instructions are different in the two cases, and likewise for you guys. To plan a route, you need to know which way you're heading. If you guys have ever been in Manhattan, and you come up from the subway, and you see the street's going like this, and you know it's north/south, and you don't know if you're heading south or north-- really common thing. It's not enough to know I'm at the junction of Fifth and 22nd. You need to know, am I facing south or north? Otherwise you can't figure out which way to go. That's called "heading direction." We just did all that. You need to know your current heading. You also need to know the direction of your goal in order to plan a route to it. So in this kind of taxonomy of all the things you need to know to navigate, we've just added that if you're going to navigate in your own environment, you need to know not just where you are in it, but which way you are facing in that mental map. And we also talked about this business of what routes are possible from here, how do we move around obstacles, where are the doors, where are the hazards like cars, et cetera. A final thing you need to know is that even if you have a good system for all of these other bits, it's still possible to get lost in all kinds of ways. You lose track, you get confused, you get lost. So we also need a way to reorient ourselves when we're lost. And we'll talk a lot about that in the next lecture. So this is just common sense. We're doing a kind of low-tech version of Marr computational theory for navigation. What are the things that we would need to know or that a robot would need to know to be able to navigate? Just thinking about the nature of the problem. So that's what we need. What's the neural basis of all of this? So I'm going to start right in with the parahippocampal place area, not to imply it is the total neural basis of this whole thing. It's just one little piece of a much bigger puzzle. 
But we'll start in there because it's nice and concrete. All right, so this story starts about 20 years ago. I think I mentioned some of this in the first class when I talked about the story of Bob and I talked about Russell Epstein, who was then my post-doc. And he was doing nice behavioral experiments, and thought it was trashy and cheap to mess around with brain imaging. And he was going to have none of it until I said, Russell, just do one experiment. Scan subjects looking at scenes. I know it's kind of stupid, but just do it. Then you'll have a slide for your job talk. And he scanned subjects looking at scenes and looking at objects. And here is one of those early subjects, probably me-- I don't remember-- with a bunch of vertical slices through the brain, near the back of the brain down there, moving forward as we go up to here. Everybody oriented? Sorry, it's not showing up very well in this lighting, but there's a little bilateral region right in the middle there that shows a stronger response when people look at pictures of scenes than when they look at pictures of objects. So we hadn't predicted this. Yeah. AUDIENCE: Is the pink the eye color? NANCY KANWISHER: Yeah. Yeah, pink is-- all the colors are-- there's significance maps or P levels, right. So pink is higher than blue, but blue is borderline significant. So this is kind of dopey. We didn't actually predict it for any deep reason. We hadn't been thinking about theories of navigation or anything like that. It was just one of those dumb experiments where we found something and we followed the data. So we found this, and it's, like, OK, let's try some other subjects. So here are the first nine subjects we scanned. Every single subject had that kind of signature response in exactly the same place, in a part of the brain called "parahippocampal cortex." So this is very systematic. And there's lots of ways to make progress in science. 
One way is to have a big theory, and use it to motivate brilliant, elegantly designed experiments. And another is you just see something salient and robust that you didn't predict, and you follow your nose, and try to figure it out. So that's what we did in this case. It's like, OK, what the hell is that? So if you think about-- we eventually called it the "parahippocampal place area" after a little more work. If you think about what we have so far, we've scanned people looking at pictures like this and pictures like that. And what we've shown is that little patch of brain responds a bunch more to these than those. So my first question is, is that a minimal pair? Tally, is that a minimal pair? AUDIENCE: Sorry, I'm about my voice. NANCY KANWISHER: Sorry. Simple, simple. We're contrasting this with that. AUDIENCE: Can you remind me what a minimal pair is? NANCY KANWISHER: OK, minimal pair is this thing we aspire towards an experimental design, where we have two conditions that are identical except for one little thing we're manipulating. AUDIENCE: I don't really think it's a minimal pair, but I'm not really sure. NANCY KANWISHER: Well, I even told you what we were designing to manipulate, but-- AUDIENCE: There seems to be too many differences between a living room and-- NANCY KANWISHER: It's ludicrous. I mean, it's a million differences here. So we don't know that we have anything yet. There's all kinds of uninteresting accounts of this systematic activation in that part of the brain. So just to list a few that you've probably already noticed-- these things have rich, high-level meaning and complexity. So you can think about living rooms, or where you might sit, or somebody's aesthetic, home design, or there's all kinds of stuff to think about there, much more than just, OK, it's a blender. So there's just complexity in every possible way. There are also lots of objects present here, and only a single object over there. 
So maybe that region just represents objects, and if you have more objects, you get a higher signal. There's another possibility, and that is that these images depict spatial layout, and that one does not. So you have some sense of the walls, and the floor, and the layout of the local environment here that you don't have over there. And we could probably list a million other things. It's a very, very sloppy contrast. So how are we going to ask which of these things might be driving the response of that region? Well, a natural thing to do is just deconstruct the stimuli. So here's what we did-- this is actually way back 20 years ago. There were better methods at the time, but I didn't know them, so I actually drove around Cambridge, photographed my friends' apartments, left the camera on the same tripod, moved all the furniture out of the way, and photographed the space again. Ha, ha. I know. And then these were probably cut out with some horrific version of Adobe Photoshop that existed 20 years ago. Anyway, we deconstructed the scenes into their component objects and the bare spatial layout. Everybody get the logic here? Just to try to make a big cut in this hypothesis space of what might be driving that region. So what do we predict that the PPA will-- how strongly will it respond? Oops, how strongly will it respond if these two things are true? If it's the complexity or multiplicity of objects that's driving it, what do you predict we will see over there? We already know you get a high response here. What will we get over there? Yeah. AUDIENCE: Probably a bigger response to the furniture. NANCY KANWISHER: Yeah, we'll respond more to this than that. Right. It's really simple-minded. If instead, it responds more to the spatial layout, what do we predict? Isabel. AUDIENCE: It's going to respond to the empty rooms more. NANCY KANWISHER: Yeah. And that seems like a weird hypothesis because these are really boring, there's kind of nothing going on here.
And there's just lots of stuff going on here. I mean, it's not riveting, but it's a whole bunch, whole lot more interesting to look at these than those. Believe me, I got scanned for hours and hours looking at these things. And whenever the empty rooms came on, I was, like, oh, my god, I'm just so bored. There's just nothing here, whereas here at least there's stuff. But that's not what the PPA thinks. What the PPA does-- oops, we just did the localizer-- it responds like this. This is percent signal change, a measure of magnitude of response, to the full scenes, way down, less than half the response to all those objects, and almost the same response as the original scene when all you have is a bare spatial layout. Pretty surprising, isn't it? We were blown away. We were, like, what? What? But can you see how even this really simple-minded experiment enables us to just pretty much rule out that whole space of hypotheses? It's not about the richness, or interest, or multiplicity of objects. It's something much more like spatial layout because that's kind of all there is in those empty rooms. I mean, it could be something like the texture of wood floors or something weird like that. But one's first guess is it's something about spatial layout. Does this make sense? It's just a way to take a big, sloppy contrast, and try to formulate initial hypotheses, and knock out a whole big space of hypotheses. Yes. Is it Alana? AUDIENCE: Yeah, I'm sorry, I might have missed the design. So people who are looking at the empty room would not have the furniture? NANCY KANWISHER: Good question. I skipped over all of that. We did-- yes, that's true. We did mush them all together and one could worry about that, that when you see this, you remember that that's a version of this. Absolutely. Absolutely. 
And so maybe-- yes, nonetheless, if what you were doing-- that's absolutely true, but if what you were doing here is kind of mentally recalling this, then why couldn't you also do that here? Maybe you could. You might argue that this is more evocative of that than this is, but it's also got lots of relevant information. Yeah, Jimmy. AUDIENCE: For the furniture, did you guys try placing them in the exact position as the scene and seeing if that-- NANCY KANWISHER: We did both versions for exactly the reasons you guys are pointing out. And it didn't make a difference. Yeah. Sorry, Cooley. AUDIENCE: It'd be-- you would transfer if they were just responding to the things, like more stuff? Like in the empty room, there's more background, but there's still more background. NANCY KANWISHER: Totally. You're absolutely right. This is taking us pretty far, but it's still pretty sloppy. This stuff goes all the way up to the edge of the frame, and here there's lots of empty space. Is that what you're getting at? Absolutely. I took out those slides because I felt I didn't want to spend the entire lecture doing millions of controlled conditions on the PPA. I thought you'd get bored. But actually, another version that we did was we then took all of these conditions, and we chopped them into little bits and rearranged the bits, so that you have much more coverage of stuff in the chopped-up scenes than the chopped-up objects. And in the chopped-up versions, it doesn't respond differently at all. So it's not the amount of total spatial coverage. It's the actual-- something more like the depiction of space. Was there a question over there? Yeah. AUDIENCE: I was wondering if there would be any difference between looking at images as 2D or 3D scene, and actually being there to see the 3D inside of the scene. NANCY KANWISHER: Totally. Totally. It's a real challenge. With navigation, navigation is very much about being there and moving around in the space. 
And this is just a pretty rudimentary thing where you're lying in the scanner, and these images are just flashing, flashing on, and you're doing some simple task, like pressing a button when consecutive images are identical. It's not moving around in the real world. You don't think you're actually there. But here's where video games and VR come in because actually, they produce a pretty powerful simulation of knowing your environment, feeling you're in a place in it. And so lots of studies have used those methods to give something closer to the actual experience of navigation. So where are we so far? We've said the PPA seems to be involved in recognizing a particular scene. So this just says it responds to scenes and something about spatial layout, maybe. Does it care about that particular scene or do you have to recognize that particular scene to be able to use the information? Now, our subjects mostly didn't know those particular scenes. But we wanted to do a tighter contrast asking if knowledge of the particular scene matters. So what we did was we took a bunch of pictures around the MIT campus, and we took a bunch of pictures around the Tufts campus. And we scanned MIT students looking at MIT pictures versus Tufts pictures. And then what else do we do? AUDIENCE: Get the Tufts students. NANCY KANWISHER: Yeah, why? AUDIENCE: Oh, just to make sure that it's not all about that weird architecture of the set. NANCY KANWISHER: Exactly. Exactly. So this is called-- yes, whose weird architecture? I think ours is weirder. So it's not just about the particular scenes or the particular subjects. So everybody get how with that counterbalanced design, you can really pull out the essence of familiarity itself, unconfounded from the particular images? So when we did that, we found a very similar response magnitude in the PPA for the Tufts students, for the familiar and unfamiliar scenes. Really didn't make much difference. Yeah. 
AUDIENCE: Taking a step back, so we started off with the one question of navigation and it involving all these different components. I just want to place this-- NANCY KANWISHER: We're getting there. We're getting there. There won't be like a perfect answer. We're not going to end up with that slide, with the exact brain region of each of those things. We'll get some gisty, vague senses of what this is. OK, so this tells us it's not about-- whatever the PPA is responding to in a scene, it's not something that hinges on knowing that exact scene. So it can't be something like, if I was here and I wanted to get coffee, what would my route from this location be, given my knowledge of the environment. Because otherwise, we wouldn't get this result. So whatever it is, it's something more immediate and perceptual to do with just seeing this place. So where are we? We've said that there's this region that responds more to scenes than objects, that when all the objects are removed from the scenes, the response barely drops. And its response is pretty much the same for familiar and unfamiliar scenes. So all of that suggests that it's involved in something like perceiving the shape of space around you. Doesn't nail it yet, but it kind of pushes you towards that hypothesis. Yeah, was there a question here a second ago? No? OK. AUDIENCE: Not about this experiment, but is it activated when you look at a map? NANCY KANWISHER: Oh, great question. Not very much. Yeah, if you take pictures of places from above versus this kind of view, you get a response in this kind of view, but not above. Yeah, very telling. OK, so I'm going to skip. We're not going to do the 30 other experiments. We're going to skip to the general picture, that here's the PPA in four subjects in this very stereotyped location. And here are some of the many conditions we've tested. It's not just abstract maps like this. They don't produce a strong response. Oh, this is an answer to Cooley's question way back.
Here's the scrambled-up scene-- much lower response. So it's not just coverage of visual junk. And it responds pretty strongly to scenes made out of LEGOs compared to objects made out of LEGOs, and various other silly things. So all of that seems to suggest that it's processing something like the shape or geometry of space around you-- visible space in your immediate environment. Nonetheless, there's always pushback. And there's pushback on multiple fronts, and there should be. That's proper science. So one of the lines of pushback was this paper by Nasr et al. that I didn't assign. I assigned you the response to it. Anyway, what Nasr et al. did was scan people looking at rectilinear things like cubes and pyramids versus curvilinear, round-y things like cones and spheres. And what they showed is the PPA responds more to the rectilinear than the curvilinear shapes. OK, that's the first thing. And so then, they argue that in general, scenes have more rectilinear structure than curvilinear structure. And they did a bunch of math to make that case. And so they argue that maybe the apparent scene selectivity of the PPA is due to a what of scenes with rectilinearity? Yeah. AUDIENCE: Confound. NANCY KANWISHER: Yes, exactly, a confound. This is exactly what a confound is-- something else that covaries with the manipulation you care about that gives you an alternative account, namely it's not scene selectivity. It's just rectilinearity. I mean, that might be interesting to other people, but it would make it not very relevant to navigation and much less interesting to me, at least. So that's an important criticism. And so then the Bryan et al. paper that you guys read starts from there and says, let's take that seriously. Let's find out.
And so you guys should have read all of this, but just to remind you, they have a nice, little 2 by 2 design-- remember we talked about 2 by 2 designs-- where they manipulate whether the image has a lot of rectilinear structure or less rectilinear structure, and whether the image is a place or a face. And what they find in the PPA is the same response to these. And it's higher to the scenes than the faces, and rectilinearity didn't matter for the scenes. So evidently, even though it does matter with these abstract shapes, in actual scenes and faces, it doesn't seem to be doing much. It's not accounting for this difference. Everybody get that? OK, let's talk about this graph. Are there main effects or interactions here? And what are those main effects or interactions? Yes, Cooley. AUDIENCE: There's many different scenes. NANCY KANWISHER: Yeah, of category, scene versus face. Anything else? AUDIENCE: What's the first one? What was the first thing? In PPA category, what's the subtype? NANCY KANWISHER: Oh, wait, this here-- these are scenes and those are faces. I'm sorry, and this is the code here. These are rectilinear versus curvilinear. Just one main effect, or is there an interaction, or another main effect? Just one main effect. These guys are higher than those guys. That's it. So that just tells you there's nothing else going on in these data other than scene selectivity. Rectilinearity doesn't interact with or modify scene selectivity, and it doesn't have a separate effect. Nonetheless, as we've been arguing with the whole Haxby rigmarole, does the fact that there's no main effect of rectilinearity in here mean that the PPA doesn't have information about rectilinearity? No, Josh, why? AUDIENCE: This little, tiny moment that could be-- you know, this is not the right experiment to-- NANCY KANWISHER: That's right. This is a big-- well, it's the right experiment, but not the right analysis.
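As an aside, the main-effect and interaction arithmetic for a 2 by 2 design like this one can be written out explicitly. The cell means below are invented to mimic the qualitative pattern just described (scenes above faces, rectilinearity doing nothing), not the paper's actual numbers:

```python
# Invented cell means (percent signal change) for a 2 x 2 design:
# category (scene vs. face) crossed with rectilinearity (rect vs. curv).
means = {
    ("scene", "rect"): 1.20, ("scene", "curv"): 1.20,
    ("face", "rect"): 0.40, ("face", "curv"): 0.40,
}

def cell(cat, rect):
    return means[(cat, rect)]

# Main effect of category: average over rectilinearity, then take the difference.
main_category = (cell("scene", "rect") + cell("scene", "curv")) / 2 \
    - (cell("face", "rect") + cell("face", "curv")) / 2

# Main effect of rectilinearity: average over category, then take the difference.
main_rect = (cell("scene", "rect") + cell("face", "rect")) / 2 \
    - (cell("scene", "curv") + cell("face", "curv")) / 2

# Interaction: does the rectilinearity effect differ between categories?
interaction = (cell("scene", "rect") - cell("scene", "curv")) \
    - (cell("face", "rect") - cell("face", "curv"))

print(f"category: {main_category:.2f}, rectilinearity: {main_rect:.2f}, "
      f"interaction: {interaction:.2f}")
```

With these toy means, only the category term is nonzero, which is the "just one main effect" reading of the graph.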
It's the big, average responses are the same, but maybe the patterns are different. That wouldn't directly engage with this, but we wanted to know, was there information in there about rectilinearity. So how would we find out? So this was your assignment, and I think most people got it right. But in case anybody missed it, we were zooming in on this Figure 4 here. So again, this is just the same basic design of experiment two. And now, let's consider what's going on here. So you guys read the paper and you understood what was going on here. What's represented in that cell right there? What is the point of this diagram? What are they doing here? And what does that cell mean in that matrix? You can't understand the paper without knowing that. Is it Ali? No, sorry. What's your name? AUDIENCE: Sheldon. NANCY KANWISHER: Sheldon, I've only asked you six times. Yeah, go ahead. AUDIENCE: So they want to see whether the activation patterns can better discriminate between rectilinearity of the same category of things or between categories of things with the same rectilinearity. So the first thing I said is to the left and the second one is to the right. And they-- NANCY KANWISHER: Sorry, wait, here and here? No. AUDIENCE: Right side. Yeah, so that part is discriminating between rectilinearity, and that side is discriminating between categories. And they take the differences of-- well, not the differences, they take how well it can distinguish between each of those categories and plot them down there. NANCY KANWISHER: Right, OK. That's exactly right. So this is how well it can discriminate plotted down here, but based on an analysis that follows this scheme. So what does that cell in there represent, that dark green cell? What is the number that's going to be calculated from the data corresponding to that cell? AUDIENCE: Similar piece of same rectilinearity and same pattern. NANCY KANWISHER: Exactly. Exactly. 
So just as if you want to distinguish chairs from cars or something else, if you want to know is there information about rectilinearity in there, you take these two cases which are the same in rectilinearity-- both high rectilinear, both low rectilinear for run one and run two-- and that's the correlation between run one and run two for those cells. That's the within rectilinearity case. And if there's information about rectilinearity, the prediction is those within correlations are higher than the between correlations, just as we argued a bit back with beaches and cities and everything else-- same argument. This is just presenting the data in terms of run one and run two, and which cells do we grab to do this computation. So each of the cells in there-- for each of the cells, we're going to calculate an r value of how similar those patterns are. A pattern for rectilinear scenes in run two, a pattern for rectilinear scenes in run one-- this cell is a correlation between those two patterns. How stable is that pattern across repeated measures? All right, so that's what that r value is. The two darker blue squares here are the r values for stimuli that differ in rectilinearity. And remember that the essence of the Haxby-style pattern analysis is to see if the within correlations are higher than the between correlations. In this case, the within correlations are within rectilinearity versus between rectilinearity. And so then they calculate all those correlation differences and they plot them as discrimination abilities. And so what this is showing us here is that actually, the PPA doesn't have any information in its pattern of response about the rectilinearity of the scene. However, if we take the same data, and now choose within category versus between category, ignoring rectilinearity, and we get the same kind of selectivity correlation difference within versus between for category, there's heaps of information about category. Does that make sense? 
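The within-versus-between correlation logic just described can be simulated in a few lines. This is a toy simulation with made-up voxel patterns, not the paper's data: each condition has a stable signal plus run-specific noise, and information about a distinction shows up as within-condition correlations across runs exceeding between-condition ones.

```python
import numpy as np

rng = np.random.default_rng(0)
n_vox = 200  # pretend voxels in the region of interest

# Stable "true" pattern for each condition, shared across runs.
scene_signal = rng.normal(size=n_vox)
face_signal = rng.normal(size=n_vox)

def measure(signal):
    """One run's noisy measurement of a condition's voxel pattern."""
    return signal + rng.normal(scale=1.0, size=n_vox)

scene_run1, scene_run2 = measure(scene_signal), measure(scene_signal)
face_run1, face_run2 = measure(face_signal), measure(face_signal)

def r(a, b):
    return float(np.corrcoef(a, b)[0, 1])

# Haxby-style comparison: same condition across runs vs. different conditions.
within = (r(scene_run1, scene_run2) + r(face_run1, face_run2)) / 2
between = (r(scene_run1, face_run2) + r(face_run1, scene_run2)) / 2

print(f"within r = {within:.2f}, between r = {between:.2f}")
# Category information is present when within > between.
```

The PPA result maps onto this scheme twice over: within minus between is large when the pairs are grouped by category, and near zero when the same data are grouped by rectilinearity.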
Again, if you're fuzzy about this, look back on that slide. I have lots of suggestions for how to unfuzzy yourself on it. So interim summary-- PPA responds more to scenes than objects. It seems to like spatial layout in particular. It does respond more to boxes than circles, but that rectilinearity bias can't account for scene selectivity. That's all very nice, but what is a whole other kind of fundamental question we haven't yet asked about the PPA? So we've been messing around with functional MRI, measuring magnitudes of response, trying to test these kind of vague or general hypotheses about what it might be responding to. Yes. AUDIENCE: Causation. NANCY KANWISHER: Yes, what particular causation? AUDIENCE: I guess like what role the PPA plays in the person seeing the scene. NANCY KANWISHER: Exactly. Exactly. Again, we can test the causal role of a stimulus on the PPA, all of the stuff I talked about did that. Manipulate the stimulus, find different PPA responses. But what we haven't done yet is ask, what is the causal relationship, if any, between activity in the PPA and perception of scenes or navigation? So far, this is all just suggestive. We have no causal evidence for its role in navigation or perception. All right, so let's get some. I'll show you a few examples. So one, as you guys have learned by now, is these rare cases where there's direct electrical stimulation of a region, and there's one patient in whom this is reported. This patient, again, is being mapped out before neurosurgery. They did functional MRI in the patient first. This is his functional MRI response to, I think, houses versus objects. Houses are not as strong an activator as scenes for the PPA, but they're pretty good. PPA responds much more to houses than other objects. And so that's a nice activation map showing the PPA. And those little circles are where the electrodes are, little, black circles.
So they know they're in the PPA because they did functional MRI first to localize that region. Now those electrodes are sitting there. And so first thing we do is record-- or first thing they did-- is record responses. They flash up a bunch of different kind of images, and they measure the response in those electrodes. And so what you see is in those electrodes right over there, 1, 2, 3, that correspond to the PPA, you see a higher response to house images than to any of the other images. And you see the time course here over a few seconds. Everybody clear? This is not causal evidence yet. It's just amazing, direct intracranial recordings from the PPA-- I think the only time this was ever done, because it's pretty rare to have the electrodes right there in a patient who's willing to look at your silly pictures, and all of that. But now, what happens when they stimulate there? So let's look at what happens when they stimulate on these sites 4 and 3 that are off to the side of the scene selectivity. And this is just a dialogue. We don't have a video, unfortunately. The videos are more fun, but this is just a dialogue between the neurologist and the patient. And the neurologist electrically stimulates that region and says, did you see anything there? Patient says, I don't know. I started feeling something. I don't know, it's probably just me. No, it's not you. And then they stimulate again. Anything there? No. Anything here? No. So that's right next to the side of the scene selective electrodes, right next door, a few millimeters away. Then, they move their stimulator over here. They don't move anything, they just control where they're going to stimulate. Patient, of course, has no idea. Neurologist says, "Anything here? Do you see anything, feel anything?" Patient says, "Yeah, I feel like--" he looks perplexed, puts hand to forehead-- "I feel like I saw some other site. We were at the train station." 
Neurologist cleverly says, "So it feels like you're at a train station?" Patient says, "Yeah, outside the train station." Neurologist-- "Let me know if you get any sensation like that again." Stimulates. "Do you feel anything here?" "No." And then he does it again. Did you see the train station or did it feel like you were at the train station? Patient, "I saw it." These are very sparse, precious data, but that's so telling. It's not that he knew he was at the train station abstractly. He saw it. So then, they stimulate again, right on those scene-selective regions. Patient says again, "I saw almost like, I don't know, like I saw-- it was very brief." Neurologist says, "I'm going to show it to you one more time." Really what he means is, I'm going to stimulate you in the same place one more time. "See if you can describe it any further. And to give you one last time, what do you think?" "I don't really know what to make of it, but I saw, like, another staircase. The rest I couldn't make out, but I saw a closet space, but not this one." He points to a closet door in the room. "That one was stuffed and it was blue." "Have you seen it before," neurologist, "Have you seen it before at some point in your life, you think?" "Yeah, I mean when I saw the train station." "Train station you've been at?" "Yeah." Et cetera, et cetera. So it's not a lot of data. But it's very compelling. What is the patient describing? Places he's in that he sees, and then he describes this closet space and its colors. Interestingly, colored regions are right next to scene regions, so that's kind of cool, too. So it's causal evidence. It's sparse. Ideally, we'd like more in science, but it's pretty cool. Yeah. AUDIENCE: At this point, the patient is just staring at a blank wall? NANCY KANWISHER: I actually forget in the paper. I've got to go look that up. I forget exactly what the patient was doing, whether-- I think he's just in the room looking out. 
Usually, they don't control it that much because it's done for clinical reasons, and the patient is in their hospital bed, and they're just stimulating. So he's probably just looking out at the space he's in. In fact, he must have been because at one point, he says, "The closet, not like that one over there." So if he was staring at a blank thing, he was also looking out at his room. So yeah. AUDIENCE: This may be a little bit off topic. You said that the region for color perception is very close to this, it seems like. Is there any relationship between functional proximity and-- NANCY KANWISHER: That's a great question. Nobody in the field has an answer to this. People often make hay about the proximity of two regions, like there's some deep link because this thing is next to that thing. The body selective region is right next to, and in fact slightly overlapping with, area MT that responds to motion. It's like, bodies move. Well, faces move and cars move, too. So I don't know. It's tantalizing. It feels like it ought to mean something. And people often talk as if it does. And maybe it does, but nobody's really put their finger on what exactly it would mean. But it's useful. So when Rosa Lafer-Sousa who you met in the color demo, and I showed that in humans, you get face, color, and place regions right next to each other in that order, that was really cool because Rosa had previously shown that in monkeys, the monkey brain it goes face, color, place in exactly the same order. And so we thought that's really interesting. That suggests common inheritance because that's so weird and arbitrary. Why would it be the same? So it can be useful in ways like that, at least. So we just went through all of this. So how does this go beyond what we knew from functional MRI? I'm insulting your intelligence. You know the answer to this. It goes beyond it because it tells you-- it implies that there's a causal role of that region in place perception, some aspect of seeing a place. 
Now, all of this about the PPA I just started in there because it's nice, and concrete, and easy to think about. But no complex mental process happens in just one brain region. Nothing is ever like that. And likewise, scene perception and navigation is part of a much broader set of regions. So if you do a contrast, scan people looking at scenes versus objects, you see not just the PPA in here. Again, this is a folded-up brain, and this is the mathematically unfolded version so you can see the whole cortex. Dark bits are the bits that used to be inside a sulcus until it was mathematically unfolded. So there's the PPA kind of hiding up in that sulcus. And when you unfold it, you see this nice, big, huge region. But you also see all these other regions. Now there's a bunch of terminology and don't panic. I don't think you should memorize everything about each region. You should know that there's multiple scene regions. You should know some of the kinds of ways you tease apart the functions, and some of the functions that have been tested, and how they're tested. But you don't need to memorize every last detail. Because it's going to get a little hairy. So here's a second scene region right there called retrosplenial cortex or RSC. And actually, Russell Epstein and I saw that activation in the very first experiments we did in the 1990s, but we really didn't know what we were doing back then. And we knew that this is right near the calcarine sulcus. Remind me, what happens in the calcarine sulcus? What functional region lives in the calcarine sulcus? It's just a weird, little fact, but it's kind of an important one that we mentioned weeks ago. V1, primary visual cortex-- that's where primary visual cortex lives. And remember, primary visual cortex has a map of retinotopic space, with next door bits of primary visual cortex responding to next door bits of space. And in fact, that map has the center of gaze out here and the periphery out there. 
So when Russell and I first saw that activation, we had the same worry that Cooley mentioned a while back. And that is the scenes are sticking out. There's stuff everywhere. The objects, there isn't that much sticking out. And we thought, oh, that's just peripheral retinotopic cortex. But it's not. It's right next to there and it's a totally different thing. And it turns out to be extremely interesting. You don't need to know all that. It's just silly, little history. There's a third region up there that's on the outer surface out there that used to be called TOS and is now called OPA. I'm sorry about that. You don't need to remember this. Know that there are at least three regions. But TOS slash OPA is interesting because there's a method we can apply to it that we can't apply to the others. What would that method be? AUDIENCE: TMS. NANCY KANWISHER: Yeah, TMS-- it's right out on the surface. You just stick the coil there and go "zap." So of course, we've done a lot of that. Can't get the coil into the PPA or RSC. It's too medial. And there's another region that we'll talk about more next time called the hippocampus. You saw the hippocampus when Ann Graybiel spent all that time digging in the temporal lobe to find that bumpy, little, dentate gyrus, approximately right in there. And so all of these-- and probably other regions, but these are the core elements of the scene selective regions that are implicated in different aspects of navigation. So when you have multiple regions that seem to be part of a system, that's an opportunity. Because now we have the possibility that maybe we could figure out different functions for different regions. And then maybe that would really tell us more than just scenes and navigation, end of story. It's kind of rudimentary. It would be nice if different aspects of the navigation story engage different parts of the system. So really what we want to know is, how does each of these regions help us navigate and see scenes. 
And I'm not going to answer that fully. The field is still trying to understand all of this, but I'll give you a few tantalizing little snippets. So let's take retrosplenial cortex right here. So this is first the response of the PPA right there, and retrosplenial cortex, which is just behind it. This is just its mean response to a bunch of different kinds of stimuli, showing you that it likes landscapes and cityscapes, scenes, more than a bunch of other categories of objects. And that's true of both the PPA and RSC. No surprises here-- they're both somewhat scene selective. But then in a whole bunch of other studies summarized in this graph here, Russell Epstein and his colleagues had subjects engage in different tasks while they were looking at scenes. In some tasks, they had to say where they were. He's at UPenn, and he showed his subjects pictures of the UPenn campus. And they had to answer all kinds of questions about what part of campus they were, where they were on campus, and also about which way they were facing given the view of the campus they were looking at. Then he also showed people familiar scenes and unfamiliar scenes, much like we did with our Tufts study. And he had object controls. And you can see the PPA doesn't care about any of that, doesn't care, really, if they're familiar or unfamiliar, doesn't care what task you're doing on the scene. You're looking at a scene, it's just going. So we didn't really tease apart functions there. But RSC responds differently in these conditions. It's engaged in both the location task and the orientation task. It responds substantially more when you look at images of a familiar place than an unfamiliar place. So this is the first time we've seen that in the same network. And so now, think about all the things you can do when you're looking at a picture of a scene and you know that place. You have memories of having been there. 
You can think about what you might do if you were there, how you would get from there to someplace else. All of those things are possible things that might be driving RSC. Another thing that might be driving RSC is that if you're looking at a picture of a familiar place, you orient yourself with respect to the broader environment that that view is part of. So when I showed you that picture of the front of Stata, you immediately imagine, I'm out on Vassar Street facing that way, roughly northwest, I think. If you look at a picture of a scene and you don't know that scene, it doesn't tell you anything about your broader heading in the broader world. So for all of those things, the RSC's function seems to depend on knowing that place. Perhaps the most telling case comes from a patient who had damage in retrosplenial cortex. And the description in the paper of this says that this patient could recognize buildings and landmarks, and therefore, understand where he was. So lots is intact-- can recognize scenes and know where he is. But the landmarks he recognized did not provoke directional information about any other places with respect to those landmarks. So this person can look at a picture and say, yeah, I know that place. That's the front of my house. But then if you say, in which direction is a coffee shop two blocks away, he doesn't know which way it is from there. So this should sound familiar. This is my guess of the bit that my friend Bob got messed up. Yeah. This is exactly his description-- he could recognize places, but it wouldn't tell him how to get from there to somewhere else. And so the best current guess about retrosplenial cortex is that it's involved in anchoring where you are. You have this mental map of the world, and you have a scene, and you're trying to put them together. Given that I see this, where am I on the map, and which way am I heading in that map?
Again, think about the problem you face when you emerge from the subway in Manhattan. You look around. Where am I, and which way am I heading? That's what you need retrosplenial cortex for. How about this TOS thing? There's lots of studies of it. I'll give you just one little offering. So this is a causal investigation because as we discussed, the TOS is out on the lateral surface. So we can zap it. And so of course, we do. And so in this study, we were asking whether TOS is involved in perceiving the structure of space around you. So we took scenes like this from CAD programs, and we just varied them slightly. So for example, the position of this wall moves around, the aspect ratio, the height of the ceiling moves around, and we make this subtle morph space of different versions of this image. And then for control condition, we do the same with faces. We morph between this guy and that guy, and make a whole spectrum in between. And then in the task, what we do is here's one trial. One of the scenes or faces comes on briefly, and then shortly thereafter, you get a choice of two, and you have to say which of these matches that one. And then what we do is we zap people right after we present this stimulus. And so the idea is this is as close as we can get to a pretty pure perceptual task. How well can you see the shape of that environment or the shape of that face? You don't have to remember it for more than a few hundred milliseconds. So it's really more of a perception task than a memory task. And what we measure is, we actually muck with how different these two images are in each trial, and measure how far apart they have to be in morph space for you to be about 75% correct. That's the standard psychophysical measure. The details don't matter. But our dependent measure is, how different do the stimuli have to be for you to discriminate them as a function of whether you're getting zapped in TOS or not. And so here are the data. 
So let's take the case where you're doing the scene task here. What this threshold is, is again, how different the stimuli need to be for you to discriminate them. So the higher the bar, the worse the performance. They have to be really different or you can't tell them apart. And so what you see is when you zap OPA, that lateral scene selective region, discrimination threshold goes up a bit. That means you get worse at the discrimination. The stimuli need to be more different. Compared to zapping the top of your head-- remember, you always want a control condition, and there's no perfect control condition because it feels different to be zapped in different places. But getting zapped up here is a better-than-nothing control. And then here's the occipital face area. That's the lateral face region we talked about before when I showed you another TMS study. Basically, whenever there's anything lateral, we zap it because we can. And see, it's not affected here. Zapping the occipital face area does not mess up your ability to discriminate the scenes. However, in the face task, we see the opposite pattern. For the face task, zapping the occipital place area doesn't do anything compared to zapping the top of your head, but zapping the face area does. This is a double dissociation. If we just had the scene task, it would be like, yeah, maybe. Who knows. Maybe, who knows why. But it's not very strong. But when you have these opposite things, then we really have much stronger evidence that these two regions have different functions from each other. Everybody get that this is a double dissociation, in the same sense as when you have one patient with damage in one location and another patient with damage in another location and they have opposite patterns of deficit, then we're really in business. Then we can draw strong inferences. So we just said all of that. So that's just a little snippet.
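The dependent measure described here-- how far apart in morph space two stimuli must be for roughly 75% correct discrimination-- can be illustrated with a small sketch. All accuracy numbers below are invented for illustration, not data from the TMS study; the interpolation is just one simple way to estimate a psychometric threshold.

```python
# Sketch: estimating a 75%-correct discrimination threshold by linear
# interpolation on a psychometric curve. Accuracies are hypothetical.

def threshold_75(morph_steps, accuracy, criterion=0.75):
    """Find the morph distance at which accuracy crosses the criterion,
    interpolating between the two bracketing measurements."""
    points = list(zip(morph_steps, accuracy))
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if y0 < criterion <= y1:
            # linear interpolation between the bracketing points
            return x0 + (criterion - y0) * (x1 - x0) / (y1 - y0)
    raise ValueError("criterion not crossed in measured range")

# Hypothetical scene-discrimination accuracy at 5 morph distances,
# with and without TMS to OPA. Zapping shifts the curve rightward:
# the stimuli must differ more to reach the same performance.
steps = [10, 20, 30, 40, 50]            # % morph distance
no_tms = [0.55, 0.65, 0.80, 0.92, 0.98]
tms_opa = [0.52, 0.58, 0.70, 0.85, 0.95]

print(threshold_75(steps, no_tms))      # lower threshold: better discrimination
print(threshold_75(steps, tms_opa))     # higher threshold after zapping OPA
```

A higher threshold is worse performance, exactly as with the taller bars in the figure she describes.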
These and other data suggest that that region is strongly active when you look at scenes, and it seems to be involved in something like perceiving-- just directly online perceiving the structure of the space in front of you. So we already did retrosplenial cortex. And next time, we'll talk about the hippocampus in there, and its role in the whole navigation thing. Now, since I have ended early-- a rare event-- I actually put together a whole other piece of this lecture, and I thought, no, don't always have a part you don't get to. But then it turns out we do get to it. We're going to go over this more later, but we're going to start with this business right here. So anybody have questions about this stuff so far? OK, so I've spent a lot of time talking about multiple voxel pattern analysis, because it's the only method I've mentioned so far that enables us to go beyond the business of saying how strongly do the neurons fire in this region to the more interesting question of what information is contained in this region. But I also ended the last lecture with this kind of depressing note-- that you can't see much with MVPA applied to face patches, even when we know there's information in there with electrophysiology data. Remember, I showed you that monkey study where they tried MVPA in the face patches in monkeys and they couldn't kind of read out a damn thing. And then they tried MVPA on individual neural responses of the same region, and they can read out all kinds of information. And that tells you the information is there and we just can't always see it with MVPA. Now today, you've seen cases where you can see stuff with MVPA in the scene region. So sometimes it works, sometimes it doesn't. And when it doesn't work, we're left in this unsatisfying situation that we don't know if the information isn't there or if the neurons are just so scrambled together that we can't see the different patterns. So bottom line, we need another method.
MVPA is a whole lot better than nothing, but we want to be able to ask, is there information present in this region even when we think the relevant neurons are all spatially intermingled? So let me just do a little bit of this and then we'll continue later. So goal-- this new method is called "event-related functional MRI adaptation." And we use it when we want to know if neural populations in a particular region can discriminate between two stimuli, two stimulus classes. So for example, do neurons in the FFA distinguish between this image and that image? So if we want to know that, we could measure the functional MRI response in the FFA and find-- this would be an event-related response-- similar responses to the two. And as I just mentioned, that wouldn't mean that there isn't information in the FFA that discriminates that. It just says they have the same mean response. Everybody get that? Now, if we zoom in, and think about what the neurons might be doing, it's still possible-- even with the same mean response-- that neurons could be organized like this, with some of them responding only to this image and some of them responding only to that image. But it's also possible that all of the neurons respond equally to both. And we kind of desperately need to know-- I mean, not in this case. This is a toy example, obviously. But we often, when we're trying to understand a region of the brain, we need to know which situation we're in. So that neural population can discriminate these two and that one can't. How are we going to tell which is true? Well, we talked before about multiple voxel pattern analysis, but as I just said, it only works when the neurons are spatially clustered on the scale of voxels. So imagine you have these situations here. This is getting more and more of a toy example, but just to give you the idea. Suppose where those neural populations land with respect to voxels is like this.
So if each of these is a voxel in the brain, a little, say, 2 by 2 by 3 millimeter chunk of brain that we're getting an MRI signal from, if you have the different neural populations spatially segregated enough that they mostly land in different voxels, then MVPA might work here. Is that intuitive? Do you guys all see that? Then we'd get a different pattern in these voxels if we're looking at those two different images. But even if we have the situation here, which is kind of informationally the same, if they're spatially scrambled so that they're in roughly equal proportion in each voxel, MVPA won't work. Does that make sense? And so that's when we need this other method called "functional MRI adaptation." Make sense? I'm going to go one minute over probably. So the point of functional MRI adaptation is it can work even when there's no spatial clustering of the relevant neural populations on the scale of voxels. So let me go through it quickly and we'll come back to it later. So here's how it goes-- the basic idea is, any measure that's sensitive to the sameness versus difference between two stimuli can reveal what that system takes to be same or different. So for example, if a brain region discriminates between two similar stimuli like these, then if we measure the functional MRI response in that region to same versus different trials-- so this would be a different trial. You present Trump and then the chimp back to back. That's one trial, compared to a same trial, chimp and then chimp. And of course, we counterbalance everything, so we also do chimp and then Trump in another different case and then Trump and then Trump in another same case. If we find that the neural response is higher when the two stimuli are different than when they're same, then we know that that region has neurons that respond differentially to the two. So remember, we started with a case where the mean response is the same to this image and this image if you just measure them alone. 
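The voxel logic she's describing can be made concrete with a tiny simulation. Everything below is a made-up toy, not real data: two neural populations, four voxels, and a "decoder" that just sums how much the voxel patterns differ between the two stimuli.

```python
# Toy sketch of why MVPA needs spatial clustering at the voxel scale.
# One population fires to stimulus A, the other to stimulus B; a voxel's
# signal is the fraction of its neurons that are currently active.

def voxel_pattern(pop_a_fraction, stimulus):
    """Response of each voxel: pop_a_fraction gives the share of
    A-preferring neurons in that voxel; the rest prefer B."""
    active = {"A": lambda f: f, "B": lambda f: 1.0 - f}[stimulus]
    return [active(f) for f in pop_a_fraction]

# Segregated layout: each voxel is dominated by one population.
segregated = [0.9, 0.9, 0.1, 0.1]
# Intermingled layout: every voxel holds a 50/50 mix of the two.
intermingled = [0.5, 0.5, 0.5, 0.5]

for layout in (segregated, intermingled):
    pat_a = voxel_pattern(layout, "A")
    pat_b = voxel_pattern(layout, "B")
    # A pattern decoder can only work if the patterns differ across voxels.
    difference = sum(abs(a - b) for a, b in zip(pat_a, pat_b))
    mean_a, mean_b = sum(pat_a), sum(pat_b)
    print(difference, mean_a, mean_b)
```

The key point the toy makes: the summed (univariate) response is identical in every case, but the voxel-wise pattern difference is large in the segregated layout and exactly zero in the intermingled one-- which is why MVPA succeeds in the first situation and fails in the second even though the neural information is the same.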
But now we want to know, do we really have neurons that respond differentially? So we're using the fact that neurons are like people and muscles. If you keep doing the same thing to them, they get bored. Been there, done that. So you present this back to back. You get a lower response than if you present this and then this. That's called "functional MRI adaptation." It's like that waterfall MT adaptation we talked about before, but just crammed into a fine time scale. And so then if you do that, you can ask what a region thinks is the same. So then, we could ask, what about these two images? Does it think those are the same? And if we find a response like that, what have we learned? So if these two respond like that, what have we learned about a region that shows this? This is all fake data, obviously, but if we saw that, what have we learned? And then I'll let you go, as soon as I get a nice answer to this. Yeah. AUDIENCE: So if it's the same between two pictures of the same stimuli, that means that it's activated. It can discriminate. But if the yellow is at the same degree as the red, it would just be the brain reacting to different pictures. NANCY KANWISHER: You totally get that. It's probably right, and you totally get it. Key point-- just because I don't want to torture you guys and go way over-- but the key point is, the "same" response is the lower response. We tell that with this case, and we actually give it a same one. So same is lower than different. That's just how this method works. Then we're basically asking, does that count as the same to this brain region? And we're finding, yes, it does. That tells us that those neurons are invariant to all kinds of things-- viewpoint, facial expression, when he last dyed his hair, who the hell knows, all these other things. So we'll talk more about this. But the idea is, now we have another method in addition to MVPA that can start to tell us what neurons are actually discriminating. OK, sorry to go over.
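The adaptation logic can be sketched in a few lines. This is a hypothetical toy, not a model of real neural data: each population fires to the stimuli it prefers, and a population that just fired responds at reduced strength on an immediate repeat (the "been there, done that" suppression). The suppression factor and population definitions are invented.

```python
# Sketch of fMRI-adaptation logic: if a region's neurons discriminate two
# stimuli, "different" trials evoke more total response than "same" trials;
# if the neurons respond equally to both, the two trial types match.

SUPPRESSION = 0.5  # a just-fired population responds at half strength

def trial_response(populations, first, second):
    """Summed response to the second stimulus of a back-to-back pair.
    `populations` maps each population to the set of stimuli it fires to."""
    total = 0.0
    for prefers in populations.values():
        if second in prefers:
            adapted = first in prefers   # did it also fire to the first one?
            total += SUPPRESSION if adapted else 1.0
    return total

# A region whose neurons discriminate the two images:
selective = {"pop1": {"chimp"}, "pop2": {"trump"}}
# A region whose neurons respond equally to both (no discrimination):
invariant = {"pop1": {"chimp", "trump"}, "pop2": {"chimp", "trump"}}

for region in (selective, invariant):
    same = trial_response(region, "chimp", "chimp")
    diff = trial_response(region, "chimp", "trump")
    print(same, diff)  # adaptation effect = diff - same
```

In the selective region the "different" trial beats the "same" trial, so the mean response reveals the discrimination even though neither stimulus alone does; in the invariant region the two trial types come out equal, which is what "counts as the same to this brain region" means.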
MIT 9.13 The Human Brain, Spring 2019 -- Lecture 21: Brain Networks. [SQUEAKING] [RUSTLING] [CLICKING] NANCY KANWISHER: Before we get on to the topic for today, I felt like last Monday's lecture was not my best. I don't know why. It's not that I didn't put time on it. I looked and I had the wrong lecture numbers on slides. There was all kinds of chaos. I'm sorry about that. Sometimes you put in a lot of effort, and you still give a lecture that isn't all that clear. So let me try to tell you what I thought were the main points. I started off by saying why it's really fundamentally important to be able to understand not just what people look like from the outside, but what we really care about with people-- what's going on on the inside, their thoughts and their beliefs. And we are constantly making inferences about what people know, and believe, and want, and think. And we do that all the time to understand why they're doing something and to predict what they'll do next. And so it's fundamentally important. It's the essence of being a human being in many ways. It's the essence of literature. And we do it all the time. And a classic way that people have studied false beliefs is-- or thinking about other people's thoughts is with the false-belief task-- the Sally-Anne task that I described. And the reason people use the false-belief task rather than just a belief task is if the beliefs are true, you can answer what somebody will do based on the world, not based on their mind. And so to unconfound the two, we use a false belief that's different than the state of the world. So we can be sure we're asking people what will happen next based on what that person believes. And through decades of use of the Sally-Anne false-belief task and variations of it, it's clear that there's a very distinctive developmental time course in the ability to solve this problem. Five-year-olds solve it no problem. Three-year-olds systematically fail.
And people with autism typically get-- pass this task late or not at all. High-functioning people with autism pass the task, just later, like 7, 8, 9 years, not five years old. OK? So that's the kind of behavioral background evidence that there's something distinctive about thinking about other people's minds. Then we considered whether there are special brain mechanisms. And I argued that, yes, there are. There's a bunch of them. But the most impressively selective one is the TPJ shown up there. And the evidence that it's specifically involved in thinking about other people's thoughts comes from the fact that that activation is a greater activation when you think about-- when you solve the false-belief type problems, when you think about another person's thoughts compared to when you think about a representation, a physical representation, like a photograph or a map, OK? So those are logically isomorphic problems. We have to always answer a question about a representation. It's just the representation is in somebody's head or it's a physical representation in the world. And in that difference, you get that region of the brain, OK? So that's cool because it's a very nicely designed-- it's not quite a-- nothing's a minimal pair, but it's a minimal pair in some respects, right? It carves out whether the representation is mental or physical, but it doesn't solve everything. And a suite of other tasks have shown that that region is actually specific in a whole bunch of other respects. It doesn't respond just whenever you think about a person. There are external properties. And most impressively, it does not even respond when you think about their visceral body sensations, like thirst, and hunger, and pain. So the TPJ doesn't respond to just thinking about any mental states of another person. It's specifically thinking about their thoughts and beliefs. And that's pretty damn remarkable, right? It's, how abstract can you get?
And yet, here's a specific brain region for that very abstract, very specific thing. And the final little bit of evidence I showed you is that it also generalizes. It's not just about language because you can show people movies that have no words in them but that clearly show characters who must be thinking about each other's thoughts. And in those moments when the characters are thinking about each other's thoughts, that region turns on more, for example, than when they're thinking about each other's pain, OK? Yeah. AUDIENCE: Did somebody look at what happens if other people think about me? Or if I think they're thinking about me? NANCY KANWISHER: You mean if you're thinking about other people thinking about you? AUDIENCE: Yeah, exactly. NANCY KANWISHER: Yeah, I would assume that that region would be engaged. I'm sure there are studies on that. Because you're thinking about their thoughts, right? AUDIENCE: But more? Is it more interesting? NANCY KANWISHER: Probably just because it's more salient, right? I actually, oddly, in this class, I talk about attention at the end, which is weird because attention is an issue with every study. But most of the brain regions we've talked about, surely, including this one, can be modulated by how strongly you're attending to something. So if something's really salient, or important, or you're really paying a lot of attention to it, you're going to get more activation. And if it's more interesting to think about what other people think about me than to think about what other people think about each other, you'll find some modulation in here, I'm sure. OK, and then finally, I talked about moral reasoning as a test case. It's not that the TPJ is selectively involved in moral reasoning. It's that many of the critical aspects of moral reasoning depend on what a person knew at the time. And so to use that information in moral reasoning, you need to pull that region in. So I realized I wrote that question ambiguously on the quiz.
I meant to ask, is the TPJ engaged specifically or only in moral reasoning, to which they answer-- the correct answer would be no. But I didn't put the "only" in there. And I decided it was ambiguous, so if you said "yes," you got the points. Anyway, I gave several bits of evidence, using the moral-reasoning case, that the TPJ is-- it's stronger evidence it's involved in thinking about other people's thoughts, first, that we showed that people with autism will have this difficulty in thinking about each other's thoughts. Even once they can pass the false-belief task, they put less emphasis, less weight on what the person knew at the time when evaluating the moral status of their actions, OK? It's not that they make a mistake or that they're unable to morally reason. It's a pretty subtle thing. It's just a small difference in how much they weigh what another person knew at the time, OK? That's also known as less forgiveness or less exoneration for accidental harm, right? You kick somebody by accident, and they go ouch, well, maybe you get a little bit of blame because you were a klutz and you should have thought or something. But it was an accident, so you should be exonerated. People with autism exonerate slightly less, right? OK. So we then talked about the fact that if you zap the TPJ with TMS, stick a coil there, you do the same thing. You slightly reduce the weight people put on what the person knew at the time in their moral evaluation of the person's actions. And then I showed this bizarre fact that, as I think Gisella asked, well, shouldn't the TPJ be different in people with autism? Yes, absolutely, according to all of this, it should be. But the basic univariate measures-- how big it is, how selective it is, where it is-- do not find a difference in that region with that contrast in high-functioning people with autism. That's surprising.
But one possible answer to that is even though it's there and it's just as big and strong and all of that, it's-- that doesn't mean it doesn't represent different information. And I showed you an example that in typical people, you can decode from the TPJ whether the person is reading about another person's intentional harm or accidental harm. And you can't in people with ASD. OK? So that's my summary of last time. And then all of that was focusing very particularly on the most fancy, quintessentially human aspect of social cognition, which is this business of representing each other's thoughts and beliefs. But I pointed out at the end that there are also lots and lots of other facets of social perception and social cognition, many of which have somewhat selective brain regions, lots of them in other parts of the brain, and we just didn't have time to get into that, OK? Hopefully that was a little bit clearer than I was last time. OK, so, so far, in this course, we've been focusing on all these bits of brain that seem to do very distinctive, often very selective things, OK? So the one we've just been talking about is that little guy right there. But we've talked about a lot of these things in here. And the field of human-cognitive neuroscience has invested lots of effort to find these things and try to characterize what each of them does. And that's pretty cool, right? This is all stuff we didn't know 20 years ago, and it's nice, and it counts as real progress, I think. But it leaves lots of things woefully unanswered. None of these regions can act alone, even though I've depicted them in a somewhat silly fashion, as nice little M&Ms on the brain. None of them act alone. None of them could act alone. They need information to process, so there has to be input to each region. They need to be able to tell other regions what they figured out or there's no point.
And probably, as they solve a problem, as they conduct their computations, they're probably interacting all the time with lots of other regions. So we desperately need to understand not just that this patch does faces. There's lots more we need to know. And one of the things we really need to know is, what is it connected to, and who is it interacting with? OK? OK. So that's what I just said. And so that means looking at not just the cortex that we've been focusing on through this whole course, this dark matter that's on the surface of the brain up there. But today, we're going to do a figure-ground flip on the brain, and we're going to start paying attention to all that stuff that used to be background down there, all that white matter underneath, which is like a big heap of myelinated fibers that connect long-range regions of the brain to each other, OK? So you might say, OK, just wires-- who cares about the wires? That was my attitude for a long time. I've gotten over it. We desperately need to know about the wires for all kinds of reasons. So I'm going to go through a whole bunch of reasons. And there's a lot of little details, and I don't want you to panic. I just want to give you the gist of why this is worth paying attention to. OK, so first of all, white matter makes up 45% of the human brain. So that alone tells you it's not like some trivial thing. It's a big part of your brain. This is all the more interesting because that's not true in other animals. So I think white matter makes up a higher percent of the human brain than any other animal, or at least we're way up there. In mice, it's only 10%. And maybe that's a relatively uninteresting thing about scaling with brain size, but maybe it's something deeper about human brain-- what's special about the human brain and nobody knows. And here's a fun fact. If you took all the myelinated fibers in the human brain and you laid them out end to end, you could go around the world three times. 
So we've got lots of cableage sitting in here. OK. So I briefly argued before that you simply cannot understand the cortex without understanding its connections of one region to another. It's just crazy to study one little patch of the brain and not know who it's talking to and where it gets its inputs from. And as I just said, the pressing need for that knowledge is heightened by the presence of this map, which we didn't use to have. Now that we have this map, it's all the more important and pressing to know what the connections of those regions are. OK. So here's a nice quote making this point. This is Heidi Johansen-Berg and Matt Rushworth. They say, "Connectivity patterns define functional networks. The inputs to a brain region determine the information available to it, whereas its outputs dictate the influence that brain region can have on other areas. Therefore, simply by knowing the pattern of inputs and outputs of a brain region, we can begin to make inferences about its likely functional specialization." So I think that's a nice quote. It makes the point that it's not just that we need to know the connections, but the connections and the function are bound to be deeply enmeshed. One constrains the other. Yeah? OK. Further, recall way back, which will quite possibly return on the final exam-- how do we define a cortical area? I gave you criteria for a cortical area. And one of the criteria was a distinctive pattern of connectivity, right? So part of the identifying properties of a cortical area is what it's connected to. And so that's another reason we should care. A third reason is if we knew of a given cortical area what its long-range connections were to lots of other regions, that connectivity fingerprint-- remember we talked briefly about connectivity fingerprints a month or so ago?
That fingerprint, the distinctive set of connections of that region-- you can think of it as a signature of not just how that region differs from other regions in that same individual brain but how we might find a homolog of that region in another species. And that would be a very interesting thing to do. Wouldn't it be cool to know, is there a TPJ in macaques? Well, macaques can't solve an analog of the theory-of-mind task. Chimps-- we could debate a little bit. In narrow domains-- kind of, sort of, a little bit, not really. Macaques-- no, OK? So is there a homolog? Is there a corresponding region that-- maybe we took that region, and we adapted it and made it work better so we could do better things with it? And if so, what is it doing in macaques, right? I mean, I think that's just a totally cool question. And in principle, one way to say what counts as "the same region" across species, which is kind of a weird question. They're different species, so what would "the same region" mean? One way to say what the same region means is to have a similar connectivity fingerprint, OK? So there are several studies that try to do that. I couldn't cram them into this lecture, but if you're interested in reading on it-- reading about it, shoot me an email. I'll send you some papers. OK. I also mentioned that the specific set of connections of a cortical region, particularly its inputs, play an important role in development. Remember the rewired ferrets? If you redirect the input to what would have been primary auditory cortex in a ferret and you have that input come in from the eyes, you can get what would have been primary auditory cortex to become a lot like primary visual cortex. So connectivity is important, not just in how a region functions and how we say what counts as the same across species but is probably also crucially important in the development of regions, OK?
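The connectivity-fingerprint idea can be sketched concretely: summarize each region as a vector of connection strengths to a shared set of target areas, and pick the candidate homolog with the most similar vector. All region names and numbers below are invented for illustration; real studies use many more targets and more careful similarity measures.

```python
# Sketch of matching regions across species by connectivity fingerprint:
# the candidate whose connection-strength vector best correlates with the
# human region's vector is the best guess at "the same region."
from math import sqrt

def correlation(x, y):
    """Pearson correlation between two fingerprints."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical connection strengths to five shared target areas.
human_tpj = [0.8, 0.1, 0.6, 0.2, 0.7]
macaque_regions = {
    "candidate_1": [0.7, 0.2, 0.5, 0.3, 0.8],   # similar fingerprint
    "candidate_2": [0.1, 0.9, 0.2, 0.8, 0.1],   # very different fingerprint
}

best = max(macaque_regions,
           key=lambda r: correlation(human_tpj, macaque_regions[r]))
print(best)  # prints: candidate_1
```

The design choice worth noting is that the fingerprint sidesteps anatomy entirely: two regions in different species never share coordinates, but they can share a pattern of who they talk to.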
I also showed you evidence that the visual word-form area-- we can pick out exactly where it's going to land in an individual brain by the connectivity fingerprint of that region before kids learn to read, further evidence that connectivity determines later function. OK. As if this is not enough, other reasons to care about white matter is that disruptions of white matter are at the root of many clinical disorders-- dyslexia, autism, developmental prosopagnosia, amusia, all of these things and others, for all of them, disruptions in long-range white-matter connections have been implicated as possibly playing an important role in the etiology of that disease. Aging-- most definitely decline in white matter is prominent in aging. Sorry to say, there's a 10% decrease in white-matter fibers per decade starting at age 20. Use yours now while you have them. Let's change the topic. OK. There's a lot of talk about how white-matter connections may change with experience, and learning, and plasticity. And that's a pretty patchy literature. And it's not a very impressive literature. The classic thing you probably learned in 9.00, that when you juggle, you get changes in white-matter connections from juggling expertise. Maybe. Maybe not. There's some problems with a lot of that literature. So it's an interesting question, but it's not clear what the strong answers are. And finally, I don't know about circuit design. Probably some of you do. But I gather that people who think a lot about circuit design-- one of the key features you need to take into account is wiring length. You want to keep wiring length short, right? You have conduction delays. You have heating. You have space taken up in circuits. All of those things are bad in circuit design, and they're bad in brain design too. So a lot of reason to think that a major factor in the design of brains, especially human brains, is minimizing wiring length because wiring length is very expensive metabolically. 
You've got to maintain ion gradients across cell membranes. It's expensive developmentally. These damn things need to figure out where to go, and if they don't go to the right place, you have a developmental disorder. And so it's probably a real constraint on designs of brains. OK, so that was a whirlwind-- lots of reasons to care about white matter and connectivity. Oh, plus, at least in animal research, there's a whole suite of amazing new methods for looking at connectivity in animal brains. And [INAUDIBLE] can tell us more about that than I could because she's working in a lab that's right at the forefront of developing those methods. Someday, we're going to apply those methods to a human brain-- I can't wait-- and get the whole wiring diagram. OK. Anyway, so what do we know about the connectivity of these regions? Well, you may be thinking, don't we already know all this stuff? After all, I showed you this diagram way back. And you've probably seen it every damn course you take in this department. It's in most textbooks in the field-- the whole wiring diagram of the visual system. So don't we already know all this stuff? So what's the big deal? Well, here's the big deal. That's a macaque brain. And in macaque brain, you can get the actual answer to what is the actual structural connectivity of this patch of cortex to that patch of cortex. There's a whole bunch of methods, but traditionally, you inject some dye here that's uptaken by neurons that travels along axons that goes here. You kill the animal, slice up the brain, and find that tracer over here. And then those two things are absolutely connected. That's the gold standard. And that's the basis of most of those studies, that method and variations thereof. But we can't do that in human brains. And so we do not have anything like this information in human brains. Yes, David. AUDIENCE: When was this done? NANCY KANWISHER: Oh, that is a compilation. This was published, in, I think, 1991. 
But that was a compilation of heaps of studies that have been done before that. It was a big review article looking at all of this literature, where lots of classic neuroanatomy people would do these things where they would inject tracers, and slice up brains, and look in other places. And it was just a vast amount of literature that did that for many decades. It's sort of fallen out of favor, even though these things are, at least-- I don't know-- these are really crucial questions. Now people use other methods to do that. You can use all kinds of optogenetic and other methods to map connectivity in animals. Yeah. AUDIENCE: I have a question. [INAUDIBLE] NANCY KANWISHER: In here? Oh, that's a good question. Oh, it's probably dorsal and ventral pathways. Let me see here. Yeah. Yeah, the red ones-- this is another thing I didn't even really mention, probably let alone give a short [INAUDIBLE].. That was lame. But anyway, the visual-- high levels of the visual system-- we focused on the ventral visual pathway coming down the bottom of the temporal lobe. But there's a whole other visual pathway that goes up into the parietal lobe. Did I talk about that a little bit-- reaching and grasping? No, I didn't. Lame, lame, lame, lame. Anyway, a major part of the field I didn't get to. Anyway, it's a whole other part of the visual system that seems to be more involved in visually guided action. And they're actually very interconnected, but they're trying to emphasize that the dorsal pathway is at least somewhat separable in monkeys. But my point is this is monkeys where they have the gold-standard methods, and they can actually discover the real connectivity. Sadly, we can't do those things in humans. And in humans, we have only three methods, and none of them are very good. 
So we'll talk about them today anyway because this is such an important question, but the bottom line is-- this drives me out of my mind-- we basically don't know the connectivity of any of those regions for sure in human brains. And somebody's got to solve that. Maybe one of you will go invent a method that works in humans that helps us solve that problem because it's actually, I think, really paralyzing to our field. So I'll tell you what we do know, which isn't much. But, you know, beggars can't be choosers. OK, the first method has been around for a few years, and that's gross dissection. And I mean gross, like that kind of gross-- so only good for post-mortem brains, but it's really quite amazing. This is a bottom view of the brain back and front. This is actually a physically dissected brain. Like, it takes a real serious neuroanatomist and lots of fancy methods-- I mean, not fancy methods but lots of careful, precise teasing apart of bits of brain. And you can actually see these big fibers coming up here. So if this is the back of the brain and we're looking up like this, what do you think those fibers are connecting right there? Big fiber bundle coming from deep down in the brain up to right in there. AUDIENCE: Is that thalamus to [INAUDIBLE]?? NANCY KANWISHER: Bingo. Exactly. OK, so that's the LGN right there. And this is called the optic radiation. It's this huge cable of fibers that come up. OK, first, here's the optic tract. Actually, I forget. That's not-- I think this is the optic tract that's been snipped there. Then it comes up in here, makes a stop in the LGN, and then this big batch of fibers comes up right there to primary visual cortex, OK? Everybody got that? OK, so you can actually see it in dissecting a dead brain. OK, that's pretty cool. But what if we don't want to wait for people to die? Often, we want to ask questions about a person right now in their brain. Do they have this disorder? Are they at risk of that disorder? 
What is their connectivity? So for that, we have two methods, and I'll talk about these two methods in the rest of the lecture. The first one is diffusion imaging. OK, so I talked about this briefly before. But let me remind you of what the basic principles are. So here is a picture of the optic nerve with a bunch of axons oriented like that. It's a big cable with a whole bunch of little fibers in there. And the basic kind of biophysics is that water wants to diffuse more along this length, following the orientation of the fibers, than it wants to diffuse this way, OK? And diffusion imaging-- I'm not explaining any of the physics, but just take it for my word that what diffusion imaging does is give you a picture of the direction of water diffusion, OK? So you get a picture of a piece of brain, and it'll show you, for example, that right in there, all the fibers-- well, the water is diffusing this way. And over here, the water's diffusing that way, OK? And that's just what you see in a diffusion image, OK? And so the inference people make is if you have all those parallel lines telling you there's lots of diffusion like this, there's probably a big fiber bundle going like that-- and there is. That would be the corpus callosum. OK? All right. So this method works great for finding the big fiber bundles. OK? I'm going to dis diffusion imaging in a bunch of ways, but it is great for finding the big fiber bundles because in those big fiber bundles, axons are very parallel. There's a whole bunch of them, and you can really see it. OK. And so people have been using this for over a decade to find some of the major fiber bundles in the brain. So you may have heard of the arcuate fasciculus that basically connects language regions in the temporal lobe up to Broca's area in the frontal lobe, OK? It's a big bunch of fibers that go-- I guess in me, they go like this, boom, right? 
[INAUDIBLE] And you can see those guys with-- this is a distant reconstruction, but you can see those with diffusion imaging. Another one goes from the front of the temporal lobe up to the frontal lobe. You don't need to memorize these. I don't care about that. I just want you to get the idea of what you can see. Yeah, question? AUDIENCE: So these are discoverable without the person having to do anything [INAUDIBLE]?? NANCY KANWISHER: Yes. Yes. These are anatomical images. So in diffusion imaging, you don't do anything. You can sleep, actually. That's ideal because it's long and boring. Actually, it's not boring, but the scanner shakes like hell in a diffusion-imaging scan. It's pretty wild. We could charge admission for it. I don't know. I find it quite wild. Anyway, this is the inferior longitudinal fasciculus as it goes down the temporal lobe. So when we talk about the ventral visual pathway-- face areas, place areas, all that stuff-- this is the big fiber highway that sits right on top of that whole chunk of gray matter that does all the processing. And it's a big pile of fibers that go straight down the temporal lobe, OK? OK. And so here's more recent data. This is from Anastasia Yendiki over at MGH Charlestown over there. And she's developed this lovely piece of software that enables you to take diffusion images and identify, based on an atlas she's put together, 18 of the major fiber tracts in the human brain-- so nine per hemisphere. And this is just showing you some of the big ones. This is the inferior longitudinal fasciculus I just showed you and so forth. Yeah. AUDIENCE: So how does this compare to just a postmortem dissection? Could you see the-- NANCY KANWISHER: It's a good question. It's a good question. I don't exactly know. It wouldn't be easy. That thing I showed you works because you take the stuff off the top, and it's just kind of sitting there.
But then you would have taken a lot of other stuff out, and you wouldn't be able to see that other stuff you had to take out. You know what I mean? So here, you can surf through and pick out any of these. So it's definitely going to be better. But which of these you can see with postmortem dissection I'm not sure-- some of them, some of the bigger ones, but probably not all of them. OK, so here are some of the major tracts. OK. But you can do a little bit more than just find them with diffusion imaging. You can also characterize them a little bit. And this is a whole universe. There's people who spend their lives with all kinds of fancy measures, and I'm just going to tell you about the most common one. So recall that the whole deal with diffusion imaging is it's looking for orientations of maximum water diffusion. So some parts of the brain have a systematic set of directions of water diffusion, and other ones don't, right? Inside a ventricle, the water can go any which way. There's nothing determining which way it goes. So this is called isotropic, because you have diffusion going in equal amounts in all directions, versus anisotropic, where it goes systematically more in one direction, one axis, than others, OK? Diffusion can't see the sign along an axis, like right-to-left versus left-to-right. It can just see that this axis is more prominent than this one, or this one, or this one, OK? So you're not actually seeing it move in a systematic direction. OK, so that's the basic signal. So what you can do is in a little patch of brain, you can ask not just, what direction has maximum diffusion, which is what I've been talking about so far. You can say, how much more does it go in that maximum direction than any of the others? Is it more like this or more like that? And you can imagine a whole spectrum in-between. OK, so you're just asking, how oriented is it? Is it totally oriented or just partially? And that measure is called Fractional Anisotropy, or FA.
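To make the idea concrete, here is a minimal sketch of the standard FA formula, computed from the three eigenvalues of a diffusion tensor. The formula is the textbook definition, not something stated in the lecture, and the function name and example eigenvalues are illustrative:

```python
import numpy as np

def fractional_anisotropy(evals):
    """FA from the three eigenvalues of a diffusion tensor.

    FA ranges from 0 (isotropic: equal diffusion in all directions,
    like inside a ventricle) to 1 (fully anisotropic: diffusion
    along a single axis, like a tight fiber bundle).
    """
    l1, l2, l3 = evals
    mean = (l1 + l2 + l3) / 3.0
    num = (l1 - mean) ** 2 + (l2 - mean) ** 2 + (l3 - mean) ** 2
    den = l1 ** 2 + l2 ** 2 + l3 ** 2
    if den == 0:
        return 0.0  # no diffusion at all; define FA as 0
    return float(np.sqrt(1.5 * num / den))

# Isotropic voxel: FA is exactly 0.
print(fractional_anisotropy((1.0, 1.0, 1.0)))   # 0.0
# Strongly oriented voxel (one dominant axis): FA close to 1.
print(fractional_anisotropy((1.0, 0.05, 0.05)))
```

The factor 1.5 normalizes the measure so that a single dominant eigenvalue gives FA approaching 1.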
It's prominent enough in the field that you should just learn this phrase. If you read any articles, especially clinical articles that use diffusion imaging, this is the first thing you'll see. OK, and so fractional anisotropy-- there's a bunch of fancy definitions. And we don't care. We just want the idea. It's just, to what degree is that little patch of brain in this little part of a tract that you've identified more like this or more like that, OK? Is it anisotropic or isotropic? OK? And so this has been used a lot to try to ask about the nature of fiber tracts in different groups-- young versus old, different clinical groups, autism versus typical, schizophrenia, you name it. Experience-- you train people up on a task-- do you change the FA, the Fractional Anisotropy, of some particular tract? OK? So it's just a characteristic of a tract: how oriented is it along the way? Is it super clean, totally oriented? Or does it have some isotropy mixed in? OK? All right, so this is all over the literature. Let me give you one cool example from the Gabrieli lab that came out recently. So they identified the arcuate fasciculus that I showed you before, going from-- basically, Wernicke's area curving around up to Broca's area up there, so that you can identify it anatomically in each subject individually, OK? Now you've got it. You've identified which voxels are part of the arcuate fasciculus. And then what they wanted to ask is, is the integrity, or the characteristics, of the arcuate fasciculus important for language-- for dyslexia? So they measured the FA along this tract. How oriented is it all the way along here? And then you get some kind of average. And they measured that in a bunch of kids with dyslexia and in a bunch of kids with no reading disability who are matched in other dimensions of non-verbal cognitive ability, OK?
And what they found is the fractional anisotropy was higher in the typical kids than the kids with reading disability, with dyslexia, OK? And from that, they implicated that this connection may play some role in dyslexia. OK? It's not totally obvious because one would think this region is connected to that region-- those are languagey regions, right? It's not visual regions. You think of dyslexia as a problem seeing the letters and which ones are oriented which way. And this suggests that, at least, higher-level connectivity between language regions may be implicated, OK? Yeah. AUDIENCE: So this just talks about the architecture and not the per unit information being [INAUDIBLE]?? NANCY KANWISHER: That's right. AUDIENCE: There's no information [INAUDIBLE].. NANCY KANWISHER: That's right. It's just saying, where are the wires, and how organized are the wires? Period. Yeah. AUDIENCE: So they won't flow through some technique as-- if I can mess with it, like I'm measuring it, if there's some means to miss the [INAUDIBLE] diffusion, will that have any effect? NANCY KANWISHER: I didn't quite get that. Say it again. AUDIENCE: So there's this water diffusion that's happening that I'm going to measure. But what if I have some way to intervene and change the rate of diffusion some such? NANCY KANWISHER: I don't know how you'd do that. I mean, it's a pretty basic physical property-- diffusion of water and how it's constrained by lipids, right? AUDIENCE: But that shouldn't affect, like the function of information flow [INAUDIBLE]?? NANCY KANWISHER: Ooh. I have no idea. I mean, that's a biophysical question I don't know about. But notice, this is a pretty distant proxy. You're mostly looking at water between axons, not even within them. And so it's just a proxy for, how well can we see those fibers and how they're oriented, OK? Yeah, it's pretty removed from actual signals going along the wires. Anyway, so everybody get the sense that-- you know, it's one little finding. 
But it implicates something about that tract in dyslexia. OK, so that's interesting. But first, it's just correlational. Lots of things are just correlational. Most of the stuff in this class is just correlational. Same is true here. But a little more seriously, it's not totally clear what fractional anisotropy means, OK? So there's a real tradition of treating high-fractional anisotropy as if that's good. After all, we're in a fiber bundle. Shouldn't all the axons be oriented nicely in there and not all scrambled? Surely, oriented is good, and scrambled is bad. Well, maybe, but sometimes, fibers cross a fiber bundle. So you can have a fiber bundle like this with other fibers crossing it. And so when that happens, maybe that's good. And so people use FA as a proxy for good fiber. People say "fiber integrity" even. But there's a whole question about what exactly it means. People will spend their lives looking at the biophysics of fibers and all the different things that fractional anisotropy and the other measures might mean. And it's actually pretty complicated and unresolved. Another challenge with fractional anisotropy is it's extremely vulnerable to artifacts. All of diffusion imaging is extremely vulnerable to artifacts, OK? And I'm going to give you an example of a study we did a few years ago. So I was, for a while, trying to work on autism. I've, more or less, given up because it's, as far as I can tell, impossible. But back while I was still trying, we scanned a whole bunch of kids with and without autism with diffusion imaging, OK? And at the time, there were about 50 published papers, almost all of which said one of the things you find with autism is that there's an underdevelopment of long-range connectivity and an overdevelopment of short-range connectivity. And so then people would free associate with all kinds of speculations about, OK, this explains aspects of the autism phenotype. 
They can't put different ideas together because their connections across the brain aren't as good. And they're obsessed with little details because they have too many local connections, and all kinds of suggestive, but very, very fuzzy ideas like that. So 50 papers pretty much all found underdevelopment-- lower fractional anisotropy in long-range connections, long-range tracts-- in autism, a very established finding. So we went in not to raise hell but just to kind of replicate some of those basic findings while studying some other things. And, in fact, when we did what everyone else does-- that is, the standard analysis-- you collect your diffusion-imaging data, and you eyeball it loosely, and if it really looks terribly tainted with artifact, you throw that subject out. And, otherwise, you keep it, and you analyze your data. And you look at the 18 fiber tracts that I showed you before. And you ask, which of those have higher or lower fractional anisotropy in kids with autism compared to typical kids? And the basic finding-- we replicated the usual finding. And that is, overall-- this is column A-- most of those tracts showed lower fractional anisotropy in the kids with autism than the typical kids, OK? Many of those differences were individually significant in individual tracts. Those are the ones with the asterisks. OK. So that's the standard finding in the literature, and we replicated it. However, we noticed that a lot of the data really seemed suspect. And we started measuring the amount of head motion between the kids with autism and the kids without. And guess what. Kids with autism move in the scanner more than kids without. And guess what. Diffusion imaging and fractional anisotropy in particular are highly influenced by head motion. So then we said, OK, let's get a little more careful. And so we did a more stringent analysis. And we looked at the kids. We had quite a few of them in each group.
And we took the subset of kids who we could match for head motion, OK? So now we've got the kids with autism and the typical kids, but we've now got the subset we have to choose to match for head motion, OK? It usually means the typical kids who move a little more and the autistic kids who had slightly less head motion. That's what you need to do to match them. And when we do that, the usual pattern disappears. Now there's only a single tract that shows lower fractional anisotropy in the kids with autism than the typical kids, OK? The inferior longitudinal fasciculus. So that's worrying. But then further, we thought, OK since many of those kids we had, especially the typical kids-- many of them we had scanned twice. So we thought, OK, let's really make this case. This is clearly a problem in the field. In fact, it's a broader problem. Pretty much any comparison across age groups or clinical groups-- one group moves more than the other group. Uh-oh. What about the entire literature? Hundreds, probably thousands of published papers, essentially, none of which pay attention to this-- this is 2014. People have cleaned up their act since, but up until 2014, almost none of them paid any attention to this whopping problem. So we figured we better make this point salient because there's a lot of money and time being wasted publishing garbage. And we want to make the point saliently. So we took the typical kids who we had scanned twice, OK? And sometimes a kid will-- the same kid will move more in one session than another session. So we said, OK, let's compare the very same kids on the session where they move more than the session where they moved less. And you know what? We replicated the autism phenotype. Those were typical kids, not autistic kids. The point is head motion alone will reduce fractional anisotropy and will look a whole lot like a clinical disorder. 
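The matching step described above can be sketched as a simple greedy pairing of subjects across groups on their head-motion values. This is an illustrative sketch, not the published matching procedure; the function name and tolerance are made up:

```python
def match_on_motion(group_a, group_b, tol=0.05):
    """Greedily pair subjects across two groups by head motion.

    group_a, group_b: per-subject mean head-motion values (e.g. mm
    of frame-to-frame displacement). Returns (i, j) index pairs
    whose motion values differ by at most `tol`, so the matched
    subsets have comparable motion. Unmatched subjects are dropped.
    """
    used_b = set()
    pairs = []
    for i, a in enumerate(group_a):
        best_j, best_d = None, tol
        for j, b in enumerate(group_b):
            if j in used_b:
                continue
            d = abs(a - b)
            if d <= best_d:
                best_j, best_d = j, d
        if best_j is not None:
            pairs.append((i, best_j))
            used_b.add(best_j)
    return pairs

# Subjects 0 and 1 in each group match; the high-motion outlier
# in group_b (2.0) is excluded from the comparison.
print(match_on_motion([0.1, 0.5, 0.9], [0.12, 0.52, 2.0]))
# -> [(0, 0), (1, 1)]
```

The point of the exercise is exactly what the lecture says: once the groups are matched on motion, any remaining FA difference can't be blamed on motion artifact.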
And so every time you see that a clinical disorder is marked by some anatomical difference, your first thought should be, how carefully did they deal with head motion and other artifacts that are going to differ between groups? OK? So I say that not to dis the entire literature but just to alert you that these things can really matter. The paper from the Gabrieli lab that I just described-- I looked, of course, before I presented it in here, and they cited us, and they used our methods for matching head motion-- good for them. So these things are changing, and I think the field will start cleaning up its act. But it's amazing it took this long. OK. All right, so finding fiber tracts and characterizing them with fractional anisotropy are nice, but, really, what we want to know is what's connected to what, OK? Which of these things are connected to each other? Which other brain regions are they connected to? And so to find that out, we need to not just study white matter itself and the tracts, but we have to get out of the tracts and into the gray matter. So we need to start in a patch of gray matter and figure out where we can go by following those axons, OK? And so the method for doing that is called tractography. So there are many versions of this. I showed briefly these pictures before. I'm sure you've seen these. The simplest idea of what you do, leaving out all the details, is you start in some gray-matter region, some voxel in the gray matter. You want to know what it's connected to. You just follow those little orientations, and you see where you can go. OK? And so that's basically how you make this diagram. OK? You're just following those. You start over here, you follow the orientation-- go do, do, do, do, do, do, do, right? OK? I mean, you do that in a computer, right? An algorithm does that, follows along-- ch, ch, OK? OK, so that's called tractography. And the idea's awesome-- how great to be able to see what's connected to what.
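The follow-the-orientations idea can be sketched as a toy deterministic tracker over a 2-D grid of principal diffusion directions. Real tractography works in 3-D and adds stopping rules (low FA, implausibly sharp turns); all names here are illustrative, not any package's API:

```python
import numpy as np

def track_streamline(vector_field, seed, step=0.5, max_steps=200):
    """Follow principal diffusion directions from a seed point.

    vector_field[i, j] is the principal diffusion direction (a unit
    2-vector) at voxel (i, j). Starting at `seed`, repeatedly step a
    small distance along the local direction, recording the path,
    until we leave the volume or hit max_steps.
    """
    path = [np.asarray(seed, dtype=float)]
    pos = path[0].copy()
    for _ in range(max_steps):
        i, j = int(round(pos[0])), int(round(pos[1]))
        if not (0 <= i < vector_field.shape[0]
                and 0 <= j < vector_field.shape[1]):
            break  # left the volume: stop tracking
        pos = pos + step * vector_field[i, j]
        path.append(pos.copy())
    return np.array(path)

# A field whose "fibers" all run along +x: the streamline goes
# straight across the grid from the seed.
field = np.zeros((10, 10, 2))
field[:, :, 0] = 1.0
path = track_streamline(field, seed=(2.0, 5.0))
```

Where directions are ambiguous (crossing or kissing fibers, sharp turns into cortex), a tracker like this has no way to pick the right branch, which is exactly the ill-posedness the lecture complains about.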
And there are many, many thousands of papers that do this for good reason. We need to know what's connected to what. This is our currently best method for looking at the structural connectivity of different gray-matter regions to each other. And so you can ask, for example, OK, let's put a seed in the fusiform face area and see where it goes. Wouldn't that be cool? Right. Wouldn't it be cool? Unfortunately, it doesn't work. So I have to tell you that I don't know if I'm the best person to report to this because I'm not-- I've only been trying to do this for a few years. But, I've been collaborating with the best people in the world over there at MGH Charlestown who are working closely with us. And we can't get this thing to work worth a damn. And so now I'm actually confused whether the entire literature is garbage. I don't think it's entirely garbage. But I think it's full of overoptimistic evaluations of what you can tell from tractography because in our hands, we started with reality checks, put a seed in the lateral geniculate nucleus. Let's make damn sure we can get up to V1. Well, you can get up to V1, but you can get up to V2, and V3, and V4, as well, which are all wrong, right? LGN only goes to V1. Worse, you stick a seed next door in the medial geniculate nucleus, which is the part of the thalamus that goes up to auditory cortex, you also end up in V1. Wrong. Wrong. Wrong. Wrong. There's not very many anatomical connections in the human brain where we actually know the right answer where we can do these reality checks, but of the ones we know that we've tried, it doesn't work. And we're using the best diffusion-imaging scanner in the world. It's right over there. So maybe I'm doing everything wrong. But at the very least, I think there are a lot of problems with this method. This is not just me worrying about this. And many people have been worrying about this for the whole 15, 20-year life of diffusion tractography. 
And some of the challenges are, like, famous. So to follow those little orientations, you need to-- you can see, like there'd be lots of places where, OK, there's a bunch of different ways you could go. It's ill-posed, right? So people use heuristics to constrain those solutions. And those heuristics are based on assumptions about how fibers bend in the brain, namely that they don't make really sharp angles, right? That's reasonable. Most of the time they don't, but sometimes they do. And in particular, when you're going from white matter to cortex, often, you make a very sharp turn. And so it's very, very difficult to figure out, in getting from a given gray-matter patch into the underlying white matter, exactly what the connectivity is. So that's one problem. Another famous problem with tractography is called the crossing-fiber problem. So imagine a bunch of axons somewhere in the brain that cross like this versus imagine a bunch of fibers in the brain that come up to each other and then go apart, OK? Everybody get this? The connectivity's totally different here-- no way you're ever going to distinguish those with diffusion tractography. So people try to get higher and higher resolution to see down those individual things, but they're not there. Yeah. AUDIENCE: Why would something like that happen? NANCY KANWISHER: Yeah, why would they do that? Weird stuff happens in the brain. So it's not incredibly common, but it's not unheard of. Yeah, remember, the brain wasn't designed, now, optimally, to solve all the problems it needs to solve with the optimal solution from scratch. It evolved gradually over time. And so there are all kinds of weird things that are workarounds for pre-existing decisions that evolution made earlier. And so both brain and body have lots of bizarre attributes that aren't how you would design it from scratch. They're just the fix that evolution made at that point given what had already been fixed. And so there's weird stuff like that.
Anyway, I mention this to say I'm more negative about diffusion tractography than probably anyone else because I've spent a lot of the last two years trying to do it, and it's a big bust, and I'm cranky. So it's probably not as bad as I'm laying out. Plenty of people do it. They get some kind of answers out of it, but it's problematic at least. My best guess is that it is OK for fingerprints. If you're asking, OK, here's some patch of brain. How much does it connect to, say, these 85 other regions? And is that different than the fingerprint for this region? That's probably OK because a lot of those individual solutions might be wrong, and there's still enough left over to see a kind of difference. So I feel like you can-- connectivity fingerprints are probably worth doing. But, actually, just answering the question of, is there a structural connection from A to B-- I don't know. I can't get it to work. OK? OK, so I think I did all this before just to-- connectivity fingerprints-- do you remember this? You start with one place, and you measure how well you can get from each location to each of these other ones. And I showed you before that the work of Zeynep Saygin and a bunch of other people has shown that you can-- actually, in an adult, you can predict where that adult's fusiform face area is just from their diffusion tractography data alone because it has a distinctive connectivity fingerprint, OK? I don't want to go through all that again. Do you guys remember that, more or less, the gist? OK, so that just tells you that there is this systematic mapping between the connectivity of a region and its function. And connectivity fingerprints, despite all these problems I've been carrying on about, have enough signal left in there to predict the function of a region and maybe to say something about homologies across species. OK, blah, blah, blah. Right. OK. So where does that get us? You can find the major fiber bundles with diffusion imaging. That's worthwhile.
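A connectivity-fingerprint comparison of the kind described (predicting which known region a fingerprint belongs to, as in the Saygin-style work) can be sketched as a nearest-fingerprint lookup by correlation. The atlas structure and region names here are a made-up illustration:

```python
import numpy as np

def best_matching_region(query_fp, atlas_fps):
    """Return the atlas region whose connectivity fingerprint best
    matches the query, by Pearson correlation.

    query_fp: vector of connection strengths from an unknown patch
    to N target regions (the lecture's "85 other regions" example).
    atlas_fps: dict mapping region name -> fingerprint vector.
    Even if individual tractography solutions are noisy, the overall
    profile can still distinguish regions.
    """
    def corr(a, b):
        return np.corrcoef(np.asarray(a, float),
                           np.asarray(b, float))[0, 1]
    return max(atlas_fps, key=lambda name: corr(query_fp, atlas_fps[name]))

# Toy atlas of two fingerprints over four target regions.
atlas = {"FFA": [5.0, 1.0, 0.0, 3.0], "PPA": [0.0, 4.0, 2.0, 1.0]}
print(best_matching_region([4.0, 1.0, 0.0, 2.0], atlas))  # -> FFA
```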
You can characterize fractional anisotropy. I don't really know what it means, but it means something. And you can find very approximate connectivity fingerprints good enough to predict function. OK, so that's worthwhile. But actual structural connections of one particular cortical area-- not very good. At best, it's a weak signal. So that's a drag. So let's consider the other method people have used to try to work these things out. And that's resting functional correlations. So let me describe where this story starts. This story starts with a paper in 1995 by Biswal, and this is the figure from his paper. So, first, he had people in the scanner doing finger-tapping. So they're lying in the scanner. He's scanning their brains while they tap their fingers or not. And you get these-- it's hard to see-- these two little bits of motor cortex corresponding to the finger-motor region. OK, no surprise there. We're just mapping a little bit of motor cortex. But then he does something cool. He looks at the time course over that experiment in one of those motor regions-- in one hemisphere, and he looks at the time course in the other hemisphere when the subject is at rest, not doing anything, OK? Sorry, I left this out. You scan them doing this, and then you scan them just lying there going, dum-de dum-de dum, or whatever you do when you don't have a task, OK? And he finds that these very far-apart regions at rest, when the subject is not tapping their fingers, are extremely correlated. So that's very not obvious. These things are centimeters apart. The subject isn't doing anything in particular. You're not telling them what to do. And they certainly aren't tapping their fingers. So why are these two bits of finger-motor cortex going up and down in lockstep like that? Well, nobody knows, actually. I mean, this was however-- 20-some years ago, right?
Still, nobody really knows why those damn things are going up in lockstep like that. But it's systematic, it's tantalizing, it makes you want to play more, and many people have. OK, so I would say we still don't know exactly why those things are going up and down together. But the pattern of brain regions that go up and down together has proven to be a whole fascinating window into the brain. OK, so that's our next topic here. OK, so here's another depiction of more exactly what you do. OK, so step 1-- you find a seed region in here, in left somatosensory motor cortex, OK? So that's that region there. You get its time course, OK? Sorry, at rest. You find that region, and then you scan the person while they're just told to do nothing in particular. You get the time course averaged over all those voxels at rest. There it is, OK? Now you take that time course. And you correlate it with the time course of every other voxel in the brain. And you say, show me all the voxels that are correlated with this region at rest. And you get this-- lots of systematic brain regions that are highly correlated at rest with that region you started with. Everybody get what we just did? OK, totally non obvious. Well, you might say, OK, fine. This is finger-motor cortex. This is the other one. That's what I showed you from Biswal before. But why this thing? Why this thing way down deep in the brain? Why that thing down in the cerebellum miles away in the brain? Why are they all in cahoots with each other. I like using "in cahoots" when talking about correlations because nobody knows what the correlations mean. So "in cahoots" is as technical as I think we should get. Yeah? AUDIENCE: --correlations without a time shift. Just [INAUDIBLE]. NANCY KANWISHER: Good question. Good question. Wouldn't we love to know about the time shift? But here's the problem. There should be a time shift because it takes a while to conduct down axons from here to here, probably a few milliseconds. 
But a few milliseconds we are never going to see with functional MRI. So, surely, there is a time shift, but this method can't exploit it, OK? Yeah, I'll just leave it at that. OK, but does everybody get what this map is? We've just chosen a seed region, a starting point just for the hell of it. And we've asked, what other bits of the brain are correlated with that region at rest? It's a pretty weird thing to do. And you wouldn't do it if you didn't find systematically replicable answers that are repeatable across subjects. And when that happens, you go, OK, I don't know what this means, but it's pretty systematic. Let's keep following the thread. OK? Question? AUDIENCE: What do you mean by correlated [INAUDIBLE]?? NANCY KANWISHER: OK, so let's do this again. You scan people moving their fingers. You find little-finger region here. Now you scan the same person just in the scanner. You say, I'm going to scan you for five minutes. Just close your eyes and don't do anything in particular. You lie there. I scan your brain. Now I take that region, which I found before. And I take the time course of that region while you were just lying in the scanner doing nothing, and I get some randomish-looking thing like this. Now I take that time course, and I say, let's see if there are any other voxels in your brain that were correlated when you were lying there with that time course. And I color them in, and there are lots of them, even regions that are far away. OK? It's really not obvious. You wouldn't have predicted this would happen, yeah? Yeah? AUDIENCE: So if you look at the brain just as some underlying resting rhythm and like just all regions of the brain just have some resting rhythm, wouldn't it be just always be [INAUDIBLE]?? NANCY KANWISHER: Yeah. OK, so there's been a whole suite of speculations of exactly that kind. Are there endogenous rhythms that are characteristic of particular brain regions and so those things go together? 
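The seed-correlation analysis described here (take the seed region's mean timecourse, then correlate it with every other voxel) reduces to a few lines. This is a toy sketch on fake data, not real fMRI code; the array shapes and the way the shared signal is constructed are invented for illustration.

```python
import numpy as np

def seed_correlation_map(data, seed_mask):
    """data: (n_timepoints, n_voxels) resting-run BOLD signals.
    seed_mask: boolean array over voxels marking the seed region.
    Returns the Pearson r of every voxel with the seed's mean timecourse."""
    seed_tc = data[:, seed_mask].mean(axis=1)      # step 1: seed timecourse
    # step 2: correlate that timecourse with every voxel
    z_data = (data - data.mean(0)) / data.std(0)
    z_seed = (seed_tc - seed_tc.mean()) / seed_tc.std()
    return z_data.T @ z_seed / len(seed_tc)

# Toy data: 200 timepoints, 6 voxels; voxels 0-2 share a slow fluctuation
rng = np.random.default_rng(0)
shared = rng.standard_normal(200)
data = rng.standard_normal((200, 6)) * 0.5
data[:, :3] += shared[:, None]

seed = np.zeros(6, dtype=bool)
seed[0] = True                      # seed on voxel 0
r = seed_correlation_map(data, seed)
print(np.round(r, 2))  # voxels 1 and 2 come out high, voxels 3-5 near zero
```

Coloring in the voxels where r exceeds some threshold gives exactly the kind of map shown on the slide: distant voxels that happen to share the seed's fluctuations light up, even though nothing about the analysis knows where they are.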
Maybe, but so far, that doesn't seem to be the main answer. For a long time, people thought, OK, is it just blood-flow supply? Maybe the blood-flow supply to the brain branches and feeds those regions, and that somehow regulates the bold response in those regions. There have been many accounts like this, and none of those seem to really capture it. It really seems like, probably, those neurons are firing in sync with each other, right? Yeah, question, Nava? AUDIENCE: Yes, before a question in direction-- if you have a time delay, I guess the question was because if you would have a time delay, you could see what's further right. Can you, instead of the time delay, see if you measure it in one of the regions that seems to be-- seems to be correlated-- if you measure from one of those, if you could estimate the distance based on how strongly they correlate [INAUDIBLE]? NANCY KANWISHER: Oh, with the thought of maybe there's not a time delay, but maybe you lose some of your correlation with distance. You could. But just looking at this, these guys are pretty far apart. So it's certainly not that there's just things are-- that nearby things have a similar. AUDIENCE: No, but I mean, if you would say those are five different regions, did you measure from region 1? NANCY KANWISHER: Yeah, yeah, I got you. Yeah. Yeah. Yeah, you could. I think that's not going to work because there are big, big correlations that people find between very distant regions. I'll show you more. Yeah. I mean, you could try that, and I'm sure people have done that. I can't tell you exactly where. Actually, they do it as part of their-- one of the common ways you normalize your data is to take this and normalize it for distance from the seed region, which would be a way to build that factor in. And once you do that, you still get lots of stuff. AUDIENCE: They take the distance and the image space for that, right? NANCY KANWISHER: You can do it different ways. 
There's an algorithm that somebody at MGH wrote that measures distance by the most likely white-matter path or as the crow flies, not that the crow can fly straight through the brain, but you see what I mean. Yeah. Sorry, go ahead. AUDIENCE: Is the result different if you measure the correlation when they're doing the finger-tapping action versus [INAUDIBLE]?? NANCY KANWISHER: Yeah. OK, so this is a really important question, and it's a whole part of this field that I'm leaving out of this lecture because I'm sort of suspicious of it. But your question is a good one. So you're saying, would they be correlated while you're finger-tapping? Well, certainly, if we did the paradigm while they're tapping both fingers, they're going to be correlated because we built the correlation into the task. We said, while you're doing this, do that. And so they will surely be correlated. And so there's a whole enterprise where people try to factor out those things and ask, even after you account for the activation of the task, are there changes in these patterns of correlation with the task you're doing? And that's called PPI, for psychophysiological interactions. And lots and lots of people do it-- hundreds, thousands of papers. It's probably pretty respectable, but it drives me nuts because I don't feel like there's any way you could know that you're fully accounting for the task. And so I think those correlations may be largely reflecting regions commonly activated by the task, and that's why I didn't put that in this lecture. But surely, task will also produce correlations, right? Let me just put it another way. If I flash up a bunch of faces versus-- it's like faces versus nothing-- and then we look at the correlations during that period, well, you'll find correlations between V1 and the FFA because when the faces are on, both V1 and the FFA turn on. And when they aren't, they both turn off. That's just a task response, right?
So to be able to look at how these endogenous correlations are affected by task, we would have to be certain we could siphon off the entire task effect so that we could look at just the residual. And I don't think any of our analysis are good enough to siphon off an entire task effect. And that's why I just don't go there with PPI, even though everyone else does. If you didn't follow that, it doesn't matter. I'm just trying to give you an answer. I'm going to take just questions of clarification now because there's a couple of things I really want to get to and I'm running out of time. OK. But everybody should understand this-- an activation map that's made by asking, which brain regions are correlated at rest with a given region I choose a seed region I choose? OK. OK, important caveat-- even though people call this "resting functional connectivity," we will not be using that phrase in this class because we do not know that it's connectivity in the structural sense. It's just a correlation, OK? And I'll say more about that later. But if you read about resting functional connectivity, it's the same thing. It's just, I think, people are making a mistake using that word. OK, so let me get this idea across here. You may have heard of the default-mode network. There's heaps of papers on this. It's a thing. There's a lot of discussion of it. And it's bizarre. It has arisen from two independent findings, OK? So let's do these findings one at a time. 
The first one is people started noticing around 15 years ago that across lots of different kinds of tasks, if you looked at not the intended direction, like, say, reading sentences versus staring at a dot, or doing a demanding working-memory task versus a really passive-viewing task, anything where there's a really engaging task versus an easy task, you would find a bunch of regions that were activated in the reverse contrast, regions that are more activated when you're doing less mental activity, typically, regions that are active when you're just lying there at rest compared to doing something difficult. And so originally, people were like, what's up with that? How can that be? It seemed paradoxical, impossible. But, in fact, it's not impossible, right? Suppose I had you do a bunch of mental-arithmetic tasks, and they're pretty demanding. And I compared that to just having you lie there in the scanner doing nothing. It's like, OK, do mental arithmetic for 20 seconds, rest 20 seconds, mental arithmetic for 20 seconds, rest for 20 seconds. Now imagine we find parts of your brain, systematic ones, that are more engaged at rest. What might that mean? David. AUDIENCE: That part of the brain could be, I think, like daydreaming. NANCY KANWISHER: Daydreaming, exactly. Yeah. You can't turn your brain off. You don't turn your brain off at rest. You daydream. Absolutely. What else? What are the things-- some of the things you do when you daydream? What are the typical contents of daydreaming? I guess it depends who it is and what you're daydreaming about, but there are very systematic things people do. They recall episodic memories. It's like, oh, yeah, before I got in here-- you replay things that were happening. And what else do you do? You think about people? Why? Because we're social primates, and that's what we care a lot about. You don't think only about people. Some of you guys might be trying to solve a math problem that you couldn't solve before. 
But most people in the scanner, when asked to do nothing, are recalling events, which usually involve people, or thinking about people, OK? So this whole suite of brain regions that was called the "default-mode network" is just the regions that are more engaged when nobody tells you what to do than for a whole bunch of things when they tell you what to do. And so it's some weird mix of daydreaming and other stuff. And the interesting thing about it is they're reasonably systematic. So those are the-- I keep getting confused here. Hang on. Let me get this right. Did I label this backwards? All right, they're the green guys. Yeah, deactivated during demanding tasks. Yeah. There's too many negatives for me here. The green guys here-- does that look familiar, that patch? What does that look like kind of, not exactly but kind of? Sorry? AUDIENCE: Visual, the visual system. NANCY KANWISHER: It's sort of near the visual system, yeah. It is, but it's also like something else we've been talking about recently. Our TPJ. It's a little further back, but it's right in the same region, OK? And here are these medial regions. There's a medial view of the left hemisphere, like, "take my right hemisphere out and look at the inside" view. That's this-- [INAUDIBLE] and sulcus, right? All these medial regions. It looks a whole lot like the social-cognition network that I talked about last time, that you identify with the contrast of belief task versus-- the false-belief test versus a false-photo test. So that's weird finding number one, that there's a systematic set of regions that are engaged at rest. They're called the default-mode network because they're what you do by default when nobody's controlling you externally. So that's finding number one, and there's finding number two. But first, Jack, did you have a question? AUDIENCE: Yeah. I was just wondering, does deactivated during demanding tasks necessarily imply that it is activated during not-demanding tasks?
NANCY KANWISHER: We're not distinguishing between those. We're just taking those two conditions. You'd have to have some third baseline to figure out whether those two were different. And that's very problematic because we have a problem saying what counts as a baseline, right? So we'll just compare those two, OK? So that's weird finding number one. And it's not that weird when you think about it further because, of course, you're doing stuff when you're lying there, right? But the further finding that really put the default mode network on the map was when people started putting seeds in parts of that default-mode network up here and finding that they got the whole rest of the network at rest, OK? So all of those things are correlated at rest. It's not just that they're all activated at rest. Their time courses are correlated at rest. So, actually, what people mean by default mode network now is not, I took the reverse contrast and all the stuff that activated more for rest than task, I call that default mode. Actually, what they mean is I stuck a seed in there during my rest scans, and I took all the stuff that was correlated with that position because those pick out, more or less, the same thing. OK? So that's led to a whole lot of discussion about what the default mode network is, and what it means, and what we can learn from it. That's what all this says. I'm just trying to figure out how I'm going to do this because I'm going to run out of time. Maybe I won't run out of time. We'll just go for it. OK. So people started messing around with these correlations at rest. And they found that you could find other systematic sets of regions if you stuck seeds elsewhere. And so another systematic region, a set of regions, is all the hot-color ones, the yellow and red ones here. And so if you look in there, you see various things-- the intraparietal sulcus, a bunch of frontal regions, the visual-motion area, MT, or other visual regions down there.
And they found that set of regions was strongly correlated with each other at rest. You stick a seed in here, and you get all that yellow stuff, OK? And then they looked at it, and they said, yeah, right, we've seen those regions engage. Whenever people do demanding-- potentially demanding tasks-- they've seen that before in other task contrasts. So all those things that turn on when you really have to pay a lot of attention and you're doing a really hard task-- all those regions do that, and they're also correlated with each other at rest. OK? And so there's this convergence of these two different lines of work-- task contrasts that just say, what makes a given set of regions turn on or off, and correlations-- which things are correlated at rest? OK? And so they're both converging here with these two different networks. OK, so I need to do a little sidebar on this other hot-color network, not the default-mode network but this other one. It was originally called "task-positive" because it turns on more when you do tasks than rest. I mean, that's a really vague statement, OK? But it also has lots of other names, and the name that we're going to refer to here is the multiple-demand regions, OK? Multiple demand comes out from another line of work that just converged with us. They're picking out pretty much the same set of brain regions. But "multiple demand" means lots of different kinds of cognitive demand activate those same regions. OK? So I can give you a difficult spatial working-memory task. I can give you a difficult perceptual orientation-judgment task, a difficult arithmetic task. In each of those cases, I can compare them to an easy version of the same task, and I'll get, more or less, those regions there, which are-- it's getting a little vague here, but they're pretty similar to the task-positive ones, OK? So this is both interesting and scandalous. 
It's scandalous because-- to me only, not to anyone else-- because unlike all the regions we've been talking about so far that have these very specific functions-- they just do face recognition or just theory of mind-- these ones will do anything, almost, anything difficult, OK? So whenever you engage in a difficult task-- I'm skipping over a whole literature-- it's a big literature on this-- but lots and lots of totally different kinds of tasks that have nothing in common other than they're very demanding-- you engage those regions. And in some ways, that's an even more fascinating puzzle. Like, what the hell would those operations be? What is in common between spatial working memory and arithmetic and line-orientation judgment and all the other things that have been shown to activate these regions when they're demanding? Nobody knows. I think it's a big, fascinating puzzle. Someday we'll have a computational story about what's actually computed in those regions, but we don't yet, OK? There's a lot of stuff on the multiple-demand regions, and they're interesting. But I can't resist one little thing. All right, I can't. I have no self-control because I'm not-- I don't have enough multiple-demand activity right now, so I'm going to have to just tell you these things because they're cool. OK, so a guy named John Duncan has spent the last 15 years arguing that the multiple-demand regions first are really truly multiple demand. He's tested lots and lots of different tasks, and they're very, very domain general. But second, he thinks they're implicated in fluid intelligence. Fluid intelligence differs from crystallized intelligence. Crystallized intelligence is stuff like your vocabulary, just stuff you've learned and cached away, and facts you've stored, and abilities you've-- specific abilities you've stored. Fluid intelligence you measure with stuff like Raven's matrices, where nothing you know is going to help you do it. 
You just have to be smart and see some abstract pattern or something like that. And so Duncan thinks that these regions are related to fluid intelligence. And one of his measures of that is if you find people with brain damage-- he had a big set of around 80 people who had brain damage who he'd been studying in all different parts of the brain. And what he found is if you have brain damage in those regions, your IQ goes down as a result of the damage in proportion to the amount of cortical volume destroyed by the damage. If you have damage anywhere else in the brain, your IQ is unaffected. You may become paralyzed, or aphasic, or prosopagnosic, or akinetopsic. You may have any of these very specific deficits according to where it lands, but it won't affect your IQ. And so the picture here is that in addition to all these special-purpose processors that this course has been focusing on, we have this thing that's kind of like the brain's CPU or something like that. And it seems to live in approximately those regions, and it seems to under-- it seems to be essential for fluid intelligence, OK? So I'm skipping over lots of literature just to heighten that this is a particularly interesting set of regions here, OK? Yeah. AUDIENCE: Do people study novelty? NANCY KANWISHER: Yeah. AUDIENCE: The regions that specialize in novelty. NANCY KANWISHER: These guys will be interested in novelty. These guys will be interested in novelty but not only. You can do the same boring but difficult task on, and on, and on, and they'll keep going. OK. The reason I went on that sidebar is that you can identify these regions, not just by scanning people while they're doing difficult tasks but by sticking a seed in any of those regions and getting the others, OK? That's the task-positive network pretty much, OK? So we're getting this convergence between sets of brain regions that we find with a task contrast and sets of brain regions we find by finding what's correlated with what. 
And the bigger picture of this whole thing is that I've been focusing on individual regions and what they do, and the gist of this whole resting functional-correlation literature is a very relevant level of organization of the brain is not just an individual cortical region but a set of cortical regions that seem to be in cahoots. And, again, we don't know what that means exactly, but they're correlated at rest, and they have something to do with each other, OK? So we're finding this higher-level organization, and the multiple-demand system is part of it. OK, so how are we going to look at this? So there's a bunch of these. I've talked only about the default mode network and-- it's another name for the same thing-- executive control. Don't worry about it. You think of that as multiple demand. It doesn't matter. But there's a bunch of them that you can find by sticking seeds in different places. And so yes, I just said the big idea here is that networks are an interesting level-- an interesting kind of unit in thinking about brain organization-- bigger than an individual region. It's a set of regions that have something in common, OK? But I've sort of backed into this in an awkward way of saying, OK, here are things that are correlated with each other, and here's what we know about the same regions from previous task analysis. Most of the literature on resting functional correlation just looks at correlations and doesn't try to put it together with what we know about those regions, and that just seems deeply weird to me. So for years, I ignored this whole thing because it's like, I don't know what these resting correlations are. And if I don't know what they are, I'm not going to work on them. And then Idan Blank came along. And when he was a first-year grad student, Ev Fedorenko said, hey, we have all these resting-functional data. Let's have Idan, for his rotation, for a month, analyze the resting-functional data.
I said, resting functional-- we don't know what the hell it means. Let's not bother. She's like, get over yourself. Let's let him play with it. Well, thank God she's not as stuck in her ways as I am because Idan spent just a month playing with some of our data, and what he found blew me away. So here's what he did. He said, OK, let's start with actually identified regions of brain where we know something about what they do, like the language system and the multiple-demand system. OK? Let's identify those regions in each subject individually, and then let's scan subjects at rest, OK? First, you scan subjects with sentences versus nonwords. You find the language regions. Then you scan them with a difficult-versus-easy spatial working-memory task. You find the multiple-demand regions. Then you scan them at rest, and you get the average timecourse from each of those regions at rest. These are fake data, just to give you the gist. Makes sense? Are you with me now? Now you can ask, OK, which of these things are correlated with each other at rest? And this is now a more interesting thing to do because we're asking this principled question of regions we know something about rather than random seeds in some random location, right? OK, so now what we do is we examine those correlations. And we just ask, for example, how correlated are those timecourses of two different language regions or those two different parts of the multiple-demand system? And how strong are the correlations between the systems, some little piece of the language system and some little piece of the multiple-demand system? Makes sense? So that's a cool question to ask, and that's what Idan did. And here's what he found. Let me first orient you. So here are lots and lots of regions of interest that were identified functionally-- a whole bunch of language regions up here, a whole bunch of multiple-demand regions down here. The details don't matter, OK? 
But what I'm going to show you is in each cell, we're going to have a correlation between a given-- so this would be a given part of the cell over here-- given part of the multiple demand system. Sorry, a given part of the multiple-demand system and some other part of the multiple-demand system, or over here, a cell would be some part of the language system and some part of the multiple-demand system, OK? So when you do that, here's what you see. Here are the correlations at rest between all of these pairs of conditions. And if you squint a little-- you don't even have to squint much-- but the black ones are the ones that are not significantly correlated at all. Blue means a negative correlation, and hot colors mean a positive correlation. And so what you see is here are all the language regions. These are the right-hemisphere language regions, which barely even count. They're just there for the hell of it. This is really the core language regions, and you can see they're all correlated with each other, even ones that are really far apart-- Broca's area and Wernicke's area-- 10, 12 centimeters apart-- strongly correlated at rest, OK? And if you look at different parts of the multiple-demand system, they're all strongly correlated at rest, even regions that are far apart-- something way up in the frontal lobe, something way back in the parietal lobe-- strongly correlated at rest. And so yeah. If you zoom in here-- so what this is really-- does everybody see that this is revealing a lot of structure? Yeah, question? AUDIENCE: So the diagonals would be, like, self correlation, right? NANCY KANWISHER: Yeah, that's why it's black. Actually, what is it? Oh, you know what? It's split in half. It's split in half. It's actually a better way to do it. You take your data, and you have two different halves of the data. And so it gives you a baseline for the-- no, that doesn't make any sense, does it? Never mind. AUDIENCE: Yeah, [INAUDIBLE]. AUDIENCE: [INAUDIBLE]. 
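Stripped of the neuroscience, the matrix being described is just an ROI-by-ROI correlation matrix over resting timecourses. Here's a minimal sketch with fake data; the ROI names and the two shared signals are invented, built so that within-system cells come out high and between-system cells near zero, as in the real figure.

```python
import numpy as np

def roi_correlation_matrix(timecourses):
    """timecourses: dict mapping ROI name -> 1-D resting timecourse.
    Returns (names, matrix) where matrix[i, j] is the Pearson correlation
    between ROI i and ROI j at rest."""
    names = list(timecourses)
    stacked = np.vstack([timecourses[n] for n in names])
    return names, np.corrcoef(stacked)

# Fake data in the spirit of the lecture: two language ROIs share one slow
# fluctuation, two multiple-demand ROIs share a different one
rng = np.random.default_rng(1)
lang_signal = rng.standard_normal(300)
md_signal = rng.standard_normal(300)
noise = lambda: 0.6 * rng.standard_normal(300)

names, R = roi_correlation_matrix({
    "lang_IFG": lang_signal + noise(),    # e.g., a frontal language ROI
    "lang_pTemp": lang_signal + noise(),  # e.g., a temporal language ROI
    "MD_parietal": md_signal + noise(),
    "MD_frontal": md_signal + noise(),
})
print(np.round(R, 2))  # within-system cells high, between-system near zero
```

Thresholding and color-coding R (black for non-significant, hot colors positive, blue negative) gives exactly the kind of block-structured matrix shown on the slide.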
AUDIENCE: [INAUDIBLE]. NANCY KANWISHER: It is maroon. AUDIENCE: If you look at the spectrum on the bottom, [INAUDIBLE]. AUDIENCE: Yeah. NANCY KANWISHER: All right, I'm going to have to solve this offline because I'm now confused, unless Anya can figure it out right now and bail us out. AUDIENCE: [INAUDIBLE] NANCY KANWISHER: Yeah, but as they're pointing out, it's not black. OK. AUDIENCE: [INAUDIBLE] [INTERPOSING VOICES] AUDIENCE: I'm saying it should be high. I was just wondering [INAUDIBLE].. NANCY KANWISHER: Oh, it's a correlation of 1. Is that what it is? AUDIENCE: Yeah. NANCY KANWISHER: OK, because it's correlated with itself. All right. OK. Thank you. Anyway, does everybody get the gist that all of these different pieces of the multiple-demand system that we identified individually-- they're all correlated with each other at rest? All these different pieces of the language system are correlated with each other at rest. And there's no correlation at all between any part of the language system and any part of the multiple-demand system at rest. All of these-- OK, maybe a couple. These cells are all either black for not significant or inversely correlated. That's the cool colors. So do you see how this gives us a totally cool way aside from just the functional localizers we ran to identify these regions to show us that these things are functioning as a system, right? It's not just that Broca's area is a cool little thing that does a piece of language and some bit of the temporal lobe is a cool thing that does some piece of language. But those guys are part of a broader system, and these resting functional correlations are revealing this broader system and the integrity of the parts within it, as well as the distinction between those parts and parts of another system, or network. Does everybody get that idea? Good. That's a big idea for this lecture. I think what I'll do-- so you can-- I'll skip that. 
So this is basically the correlation between all of the cells within the language system, all of the cells within the multiple-demand system, and any pair of cells between systems here. So that's just averaging over the matrix I showed you before. And so this is a cool way to ask about broader systems in the brain. And I was going to show you some data published just a month ago that asked this question not just about the language and multiple-demand systems but also, about the theory-of-mind network, which is basically really similar to the default mode network. But the theory-of-mind network we can identify that we talked about last time. And you can go think offline. Actually, I'll take a suggestion. What do you think? Should the theory-of-mind network be correlated with the language system, with the multiple-demand system, both, or neither? AUDIENCE: Neither. NANCY KANWISHER: Neither? Why? AUDIENCE: This is something different. NANCY KANWISHER: OK, that's a totally reasonable answer, and that's largely true but not 100% true. What else might you think? It is a different system, and it does function quite independently but not perfectly. Yeah. AUDIENCE: Because it's [INAUDIBLE] analyzing [INAUDIBLE] NANCY KANWISHER: All right. That's a lovely speculation and an intelligent one, and it's half true. I'll just skip to the data. So in this very recently published paper, Alex Paunov, and Idan Blank, and Ev Fedorenko looked at the language system, the theory-of-mind network, which is not just the TPJ but these other regions that I mentioned briefly that you also get in the contrast of the false-belief test versus the false-photo task and the multiple-demand network. Same deal-- identify each of those regions in each subject individually, then scan the subject at rest and see what's correlated with what. And here's the answer. You see, again, replicate the language system, especially in the left hemisphere. The theory of mind system is all a system. 
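The within-system versus between-system averages described here are a simple reduction of that matrix: average the off-diagonal cells whose two ROIs share a system label, then the cells whose ROIs don't. A minimal sketch, with made-up correlation values:

```python
import numpy as np

def system_summary(R, labels):
    """Average a symmetric ROI x ROI correlation matrix R into within-system
    and between-system means, given one system label per ROI."""
    labels = np.asarray(labels)
    n = len(labels)
    iu = np.triu_indices(n, k=1)              # unique off-diagonal pairs
    same = labels[iu[0]] == labels[iu[1]]
    return {
        "within": R[iu][same].mean(),
        "between": R[iu][~same].mean(),
    }

# Example matrix: ROIs 0-1 are "lang", ROIs 2-3 are "MD"
R = np.array([
    [1.0, 0.7, 0.1, 0.0],
    [0.7, 1.0, 0.0, 0.1],
    [0.1, 0.0, 1.0, 0.6],
    [0.0, 0.1, 0.6, 1.0],
])
print(system_summary(R, ["lang", "lang", "MD", "MD"]))
# within pairs (0,1) and (2,3): mean of 0.7 and 0.6 -> 0.65
# between pairs: mean of 0.1, 0.0, 0.0, 0.1 -> 0.05
```

Adding a third label (say, a theory-of-mind system) to the same function is all it takes to reproduce the three-system comparison in the Paunov, Blank, and Fedorenko analysis described next.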
And the multiple-demand system is a system, separate system. But if you look at the cell where you have theory of mind and language, it's slightly above chance, not theory of mind and multiple demand but theory of mind and language, probably for just the reason you said, the whole essence of language, even though it's a different thing, than thinking about the contents of someone else's thoughts. They are so enmeshed in each other. The reason we have language is to take our thoughts and put them in your head and take your thoughts and put them in our head. And so it makes sense that those things are a little bit correlated. AUDIENCE: [INAUDIBLE] language? NANCY KANWISHER: Yeah. Yeah. But neither of them is correlated with multiple-demand. It's 1:26, so-- or-- wait. Am I reading the wrong-- yeah-- oh, 12:28. Sorry. There's a number here. That's how long I've been talking. So 12:28-- so if you need to go, that's fine, but I'm happy to answer questions. So go ahead. AUDIENCE: I was just going to say, there was a review paper about [INAUDIBLE] talked about how there's a behavioral connection between theory of mind and, like, when children started to learn [INAUDIBLE].. NANCY KANWISHER: Lots of links, especially developmentally, yeah. Yeah. Did you have a-- AUDIENCE: By an extension of that, [INAUDIBLE] should be correlated to the theory of mind [INAUDIBLE]? NANCY KANWISHER: Yes, wouldn't you think? Not really. AUDIENCE: [INAUDIBLE] NANCY KANWISHER: The FFA is irritating. It's not strongly correlated with the things it ought to be correlated with.
MIT_913_The_Human_Brain_Spring_2019 | 20_Theory_of_Mind_Mentalizing.txt | NANCY KANWISHER: OK, so we are now going to do the third part in our trilogy of uniquely human functions. Starting a few weeks ago, we moved from things that are shared with animals, like navigation and basic visual perception -- very similar between humans, monkeys, and other animals -- to functions that are uniquely human, and that includes music and language. And today we're going to talk about the coolest and most quintessentially human mental function there is, and that is thinking about each other's thoughts. So to get started, let me remind you of what should be pretty obvious: we human beings are profoundly social in many, many different respects. If you recount what happened in your day to someone else, pretty much all of the elements of what you're recounting are interactions with other people. These are the things we care about; these are the basic structure of our lives. Other people are the source of our greatest happiness. If you ask people at the end of life what matters most, they'll say, I shouldn't have wasted all that time working so hard; my job didn't matter; I didn't realize it's other people that are the only thing that matters. Interactions with other people, or the lack thereof, are also the source of one of the deepest kinds of human suffering, whether it's an everyday thing like a breakup with a lover, or the loss of a loved one or a family member. And depriving people of social interaction is society's strongest form of punishment: solitary confinement. Our failure to understand other people is devastating in the case of autism, where people struggle to understand what other people are all about. And in terms of our amazing abilities as human beings, most of what we know we learn from other people. Sometimes we figure stuff out on our own, but if you take an inventory of all the stuff you know, you read about it or somebody told it to you: you learned it from other people. Consequently, some of the greatest feats of humanity, from arts to science to everything else, are products of people working together. Further, many people have hypothesized that social cognition -- our ability to understand and interact with other people -- is one of the strongest drivers of the evolution of the human brain. Many have argued that the hardest problem we solve on a daily basis is understanding other people. It's also a very important problem, so that's a powerful natural-selection force, and it has shaped the structure of the human brain. Not everybody agrees with the extreme view that it's the primary driver, but it's an interesting hypothesis, and it surely played some role. Social cognition is also just a very large percent of human cognition, in terms of minutes of every day. If you tally up different forms of social cognition, this is one we're doing right now -- we're about to engage in an hour and twenty minutes of it right here -- and probably when you leave this room you'll engage in other kinds of social cognition. So it's a large percent of what we do all day, every day, and it's also a large percent of what the cortex does: roughly that stuff, and other bits, are engaged in different aspects of social cognition. So what exactly is social cognition? To get a sense of what is entailed in social cognition, which is a very complex, multi-part, multifaceted enterprise, I'm going to show you a short video of an interaction between a bunch of 18-month-old infants and an experimenter. What I want you to do is just watch it -- it's very charming -- but also think about what these kids must know to be able to do what you're about to see. Can you see the
screen there? OK. That little kid in the corner doesn't know this guy; he's just been brought in for the experiment, and he's just watching this weirdo doing this thing. Note the eye contact -- he's never met that guy before. Here's another case: he's dropped a clothespin. What is the matter with this guy? [Laughter] Looking more and more suspicious. OK, so what did you notice? What are those little kids, 18 months-- oh, there's another one. This is cute, but we don't need it; let's go on. What did you notice about what those little kids were able to do? What is entailed in being able to do that? What have they figured out? Lots of things. Yeah? AUDIENCE: They understand intention. NANCY KANWISHER: Yeah. All they're doing is watching this guy do these clumsy actions -- he's not saying anything, he's just grunting -- and they're figuring out what he intends. Absolutely. What else? AUDIENCE: Sort of being able to interpret the sounds, the grunts that the guy in the footage was making, especially, like, the one completion-like sound. NANCY KANWISHER: Uh-huh, yeah. One could ask which of those things is more important, the actions or the sounds. I don't think they took it apart here, but it's an interesting question. Yep -- so they understand some kind of vocal communication, not language, but some kind of intent; that's definitely part of it. Yeah? AUDIENCE: They understand how to be helpful without, like, getting in people's way. NANCY KANWISHER: Yeah, they know how to be helpful. But not just that they know how -- they have a will to be helpful. Remember, these kids never met that stranger before this video started, or maybe a minute or two before. So it's fascinating not just what they're able to figure out but what they're motivated to do. That's a quintessentially human thing: to spontaneously help a stranger who you've never met before and are not related to. All right. But focusing more on the cognitive abilities: to figure out another agent's actions, you first need to figure out what that agent is doing, and that's externally observable. You can use your perceptual system, construed broadly, to figure out what this person is doing in some general sense, and that's important and subtle and multifaceted, but it's nowhere near enough. Really, what you want to know is why they are doing that thing, and that's not directly observable. You can't see the why of a person's actions; you have to infer it. In fact, you have to infer a bunch of hidden mental states, and those are much more abstract than just, you know, "I'm picking up the phone." That's just an action; you can see it. But if you try to think about why I am picking up the phone, now we're in a whole different ballpark. So to figure out the agent's hidden mental states, you need to figure out what that agent can see or hear. And here it's really important not to lose a level: we're not talking about what we can see. In understanding another agent, we need to understand what that agent can see about someone else. So we're getting kind of meta here, with multiple levels. We also need to understand the other agent's desires and goals: what they want, or what they're intending. So how might we figure these things out? Sometimes there are simple cues that might suffice. You might make an inference like: if a person is reaching for an object, they want that object. There, the externally observable thing might give you a pretty direct inference about the internal, hidden state of what the agent wants. But a lot of times that's not going to work -- it works in some cases, but only a few. And further, the percepts and the goals that we've mentioned here aren't enough. They'll work in some cases, but humans do much more than that. So let's consider this case. Here's an agent who's doing some action, and we can observe their body motions, and what we
want to understand is how this other agent watching them is going to infer things about this guy. So we're interested in what's going on in the mind of this guy as he watches that guy. Now let's consider the case this guy is trying to understand. This is Romeo here: why did Romeo reach for the bottle? Well, to understand why Romeo reached for the bottle, you need to perceive and infer a whole bunch of things. You need to see the hand reaching for the bottle to know what happened. But then to really understand the case of Romeo, you need to understand that his intention is to drink the liquid in the bottle -- that's why he reached for it. You can't see the intention to drink the liquid; you can only see the reaching. Then you further need to understand why he is intending to drink the liquid: because he believes the liquid is poison (I don't know why my pop-ups are all messing up here), he believes that Juliet is dead, and he wants to die. All of that is part of what is entailed in understanding why Romeo reached for the bottle. And crucially, the stuff in red -- the beliefs and desires -- is not directly observable. These states are highly abstract, and they're crucial: they are the best way to explain and predict behavior. So when we need to understand other people, and what they're doing and why, we're in this abstract space of beliefs and desires. We can't see them directly; we have to infer them indirectly, and they're essential, because that's what we need to do to understand other people. All right. So we've talked about inferring what an agent can perceive, and inferring the agent's desires and goals, but as I just pointed out, we also need to know the agent's beliefs -- what they think. All of this -- inferring the hidden mental states of agents -- is what we mean by mentalizing: inferring other people's mental states. And the cases we're talking about here are inferring the percepts of another agent, inferring the desires of another agent, and inferring the beliefs of another agent. All this abstract stuff. No computer system can do this, not even close. We can still say that -- we can't say that about object recognition anymore, but we have this little precious reserve where we can still say it. And no animal can do these things, except in very restricted cases. There's an active, ongoing research program in a bunch of labs testing chimps on subtle tasks, to figure out when they can infer things about the mind of another chimp or person, and there are a few restricted cases where they can make some inferences about the mental contents of other agents. But the current dominant view is that those inferences are restricted to very particular situations, usually involving competition over food, and they don't generalize in any way. So in this whole enterprise, specific cues -- like inferring that reaching for X means wanting X -- will help in some cases but will only get us so far. We do much more than that, and the question is how. All right. So inferring mental states to understand other people involves inferring their percepts, their beliefs, and their desires. How do we do this? Well, a first question that we ask in this class -- and it's a sensible thing to do -- is: when we're trying to understand this whole space of how people think about another agent's percepts, beliefs, and desires, maybe that's just part of generic cognition. Who says that's a separate domain of cognition that we should study separately? Maybe we should just study thinking in general -- thinking about objects, thinking about physics, thinking about people -- maybe it's all the same thing. So let's start by considering: is it really a separate thing from the rest of cognition? Well, there's a classic behavioral paradigm in this field that's provided some
evidence on this. You may have heard about this in other classes, but we'll talk about it because it's so fundamental to all of the work on thinking about other people's thoughts: the false-belief paradigm. It's a way of testing beliefs, and the reason you test false beliefs rather than true beliefs is that if you ask about another agent's true beliefs, your prediction about what they're going to do next is confounded with what is true in the world. If you want to really tap into what's going on in their mind, as opposed to what would be true in the world, you have to use a false belief to pull those apart. So the classic way this is done, the Sally-Anne task, goes like this. There are many variants of it, but this is the classic original version that appears in hundreds of papers. Here are Sally and Anne, and Sally puts her ball in the basket. You show this to little kids -- or to animals, in versions of it; we're doing the little-kid version. You say: here's Sally; she puts her ball in the basket. Then Sally leaves the room, and Anne comes in and moves Sally's ball from the basket to the box, and Anne closes the top of the box. Then the question is: when Sally comes back, where will she look for her ball? In the basket, right. If you pose this very simple question to three-year-olds, they'll say in the box. If you pose it to five-year-olds, they'll say in the basket. Adults say in the basket, as you guys say -- though you have to think for a second, right? If you just blurt out the first thing that comes to mind, you'll behave like a three-year-old and say in the box, but if you think for half a second you realize, duh, she doesn't know it got moved to the box; she'll look where she thinks it is, in the basket. Everybody with the program here? OK, so that's a very simple task; it's the basic false-belief paradigm. And to make it more vivid, I want to show you a video of Rebecca Saxe giving this task to a bunch of kids and talking about it. It's a delightful video, and it'll give you a really vivid sense of how smart a three-year-old is and how much they understand, and yet how fundamentally and totally they fail versions of this task. Come on... here we go. REBECCA SAXE: The first thing I'm going to show you is a change between age three and five, as kids learn to understand that somebody else can have beliefs that are different from their own. So I'm going to show you a five-year-old who's getting a standard kind of puzzle that we call the false-belief test. This is the first pirate; his name is Ivan. And you know what pirates really like? Pirates really like cheese sandwiches. CHILD: Cheese? I love cheese! REBECCA SAXE: Yeah. So Ivan has his cheese sandwich, and he says, I really love cheese sandwiches. And Ivan puts his sandwich over here, on top of the pirate chest. And Ivan says, you know what? I need a drink with my lunch. And so Ivan goes to get a drink, and while Ivan is away, the wind comes and blows the sandwich down onto the grass. And now here comes the other pirate. This pirate is called Joshua, and Joshua also really loves cheese sandwiches. So Joshua has a cheese sandwich, and he says, yum yum yum, I love cheese sandwiches. And he puts his cheese sandwich over here, on top of the pirate chest. CHILD: So that one is his, and that one's Joshua's. REBECCA SAXE: That's right. CHILD: And then his went down on the ground. REBECCA SAXE: That's exactly right. CHILD: So now he won't know which one is his. NANCY KANWISHER: Spontaneous -- we don't even have to ask. REBECCA SAXE: Ivan comes back and he says, I want my cheese sandwich. So which one do you think Ivan's going to take? CHILD: I think he's going to take that one. REBECCA SAXE: Yeah? You think he's going to take that one? All right, let's see. Oh yeah, you were right -- he took that one. So that's a five-year-old who clearly understands that other people can have false beliefs, and what the consequences are for their actions. Now I'm going to show you a three-year-old who got the same puzzle. And Ivan says, I want my cheese sandwich. Which sandwich is he going to take? CHILD: He's going to take that one. REBECCA SAXE: Let's see what happens; let's see what he does. Here comes Ivan, and he says, I want my cheese sandwich, and he takes this one. Uh-oh. Why'd he take that one? [MUSIC] So the three-year-old does two things differently. First, he predicts Ivan will take the sandwich that's really his. And second, when he sees Ivan taking the sandwich where he left his -- where we would say he's taking that one because he thinks it's his -- the three-year-old comes up with another explanation: he's not taking his own sandwich because he doesn't want it, because now it's dirty, on the ground; that's why he's taking the other sandwich. Now, of course, development doesn't end at five, and we can see the continuation of this process of learning to think about other people's thoughts by upping the ante and asking children not for an action prediction but for a moral judgment. So first I'm going to show you the three-year-old again. Was Ivan being mean and naughty for taking Joshua's sandwich? CHILD: Yeah. REBECCA SAXE: Should Ivan get in trouble for taking Joshua's sandwich? CHILD: Yeah. REBECCA SAXE: So it's maybe not surprising he thinks it was mean of Ivan to take Joshua's sandwich, since he thinks Ivan only took Joshua's sandwich to avoid having to eat his own dirty sandwich. But now I'm going to show you the five-year-old. Remember, the five-year-old completely understood why Ivan took Joshua's sandwich. Was Ivan being mean and naughty for taking Joshua's sandwich? CHILD: [MUSIC] Yeah. REBECCA SAXE: And so it's not until age seven that we get what looks more like an adult response. Should Ivan get in trouble for taking Joshua's sandwich? CHILD: No, because the wind should get in trouble for switching the sandwiches. NANCY KANWISHER: OK, so everybody got the idea of the false-belief task? Passed in the four- and five-year-olds, not in the three-year-old. And the three-year-old not only fails to get the answer right, but when asked why it happened, he comes up with a totally different account. OK.
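The logic of the false-belief task can be captured in a toy sketch: the world state updates with every event, but Sally's belief updates only with events she observes. This little model is invented here for illustration (nothing from the lecture itself); it just makes the dissociation between reality and belief explicit.

```python
# Hypothetical toy model of the Sally-Anne task. The representation and
# function names are made up for illustration.

def run_sally_anne():
    world = {"ball": "basket"}          # Sally puts her ball in the basket
    sally_belief = {"ball": "basket"}   # Sally observed this herself

    # Sally leaves the room; Anne moves the ball to the box. Because Sally
    # does not observe this event, only the world state changes -- her
    # belief is left untouched.
    sally_present = False
    world["ball"] = "box"
    if sally_present:
        sally_belief["ball"] = world["ball"]

    # A correct mentalizer predicts Sally searches where she BELIEVES the
    # ball is; the "three-year-old" error is to report the world state.
    return world["ball"], sally_belief["ball"]

reality, search_location = run_sally_anne()
```

The point of the separation is that answering "where will Sally look?" requires reading out `sally_belief`, a representation that has come apart from `world`, which is exactly what three-year-olds (and, much later, kids with autism) fail to do.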
And you can see that three-year-old is not a dummy; three-year-olds are smart and can do all kinds of things, but they don't get this particular thing. All right, what about kids with autism? Well, kids with autism either fail this task altogether or they pass it much later than neurotypical kids. But now we've got to figure out why. Why would a relatively high-functioning kid with autism -- say a seven-year-old who's got language and who seems to understand the question -- fail this task? Why would anybody fail this task? There are a couple of things going on here. The essential element of the false-belief task that we're interested in is that it requires you to attribute thoughts to another agent; that's what this task was made to tap. But you might fail it not just for failing to attribute thoughts correctly; you might fail it for other reasons. You might fail it because this task involves a weird situation where there's the other agent's belief and there's reality, and they're different, and that's kind of confusing -- it's like representing X and not-X at the same time. Maybe that's just a generically hard cognitive thing to do. And the third possibility is that the true state of the world is so salient and dominant that it may be hard to inhibit that true reality in order to infer the belief. A belief is a less visible, less salient thing than the true state of the world, and so maybe an inability to inhibit dominant, salient representations would interfere with your ability to do this task. So people with autism -- or, in fact, three-year-olds -- might fail for any of these reasons. How are we going to figure out whether people with autism are actually failing this task because of a difficulty in attributing thoughts? Well, the way to figure that out is to come up with a really, really clever control task. This task was invented by Debbie Zaitchik -- she was my office mate in graduate school, and when she came up with this task I said, that is really brilliant. We want to figure out whether it's the belief per se -- the difficulty in attributing a thought or belief to another agent -- that's the crux of the problem. In Zaitchik's false-photo task, the idea is to build a logically isomorphic task that's about a physical representation, not a mental one. In the belief version, the kid watches Sally and Anne as before. In the photo version, you're not asking about another person's beliefs; you're using a photograph. You show Sally putting her ball in the basket, and then a Polaroid camera -- this was back when Polaroid photographs were widely known to kids; we didn't have cell phones then -- takes a picture of the ball in the basket. Then Anne moves the ball from the basket to the box, and you ask the kid: in the photograph, where is the ball? You're not asking about a mental belief; you're asking about a physical representation, in the photograph. So do you see how it's really a logically isomorphic task? In both cases you're asking about a representation that differs from reality, but one's a mental representation -- that's where the kids with autism fail -- and the other is a physical representation. Kids with ASD fail the false-belief task, not the false-photo task, and that rules out the other accounts, because in both cases there's a representation that differs from reality -- you have the X and not-X challenge in both -- and in both cases the true state is more dominant than the representational state, in the belief or the photo. And yet they have a problem with the belief version, not the photograph version, suggesting that this is the correct account of why they fail. Does that make sense? Was there a question? Yeah. OK, so all of that is consistent with the idea that attributing thoughts to another agent is a special, distinctive thing. We have evidence that attributing thoughts is a distinct domain of cognition -- it's not just part of your generic ability to think about anything -- and we see that from the fact that typical children have this very systematic developmental time course, where they get this at age five and don't get it at age three, and from the selective deficit of this ability in kids with autism, who pass the false-belief task way later than the false-photograph task. Was there a question here? AUDIENCE: So the kids are never asked, what does Anne see? NANCY KANWISHER: That question then comes up in the false-belief task and the false-photograph task. I'm sure people have done versions of that, but tell me what you're getting at. AUDIENCE: If I were to do, like, a hypothetical version, I would assume that the final question remains the same, so I would ask the kids what Anne perceives, given the change in the stimulus, and then gauge the response [INAUDIBLE]. NANCY KANWISHER: You're trying to get a tighter control for the false-photo task, is that it? Instead of "where will she look for the ball?" Yeah. But it's not really about what she perceives; it's about where she thinks the ball is. The false-belief tasks have been done every which way, from "where does Sally think the ball is?" to "where will she look for the ball?", and you get the same answer in all of those. So I think "where is it in the photograph?" is really the parallel of "where does she think it is?" That's
the parallel. AUDIENCE: Yeah, because she's not still perceiving it at that point. NANCY KANWISHER: Yeah. OK, so that's behavioral evidence to suggest there's really something different about inferring another person's thoughts and beliefs compared with the rest of cognition: the distinctive developmental time course, and this kind of selective deficit in autism. So what can we learn from functional MRI? Is there a special part of the brain for making these inferences about other agents' thoughts and beliefs? Rebecca and I did this experiment a hundred years ago, where we scanned people while they were doing simple verbal tasks. We wrote false-belief stories and generalized false-photo stories, and we scanned people while they lay in the scanner and just read these simple descriptions and answered basic questions. You can try this right now. Here's a false-belief story. You read: "Susie parked her sports car in the driveway. In the middle of the night, Nathan moved her car into the garage to make room for his minivan. Susie woke up early in the morning. Susie expects to see in the driveway: a sports car or a minivan?" A sports car, right -- exactly what she thinks is there, not what is there. The control stories -- we wrote many, many of these -- went like: "A volcano erupted on this Caribbean island three months ago. Barren lava rock is all that remains. Satellite photos show the island as it was before the eruption. In the photos, the island is covered in: rock or vegetation?" Right. Here again, just like Debbie Zaitchik's ASD control, you're asking about a representation -- in the satellite photo -- that's no longer true of reality, and you have to distinguish the two. So we scanned people just doing these tasks. And if you look in a whole-brain group analysis -- which I was much dissing before, but it's a good first step -- you see, in a slice through the brain like this, a bunch of hot spots that respond more when you do these tasks than those tasks. There's a bunch of regions deep down in the middle of the prefrontal cortex that you can see in a slice like that. Here is a region called MPFC, for medial prefrontal cortex. It does all kinds of interesting things, and it'll get mostly short shrift in this lecture; it activates in this task, for reasons you'll see in a minute. But here's another region: the right TPJ. TPJ stands for temporoparietal junction. In me, if my temporal lobe is going down here and my parietal lobe is here, frontal lobe there, the temporoparietal junction is somewhere right around there. Everybody oriented? If we look at that region, shown symbolically here, here's the time course of response -- this is time, many seconds, as subjects read and answer -- for the false-belief task, and here's what happens when they read and answer the false-photo task. That just shows you the same thing as the activation map; you can see it evolve over time. And in the MPFC region (oh god, why is this not happening... there it is) you see a similar thing. You get a response below baseline here; we'll talk about what it means to have a response below baseline in some conditions in the next lecture. It's actually not all that mysterious -- baseline, remember, is usually lying in the scanner staring at a dot, going dum-de-dum or whatever you do when you're staring at a dot in the scanner. The key point for now is just that you also see a higher response to the belief than the photo case. OK, so what's going on in these regions -- this higher response to the belief than the photo case? Even though I was touting what a brilliant control condition the false-photo task is -- and it is -- it's still open to a bunch of different possibilities. Those activations could reflect just thinking about a person: after all, our false-belief stories all involve people, and our false-photo stories did not. So maybe this isn't about beliefs per se; maybe it's about thinking about people in general, any aspect of people. Or it could be any mental inference we make about the internal, invisible states of a person -- anything that might be going on in their head. Or it might be specifically attributing thoughts and desires to the person. So how are we going to unconfound these with a new experiment and some new conditions? Here's a new condition that refers just to a person, not to their internal invisible states or their thoughts or desires. (It's showing something different on the screen from what's on my slide here -- I have no idea why -- but anyway.) In this "external" condition, you lie in the scanner and read stuff like: "Andrew just had a growth spurt, so he was gangly and rather awkward. Like most teenagers, he had bad skin and bad taste in clothes. He wore mostly baggy jeans and flannel shirts." So, you know, it's not riveting, but it's not boring either; you read a bunch of these, and they're interesting enough. But no mental states there -- just outward appearance. The next condition refers directly to the person's thoughts: "Nicky knew that his sister's flight from San Francisco was delayed ten hours. Only one flight was delayed so much that night, so when he got to the airport, he knew which flight was hers." Not even that interesting, but it's talking about what he inferred, what he knew: beliefs. But there's a third condition that refers to an internal state that is not a thought or a belief -- a visceral state: Sheila skipped breakfast because she was late for the train to her mother's. By the time she got off the train, she was starving; her stomach
was rumbling and she could smell food everywhere okay so this case is interesting it's vivid it refers to a mental state but not a thought it's a physical state hunger okay i mean it's a it's a mental representation of a bodily sensation okay everybody get the distinction between internal thoughts and and and feelings sensations okay so ah this contrast is consistent with any of these hypotheses and we're going to test those hypotheses with these three conditions okay so here are the three hypotheses we just described here and here are the three conditions i just gave and i want you guys to tell me what we're going to predict that we will see in the right tpj according to each hypothesis which namely which of these conditions will produce a high response and which will produce a low response okay so let's start with this case on the hypothesis that the right tpj responds whenever you're reading about a person what are we going to find which of these conditions is going to produce a height response all of them yeah absolutely so that's a prediction here all of them okay what about if it's interested in internal internal bodily sensations or or actually in in any internal mental states the last two absolutely both the visceral case and the thoughts case sorry that cell in that cell okay and what about if it's specifically about attributing thoughts to another agent just the last one right okay very good so what do we see so first of all here's just the localizer task always nice to look at your localizer and make sure it worked out so this is a response of the right tpj to the belief and photo condition just like i showed you before higher when you do the belief tasks and the photo task okay just a reality check key question what is the response of the right tpj in that main experiment with the three conditions we just described and here's what it does it's high only in the thought condition only when you're when you're reading about thinking about another person's 
thoughts and that's significantly higher than either the external properties or the um this uh or the visceral states and those two even though it looks like there's a little bit of a trend that visceral is higher than external that's not significant yeah um on the previous slide is it like thoughts and desires so like why can't like i was just confused about like the example for the bristol case for smelling food everywhere like couldn't that fall under it that's a good point yes it could yes you could infer a desire in that case absolutely yeah that's right um so yeah that's a good point it's intended to just identify the vivid visceral state uh and it's probably hard to do that with it without invoking some kind of desires here um yeah i would i would consider it a con a confound for that particular example um i'm guessing it was not so for the others in fact um it sort of couldn't have been given the way this came out right but good point okay so um okay so and if you do a whole group analysis of the whole brain and you say what bits do you get in a contrast of reasoning about another person's thoughts versus their external appearance or their visceral states uh their bodily sensations you get the right tbj again okay so everybody got this to suggest that the right tpj is extremely specific uh specifically interested in inferring another person's thoughts not even just their bodily sensation so it's not all mental states of another person that engage this region i mean i find this quite remarkable because that's so specific you know but before you hear about this you think okay thinking about another person's thirst or hunger or pain is that really different than thinking about their beliefs but oh yes it's different right the tpj does beliefs not thirst and hunger and pain yes yeah so there's a whole literature on this and um uh i think if you okay so first of all we need to like get the level straight when you're just thinking you're not thinking about 
thinking; you're just thinking, right? So you're not engaging this region. When we're talking about thoughts, we're talking about you thinking about another person's thoughts. So the parallel would be you thinking about your own thoughts. I can't tell you exactly what the literature is on this, I'm sure there's a few experiments, but I think what you'd have to do is adopt that kind of meta perspective on yourself. A lot of our thinking is sort of thinking about thinking, but not in a very explicit, separable way. I'm not being very clear on this, but I think if you ask people, for example, when you saw this surprising event, whatever it is, you make something up, what were you thinking? You might engage that region. I realize I'm not totally sure if there's literature on this. Heather, is there? There must be literature on this. AUDIENCE: [INAUDIBLE] you get MPFC for that, [INAUDIBLE] but not just rTPJ, rTPJ and other regions. NANCY KANWISHER: Yeah, so you could get some rTPJ when answering those kinds of questions. OK, so you get something in the vicinity. But the other part of this is, I bet you have to do something quite explicit to do that, because in some sense we're thinking about our own thoughts kind of implicitly a lot of the time. So I'm guessing that it's when you ask about it explicitly. I forget the details of the past work on this. Anyway, good question. I'm sorry I don't have a totally adequate answer yet, and I'm guessing the literature doesn't have a totally adequate answer yet either. OK, all right.

So recall that I mentioned that the contrast between doing the false belief task versus the false photo task gets not just the rTPJ; it also gets this medial frontal region. And that medial frontal region responds more to the belief than the photograph task, like this. What does it do in this split between thinking about other people's thoughts, their visceral states, and their external appearance? It doesn't care. OK, so there's a real division between these different brain regions that are engaged when you do the false belief task. The rTPJ is very specifically engaged when you think about another person's thoughts, not when you think about their bodily sensations. The medial prefrontal region is engaged in all three of these conditions. So what it looks like is that, of these three hypotheses, "anytime you think about another person" is true of the MPFC, whereas the right TPJ responds specifically when you attribute thoughts and desires to another person. OK.

Now, you may have been wondering about the fact that all of these experiments use words. It's kind of not like normal social cognition; you're reading about all this stuff. And if these regions were really doing what we're claiming they're doing, we should be able to find them in other situations where you make inferences about other agents' beliefs. So in more recent experiments, they've been showing Pixar movies to subjects in the scanner. So you can just watch yourself make a few inferences about the agent's beliefs. Imagine you're watching this in the scanner. Let's just do a little bit of it to get the gist. [MUSIC PLAYING] No words, that's the key. OK, so there's a whole little microcosm of mental states here. In other parts of this same six-minute Pixar movie, there are very vivid bodily sensations. There's a porcupine who inflicts real pain with porcupine quills. So there are bodily states, and inferences about bodily states, and inferences about mental states that go on. And by showing this movie to subjects in the scanner, and then labeling which parts of the movie
require the viewer to make inferences about the thoughts and beliefs of the protagonists, and which require them to make inferences about bodily sensations, usually pain, if you do that contrast, you find the same set of regions: the rTPJ and its left hemisphere counterpart, and you find some medial prefrontal stuff, and you find some other stuff. OK, so what does that mean? That means that you don't need words to identify this region. If you induce the same cognitive processes just from watching a wordless movie, you show the same selective activations. And that's cool, because it tells us that this is not about some kind of verbal reasoning. It's really about the deeper cognitive process, whether you do it based on a movie or based on a bunch of sentences. And that's a powerful generalization. We like imaging results where the result generalizes across very different kinds of tasks, and this really strengthens the evidence that this is really what's going on in this region. It also means you can look for this region in kids. I didn't manage to fit that into this lecture, but there's a whole other research enterprise where, in Rebecca's lab, they've been scanning kids watching this Pixar movie and asking how that region develops. And they find that the region actually continues to develop even after age four. And what happens is it doesn't so much get bigger as it gets more selective. In the younger kids, you get activation both for-- well, it's just less selective. Anyway, so this shows generalization. So now we've shown that the rTPJ is selective for thinking about other people's thoughts. We see that with the false belief versus false photo contrast. We see that it's highly specific-- it's not just any thoughts you have about another person. And we see it generalizes to Pixar movies. OK. So let's consider now moral reasoning as a test case for theory of mind. Why moral reasoning? Well,
if you think about it, reasoning about what's a morally acceptable or unacceptable action on the part of another person is all about what the person intended and what they knew. So intent is very fundamental. It's built into the legal system. Think about the difference between murder and manslaughter. They both involve killing another person, but one is with intent and the other is accidental, and our common moral reasoning and our legal system care deeply about that difference. OK, so for example, I'm going to give you a moral reasoning task. Just think about this. Your task is going to be to decide how morally permissible the action described here is-- Grace's action in particular. So Grace and her friend are taking a tour of a chemical plant. And when Grace goes over to the coffee machine to pour some coffee, Grace's friend asks for some sugar in hers. And there's white powder in a container by the coffee. The plot thickens. The white powder is a very toxic substance left behind by a scientist, and it's deadly when ingested in any form. But the container is labeled "sugar," so Grace believes that the white powder by the coffee is sugar left by the kitchen staff. So Grace puts the substance in her friend's coffee, and her friend drinks the coffee and dies. Now, your question is, how morally permissible was Grace's action, on a scale from one, totally not OK, morally forbidden, to seven, morally permissible? OK, so think about that on a scale and write down your number on a piece of paper. You don't have to divulge it. Everybody got the question? Everybody decided, more or less? OK, write down your number.

Now consider a slightly different case-- slightly but crucially different. The first case is known as the accidental harm case. Now consider the case where, instead of being labeled sugar, you get the same story, but now the container is labeled "toxic." So Grace believes that the white powder is a toxic substance left behind by a scientist. Nonetheless, she puts the substance in her friend's coffee, and her friend drinks the coffee and dies. Now consider, how morally permissible is Grace's action, from one, totally not OK, to seven, morally permissible? And write down your number. OK. How many people gave a lower number, that is, more morally forbidden, for the second one than the first one? OK, if you didn't, you probably weren't paying attention. So you can see that the crux of the matter is what Grace believed when she did the action. That's why we're talking about moral reasoning here as a test case of theory of mind, because what the agent knew at the time of the action is of the essence in thinking about the moral status of their action. Everybody got that? OK, so this is a powerful test case. And notice also that in the clip that I showed you from Rebecca's TED talk, she showed that kids' ability to use an agent's knowledge in doing what you guys just did kicks in a little bit later than the standard false belief task. So sometime after they get the basic idea of false belief, they start to apply it. It takes a while to kick in in this other case. OK, so this second case is known as intentional harm, as opposed to accidental harm. That's just the terminology in the field. OK, so what do you think will happen in autism if we ask people with autism these two questions? And what do you think will happen if we apply TMS to the right TPJ? It's right out there on the lateral surface, just asking for it. So it's a totally doable experiment, and it's been done. What do you think happens? OK, let's take the case with autism. How do you think people with autism will respond to these two questions? The same as you guys did? Yeah? AUDIENCE: Probably the average distance between how bad they think the second one is and the first would be less. NANCY KANWISHER: Yes, exactly. Why? AUDIENCE: Because they might not be able to make the distinction between Grace knowing that
the container was toxic versus the container being labeled toxic. It's like both of them would be confounded with the truth of how the world is. NANCY KANWISHER: Absolutely, absolutely. Everybody get that and share that intuition? To the extent that autism is a particular deficit in understanding what another person knows or believes-- and that's the only difference between these two cases-- to the extent that you have difficulty representing that, you will have less of a difference in your moral judgment about these two things, because you have a hard time representing that person's knowledge. It's not that autism is a deficit in moral reasoning. It's that moral reasoning entails thinking about other people-- at least these cases. Not all of it, but these cases involve thinking about other people's thoughts and taking them into account. And to the extent that that's difficult for you, you will make less of a distinction. Exactly right. Was there a question back there? OK. What do you think will happen if you zap the rTPJ with TMS while subjects are doing these tasks? AUDIENCE: [INAUDIBLE] NANCY KANWISHER: Yeah, that's the prediction. If the rTPJ is the main bit that's representing the beliefs of others, then if you zap it, you might change people's moral judgments. That's pretty wild. And that's what happens. You zap the rTPJ, and they make a smaller distinction between accidental harm and intentional harm. OK, so both of those things are true. I won't drag you through all the details of the experiments, but the basic findings from this whole line of work show that, first of all, neurotypical people agree, as you guys did, that accidental harm is more morally permissible than intentional harm. And people with autism give less forgiveness for accidental harm, compared to intentional harm, than neurotypicals-- just because that ability to represent the key knowledge that tells you it's accidental is something they're not good at. OK, so what is the role of the rTPJ? The data show that forgiveness for accidental harms-- first of all, I left this out before-- is correlated in neurotypicals with activation of the right TPJ during moral judgment. So if you just measure, across a whole bunch of those moral judgment problems, how strongly activated your rTPJ was as you read that problem, that's correlated with your ability to forgive somebody for accidentally harming someone-- again showing that there's a relationship between your representing the thoughts and beliefs of another person and your using that information to exonerate them from a harm they didn't intend. Yeah? AUDIENCE: Is it different for thoughts versus external actions? NANCY KANWISHER: Yeah. So first of all, I'm treating autism very superficially here. It's an extremely heterogeneous thing that varies not just along a spectrum, which it clearly does, but probably along many spectra. So these experiments are just done in high-functioning adults who are totally past false belief tasks-- they pass them later in life, but they get there-- because otherwise you can't test them on these kinds of experiments. And the effects are quite subtle, just a slightly lesser difference between accidental and intentional harm. OK, so just to clarify that, which I probably should have said. But your question is, are the deficits in autism specific to thinking about thoughts rather than thinking about actions? This is ongoing work, but a lot of research has shown that it is more specific to thoughts. And as for the stuff you read about basic perceptual difficulties-- most studies find that people with autism are a little bit worse at face recognition, but not much worse. And in tasks asking about goals of actions, like reaching for objects-- what is that person's intention?-- mostly they don't find a deficit in autism. So the perceptual basics, seeing people and seeing what they're doing, are much less impaired. That's
what the current literature suggests. There's always the worry that we're not asking in the right way or testing in the right way, and the literature is highly inconsistent from one study to the next. I used to work on autism, and I just couldn't stand it anymore, because every time a study is done, it gets the opposite of the previous study-- I think because the population is so heterogeneous. But from a gloss, it looks like there's more of a deficit in inferences about thoughts than inferences about actions. Yeah? AUDIENCE: I just want to clarify. The autistic subjects are also explicitly told what Grace believes? NANCY KANWISHER: Yes, explicitly. It's exactly what I gave here. "Grace believes"-- yeah, you're right, Grace believes. Yes, explicitly. And in spite of that, yep. So I think you can think of this as a subtle case. These are people who pass the explicit false belief task. But like the seven-year-old kids, it's one thing if you're asked what this person believes, and another if you're given a moral reasoning task for which you have to realize you should bring the belief into account. It takes more of your own kind of active processing. Was there another question? AUDIENCE: What if, instead of Grace, you ask them about themselves? So you and your friend are touring the chemical plant, and you give them the exact same scenario. Would that be any different? NANCY KANWISHER: Yeah, that's interesting. So the question is, suppose you ask people with autism this same question, but it's not about Grace, it's about you-- you do all of this. I'm not sure. Good question. Probably somebody's looked at that. AUDIENCE: I'm not sure if they've answered this already, but do people with autism have something different in their rTPJ? NANCY KANWISHER: We're getting there. You should absolutely be wondering that, and good that you ask. In fact, that's probably my
very next slide. OK. So, causal role. We showed the causal role of the rTPJ in neurotypical subjects: you zap the rTPJ, and you slightly reduce the difference in moral permissibility of the accidental harms and the intentional harms. OK, so all of these findings suggest that the rTPJ is causally engaged in understanding the difference between intentional and accidental actions, and that that ability is specifically disrupted in autism. All of which leads to the natural prediction about the rTPJ in autism. So what do you think, is the rTPJ affected in autism? Raise your hand if you think there's going to be something different in the rTPJ in autism versus typical subjects. OK, raise your hand if you think there isn't. Not sure? OK, well, you're sort of both right, in different ways.

So the answer is this. In Rebecca Saxe's lab, they did a study with a really large number of typical subjects. Because of this heterogeneity in the autism population, it's really hard to get a stable result, as you can believe, and it helps to have a really large sample. It's hard to get enough autism subjects-- we try to get as many as we can, and the most you ever get in a study around here is 20 or 30, and that's a big struggle-- but it helps to have a really large neurotypical control population, to reduce your error bars on what the neurotypical population shows. So this study is probably the biggest that's been done. They had 31 high-functioning people with autism and 462 neurotypical individuals. They didn't just go run 400 new subjects for this. When you run that localizer task, you've got it in every study you run, and so you can take all those localizer tasks across hundreds of subjects. And what they find, both if you do a region-of-interest analysis-- find the rTPJ-- and if you do a whole-brain group analysis, is no differences between high-functioning people with autism and typical subjects in the size, location, or response magnitude of the rTPJ when people do theory of mind tasks. So you should be surprised. Everyone is surprised. You all made the prediction; everyone else made the prediction too. And that's pretty bizarre. But does that mean that the rTPJ is not affected in ASD? AUDIENCE: Maybe the information from the rTPJ is not used by other parts of the brain. NANCY KANWISHER: It's a great hypothesis. Maybe the information is in there, and it's some kind of a disconnection thing, so it can't be accessed by other processes. Absolutely. What's another hypothesis? Yeah? AUDIENCE: [INAUDIBLE] in a different order. NANCY KANWISHER: What do you mean? Yeah, but then we'd have to think about how the different temporal order would lead to the different behavioral outcomes, right? Yeah, David? AUDIENCE: Maybe it's given a different priority, so what you're processing there might not be as important to the person itself. NANCY KANWISHER: Yep. OK, so it might be there but less salient or less important. But how is that going to account for the lack of a difference between the ASDs and typicals? Remember, you run your basic false belief versus false photo contrast and you find the rTPJ, and surprisingly, even in this fairly large sample, it looks the same in size, location, and response magnitude in the ASDs and in the typicals. Now, I left something out. These are high-functioning adult ASDs who can now pass the false belief task. That's crucial, because that's the task you're doing in the scanner. If they don't understand the task, there's no point scanning them. So these are people who are very high functioning, and they can totally do the task; otherwise you can't run the experiment. AUDIENCE: So what are the behavioral differences? NANCY KANWISHER: Well, that's a very good question. That's another reason that I stopped working on autism. There's a whole battery that you run to try to establish that these people officially, for the purposes of scientific
study, count as having autism, and that these people really officially don't. At MIT you need to run those studies on everyone, because some of your control subjects end up in the other group. And so those are a whole battery of things, involving an hour-long interview with a trained person who tallies things like how much eye contact is made and what kind of give and take happens in conversation, and all that kind of stuff. AUDIENCE: [INAUDIBLE] their behavioral response might not necessarily be changed. NANCY KANWISHER: Yeah, absolutely. So yes, it's possible that the rTPJ is absolutely fine in people with autism, that there's no difference, as those initial results seem to suggest, and that whatever differences you see with autism reside elsewhere. But that's surprising given all the stuff I've said over the last hour, because, at least in the case of moral reasoning, these same high-functioning people who show the same activation of the rTPJ have slightly different moral reasoning, right? AUDIENCE: [INAUDIBLE] NANCY KANWISHER: Yep, that's true. Yep. AUDIENCE: In this test, the people who are being examined function the same way. So why don't they test them on something where they differ? NANCY KANWISHER: Good, exactly. So we're in this funny position. We've identified theory of mind inferences as a critical difficulty in autism, but the only people we can scan on that kind of task are people who can already pass it. So we're already in a weird situation. Now, you might have predicted that they could pass it based on other cognitive abilities-- that they come up with another strategy, and in that case they wouldn't be using their rTPJ. But another hypothesis is that it just develops later: they've got it, there it is, they use it, it activates the same. And so your point is, why don't they test them on the moral reasoning task and see what happens there? It's a very good suggestion that's probably been done, and I don't know. Heather, do you know if people have scanned ASDs doing the moral reasoning task? Probably Liane has done that, right? It's a really good suggestion, and I actually now can't think why they wouldn't have done it. Do you mind just going on PubMed and looking? It would be Liane Young, and it would be moral reasoning, fMRI. I bet she's done that. Maybe Heather will get you this, because it's a very good suggestion-- why not test them on the thing that's different? OK, anyway. What I'm getting at here is, all these hypotheses you guys are raising are very good ones, but there's another one, which is, maybe the rTPJ isn't functioning right, even though we see it activated more when they do the false belief task than the false photo task. Maybe we'd see a difference if we looked at the pattern of responses in there. OK, all right. So in this study, Koster-Hale and Saxe did something that should be very familiar to you guys. They took these two cases-- sort of a version of what you're suggesting. I think of it as an MVPA experiment. Actually, Heather, that's the thing to look at, whether Koster-Hale sees a difference in overall magnitude; that I don't know. Anyway, they did the MVPA version. You have subjects do those same tasks, the accidental harm and the intentional harm. You split your data in half, and you ask whether the rTPJ represents the difference between accidental harm and intentional harm. OK, everybody get why this is a sensible thing to do? OK, so first you do that in
neurotypical subjects. And you find-- here is the correlation. Remember, this is the original Haxby-style version: the correlation within conditions versus the correlation between conditions. So this is the correlation within, and that's the correlation between. So the pattern in the rTPJ for accidental harm correlated with accidental harm across stories, and intentional with intentional, is higher than accidental correlated with intentional. I'm sort of skipping over the details here, hoping you guys remember. Is this making sense? OK. So, typical classification with correlations, and you see that that was significant in this group of typical subjects. OK, so that's cool. There's information in the rTPJ about whether an action was intentional or accidental. That's cool. Now we've learned something more than just that it activates when you do these tasks; we know something about what it represents. But you might say, that's so teeny. Yeah, it was significant, but really? So what do you do in that case? Do it again. Absolutely. You don't go find some fancier stat that gets your p-value where you want it. No, no, no-- you do it again. Rebecca and her lab members, being good scientists, did it again, and they got the same thing. Excellent. New bunch of subjects, new bunch of stories: replicate and generalize. Do it again. And so yes, indeed, the spatial pattern contains information. Further, you can then, again in this bunch of neurotypical subjects, look at the degree to which subjects rated the moral permissibility to be different in the intentional and accidental cases-- look at their behavioral ratings during the task, that's here-- and you can see that's correlated with the degree of pattern information in the rTPJ. OK, everybody get that? So the more you pay attention to that distinction behaviorally when you're doing the task-- the more you think accidental harm is really much more OK than intentional harm, the greater that difference in your behavioral ratings-- the greater the discrimination ability in your rTPJ
while you're doing it. OK, so that's a further link showing that that's where the action is. Just a correlation, not causation, but it's a nice one. Yeah? AUDIENCE: [INAUDIBLE] a difference in just the univariate overall magnitude, accidental versus intentional, in neurotypicals but not in ASDs? NANCY KANWISHER: Aha, so it's not just a pattern result. Yeah, it's in the text of the paper. OK. But that's the interaction, uh-huh. OK. But then the version of Nava's question, which I resonate to very much, is, if you just did the original, straightforward thing of, is the rTPJ just as big and selective, and is it in the same place, just because it was a huge sample-- that's true, OK, so they don't have the data. OK, fair enough. Good point. So that's why, when you ask that question-- it's a very good question-- I'm thinking, yes, they should have done that. The reason they didn't is that they could have the big sample because they were using localizer data, which they had from study after study after study. Every time you run a theory of mind test, you run that localizer, and then you just go back a few years and you've got hundreds of subjects. But they didn't run the moral reasoning task on hundreds of subjects, and so they don't have the power to be able to see it. OK, fair enough. OK, so back to this. This is just showing that in the neurotypical subjects, there's pattern information in the rTPJ about intentional versus accidental harm-- everybody got that?-- and it's correlated with your behavioral rating of the moral permissibility. OK, so that's cool. So, does the rTPJ distinguish between accidental and intentional harm in ASDs? So these are the data I just showed you-- actually, they did three experiments showing this-- here in the neurotypicals, and here's the ASD data: no difference at all. OK, so what
that means is, even though ASDs have the same size and location and magnitude of response of the rTPJ in the standard localizer task as typical subjects, the key difference that's been found so far is that in neurotypical subjects the rTPJ holds information about the distinction between intentional and accidental harm, and in ASDs it doesn't. OK? That's probably just one of a bunch of things that are going to be different; as far as I know, this is the only one that's been published. I see some squints-- are you not getting this, or do you have a question about it? Yeah? AUDIENCE: What happens with false belief? [INAUDIBLE] NANCY KANWISHER: There isn't an obvious discrimination that you can do. See, the nice thing about the moral reasoning task is it's got these two conditions, accidental harm and intentional harm. So that gives us a way to go in and do the pattern analysis there. With false belief, you'd have to think of some other dimension to look at, and the experiments are not set up that way, with stimuli that are on either side of a dichotomy so that you can do the discrimination. You could do representational similarity analysis, actually, come to think of it-- then you wouldn't have to have the whole dichotomy. That's interesting. OK, Heather, has Rebecca done that? You're my informant here. OK, so Shash is suggesting, why not ask this kind of question of the standard localizer experiment? And I said, well, it wasn't set up with an obvious dichotomy. And then I was thinking, actually, you could take all of the belief conditions, and you could do RSA on those in the ASDs and in the typicals, and ask if their patterns are different. I bet they've done that. See what I mean? Anyway, all right, I'll stop speculating and just ask; she's right there, two floors up. All right, does everybody get this basic idea? The rTPJ is there in the high-functioning people with
autism, but it doesn't hold the same information. OK, all right. So where do we get with all of this? We used all of this stuff on moral reasoning as a way to look at the rTPJ in theory of mind. We found that people with ASD put less weight on a person's beliefs when judging the moral permissibility of an action. TMS to the rTPJ disrupts moral judgment. Pattern analysis shows that the rTPJ distinguishes between intentional and accidental harm in neurotypicals, but not in people with ASD. OK, so there's a nice little story developing here. I'm sure it's not the whole deal.
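The split-half correlation analysis the lecture keeps referring to (the original Haxby method: within-condition pattern correlations should exceed between-condition correlations if a region carries information about the distinction) can be sketched in a few lines. This is a minimal illustration on synthetic data, not the actual Koster-Hale and Saxe pipeline; the voxel count, noise level, and variable names are all made up:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ROI patterns: one voxel vector per condition, measured in
# two independent halves of the data (e.g., odd vs. even runs).
n_voxels = 50
true_accidental = rng.normal(size=n_voxels)   # "true" accidental-harm pattern
true_intentional = rng.normal(size=n_voxels)  # "true" intentional-harm pattern

def noisy(base):
    # A measured pattern = true pattern + measurement noise.
    return base + rng.normal(scale=0.5, size=n_voxels)

acc_half1, acc_half2 = noisy(true_accidental), noisy(true_accidental)
int_half1, int_half2 = noisy(true_intentional), noisy(true_intentional)

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

# Haxby-style test: compare within-condition correlations (across halves)
# to between-condition correlations.
within = (corr(acc_half1, acc_half2) + corr(int_half1, int_half2)) / 2
between = (corr(acc_half1, int_half2) + corr(int_half1, acc_half2)) / 2
print(within > between)  # information present if within exceeds between
```

In real data you would average over many stories per condition and test the within-minus-between difference statistically across subjects; here the synthetic "true" patterns guarantee the information is present.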
MIT 9.13 The Human Brain, Spring 2019. Lecture 16: Music. [SQUEAKING] [RUSTLING] [CLICKING] NANCY KANWISHER: All right, OK, so let's start. We're talking about music today, which is fun and awesome. But first, let me give you a brief whirlwind reminder of what we did last time. We talked about hearing in general and speech in particular. And we started, as usual, with computational theory, thinking about what is the problem of audition and what is sound. It's the first step of that. And sound is pressure waves traveling through the air. And the cool thing about hearing is that we extract lots of information from this very, very simple signal of pressure waves arriving at the ear. We use it to recognize sounds, to localize sounds, to figure out what things are made of, and to understand events around us, and all kinds of things. And these problems are a major computational challenge. And in particular, they are ill-posed. That means that the available information doesn't give you a unique solution if you consider the computational problem narrowly. And that's true for separating sound sources. So if you have two sound sources at once, say, two people speaking or a person speaking and a lot of background noise, that's known as the cocktail party problem. Those sounds add on top of each other. And there's no way to pull them apart without bringing in other information, knowledge about the world or knowledge about the nature of voices or speaking or who's speaking. Or you need something else, or else it's ill-posed. That is not solvable just from the basic input. Another case of an ill-posed problem in audition is the case of reverb. So the sound that I'm making right now, coming out of my mouth, is bouncing off the walls, and each little piece of sound that I make arrives at your ears at different latencies after I say it, as it travels different paths bouncing around the room. There's not too much reverb in here, so it's not that noticeable.
But if we did this in a cathedral, you'd hear all these echoes. OK, and so that makes another ill-posed problem, because all of those different sounds are added on top of themselves diminished in volume over time. And you get the sum of all of those, and you have to pull it apart and figure out what that sound is. So both problems are solved by using knowledge of the real world. In the case of reverb, it's actual implicit knowledge that you all have that you didn't know you have about the physics of reverb. Because if we play you sounds with the wrong physics of reverb, you won't be able to deal with reverb. And that says it's implicit knowledge in your head, which is pretty cool, that you use to constrain the ill-posed problem. We talked about speech. Phonemes are sounds that distinguish two different words in a language, like make and bake. Those are two different sounds that make the difference between two words. Each possible speech sound is not a phoneme in every language of the world. Languages have some subset of the space of possible phonemes that distinguish words in their language. Phonemes include vowels that have these stacked harmonics in the spectrogram, and consonants which are the quick transitions in the vertical stripes in the spectrogram, leading into the harmonic stacks of vowels. We talked about the problem of talker variability, that a given phoneme or word sounds very different, looks very different in the spectrogram if spoken by two different people. And conversely, the same person speaking two different words looks very different in the spectrogram. And so that means that the identity of the speaker and the identity of the word being said are all mushed up together. And that means that if you want to recognize the voice independent of what's being said, or recognize the word independent of who's saying it, you have a big computational challenge, a classic invariance problem. Yeah, Ben. AUDIENCE: I don't mean to hold us up. 
I just wanted to make sure that I'm understanding. So the difference between consonants and vowels, are vowels just harmonic, like connective elements between consonants? And are consonants the percussive? Or are they actual-- like, I just didn't understand that. NANCY KANWISHER: Yeah, so in the spectrogram, those-- I didn't put that on the slide here-- but those horizontal red stripes in the slides that I showed you last time, those in the spectrogram, those are bands of energy at different frequencies that are sustained over a chunk of time. And those are typical of vowels, or singing, or musical sounds, those harmonic sounds that have pitch. And so vowels have those sustained chunks that look like this in the spectrogram. And then there are these weird vertical stripes and transitions in and out of the vowels that are the consonants. AUDIENCE: Vowels are when you don't have [INAUDIBLE] spectrographs because air is just flowing through and you're filtering it somehow, like positioning your vocal tract in a certain way. And consonants are when you close off that air or restrict it in some way. So like S's and F's, you're not closing all the way off, but you're really constricting the vocal tract. And in a lot of other consonants, you're actually fully closing it. NANCY KANWISHER: OK, and then we talked a bit about the brain basis. And I pointed out that the neural anatomy of sound processing-- the subcortical neuroanatomy is much more complicated than the subcortical neuroanatomy of vision. In vision, you have one stop in the LGN, and then you go up to the cortex coming up from the retina. In audition, you have many stops between the cochlea, where you pick up sounds in the inner ear, and auditory cortex. Some of those stops are shown up here. And we didn't discuss them. So then we talked about primary auditory cortex. That's on the top of the temporal lobes, like right in there medially. You went in.
And it has this tonotopic property, and that is a map of frequency space with this systematic high-low-high mapping of frequency space that you can see here-- high, low, high, like that. This is the top of the temporal lobe right there. And I pointed out that in animals and in one recent MRI study, the response properties of primary auditory cortex are well modeled by these fairly simple linear filters, known as spectrotemporal receptive fields or STRFs, shown here. So they're simple acoustic properties of a given band of frequencies rising or falling at different rates. So today, we're going to talk about music. And this is also an important moment in the course. Because up to now, we've been talking about functions that are mostly shared with animals. Speech is kind of on the cusp. I was going to make this point before speech. And that's actually muddy, because lots of animals are really good at speech perception. Chinchillas can distinguish ba from pa. Go figure, anyway. So they can perceive speech, but obviously they don't use it in the same way. But music is most definitely uniquely human. And so most of the things we'll be talking about from here on out are things about the human brain, in particular. And I think these are the coolest things in human cognitive neuroscience, because they tell us something about who we are as human beings. But they are also the hardest ones to study. Why is that? AUDIENCE: [INAUDIBLE] NANCY KANWISHER: No animal models. And I'm always lamenting how-- about the shortcomings of each of the methods in human cognitive neuroscience. And we have lots of them, and they complement each other, but there's a whole host of things that none of those methods are good for. And so now we're really out on thin ice trying to understand these things with a weaker set of methods where we can't go back and validate them with animal models. And that's just life. That's what we do. 
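The STRF idea mentioned above, that primary auditory cortex responses can be modeled as simple linear filters on the spectrogram, can be sketched in code. This is a minimal illustration only: the toy spectrogram and filter weights below are invented, not from any actual study.

```python
# Minimal sketch of an STRF as a linear filter on a spectrogram.
# A cell's predicted response at time t is a weighted sum of the
# spectrogram over a small window of frequency bands and recent
# time bins. All numbers here are made up for illustration.

def strf_response(spectrogram, strf):
    """spectrogram: list of time frames, each a list of energies per
    frequency band. strf: a (time-lags x frequency-bands) weight grid.
    Returns one predicted response per time frame."""
    n_lags = len(strf)
    n_bands = len(strf[0])
    responses = []
    for t in range(len(spectrogram)):
        r = 0.0
        for lag in range(n_lags):
            if t - lag < 0:
                continue  # not enough history yet
            frame = spectrogram[t - lag]
            for f in range(n_bands):
                r += strf[lag][f] * frame[f]
        responses.append(r)
    return responses

# A toy "rising sweep" spectrogram: 3 frequency bands over 4 frames,
# with energy moving from the low band up to the high band.
spec = [[1, 0, 0],
        [0, 1, 0],
        [0, 0, 1],
        [0, 0, 0]]

# A filter tuned to energy moving from the middle band to the high band.
strf = [[0, 0, 1],   # current frame: high band
        [0, 1, 0]]   # one frame back: middle band

print(strf_response(spec, strf))  # peaks at the frame where the sweep matches
```

The response is largest exactly when the recent spectrogram matches the filter's preferred pattern, which is the sense in which an STRF is a "fairly simple linear filter."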
So now let's back up for a second and consider, why am I allocating a whole lecture to such a fluffy, frivolous topic as music. And I would say, that's because it's not fluffy. It's actually fundamental. And it's fundamental in the sense that music is both uniquely human-- no other animal has anything remotely like human music-- and it's also universally human. That is, every human culture that's been studied has some kind of music. So music is really an essential part of what it means to be a human being. It's really at the core of humanity. And that alone makes it interesting. But further-- question? AUDIENCE: So, like, birdsong-- NANCY KANWISHER: Birdsong doesn't count. No, birdsong doesn't count in all kinds of ways. One, it doesn't have anywhere near the flexibility and variability. There are like narrow domains in which each male zebra finch makes a slightly different version of the call, but within an extremely narrow range. There's actually a brain imaging study in songbirds that asks, do they have reward brain region responses to song. And the answer is, yes, in some cases. Like, do they enjoy it, right, is that part of-- and the answer is yes, but only when the significance of the birdsong is something that's relevant to them, like, there's a potential mate right here, then they like it. But they don't like it just for the sound. And that makes it very different from humans. And there are other differences as well. So it's further really important to us humans in a whole bunch of ways. One, we have been doing it for a very long time. And so, for example, the archaeological record shows these 40,000-year-old bone flutes that, you can see from their structure, make particular sets of possible pitches. And further, most people who've thought about this have argued that singing probably goes back much farther than the bone flutes. After all, you don't have to make anything to do it. You can just sing.
Some have even speculated that singing evolved before language. It's just speculation, but that's possible. In any case, it goes way back evolutionarily. It also arises early in development. So very young infants are extremely interested in music. They're sensitive to beat and melody, independent of pitch. We'll talk more about that a little bit. And finally, if you're not impressed with any of those arguments, people spend a lot of money on music. And if that's your index of importance, it's really important. Last year, $43 billion in sales. So I'd say it's not a frivolous topic. It's a fundamental topic. It's near the core of what it means to be a human being. And all of this raises a really obvious question. Why do we create and like music in the first place? What is it for? And this is a puzzle that people have thought about for at least centuries, probably millennia. And this includes all kinds of major thinkers, like Darwin, who said, "As neither the enjoyment nor the capacity of producing musical notes are faculties of the least direct use to man in reference to his ordinary habits of life, they must be ranked amongst the most mysterious with which he is endowed." So Darwin is implicitly assuming here that music is an evolved capacity. It's not something that we just learn and that cultures invent, if they feel like it or don't feel like it. But it's actually evolved and shaped by natural selection. And that means there must be some function that natural selection was acting on that was relevant to survival. So people have speculated about what that function might be. Those who think that music is an evolved function, including Darwin, he speculated that it's for sexual selection. And his writing is so beautiful, I won't paraphrase it. 
He says, "It appears probable that the progenitors of man, either the males or females or both sexes, before acquiring the power of expressing their mutual love in articulate language, endeavored to charm each other with musical notes and rhythm." So that's Darwin's speculation. It's just a speculation, but a lovely one. Also, note that he tossed in this radical idea in here: "before acquiring the power to express their mutual love in articulate language." So he's speculating that music came before language. Again, all speculation, but interesting speculation. More recently, up the street, there's a bunch of people who've been thinking about this a lot. And Sam Mehr at Harvard has been arguing that the function of music and song, in particular, which he thinks is really the fundamental basic kind of native form of music, has an evolutionary role in managing parent-offspring conflict. And that's something that many evolutionary theorists have written about. The genetic interests of a parent and an offspring are highly overlapping, but not completely overlapping. The parent has other offspring to take care of besides this one right here. That one right there wants 100% of the parent's effort. Therein lies the conflict. And so Mehr has proposed that infant-directed song arose in this kind of arms race between the somewhat competing interests of the parent and the offspring. And it manages this need the infant has to know the parent is there with the fact that the parent has other needs, so, I guess the idea is, they can sing while attending to other offspring, and on and on. So there are other kinds of speculations like this. But importantly, this is not the only kind of view. It's not necessarily the case that music is an evolved capacity. So others have argued that it's not. So Steve Pinker, also up the street, has argued that music is "auditory cheesecake, an exquisite confection crafted to tickle the sensitive spots of at least six of our mental faculties.
If it vanished from our species, the rest of our lifestyle would be virtually unchanged." I think that might say a little more about Steve Pinker than it does about music. Nonetheless, it's a possible view. What he's saying is that music is not an evolutionary adaptation at all, but an alternate use of neural machinery that evolved for some other function. And then once you have this neural machinery, what the hell, you can invent cultural forms and use it to do other things like music. And the most obvious kind of neural machinery that you might co-opt for that function would be neural machinery for speech or neural machinery for language, which, as I argued briefly last time, are not the same thing. One is the auditory perception of speech sounds and the other is the understanding of linguistic meaning. So the nice thing about this is, finally after all this entertaining but speculative stuff, we have an empirical question. This is something we can ask empirically. Does music actually use the same machinery as speech or language, or does it not? Some of the rest of these speculations are very hard to test. So stay tuned. We'll get back to that shortly. But first, let's step back and think, OK, if music is an evolved capacity, it should be innate in some sense, at least genetically specified, right, because that's what evolution does: natural selection acts on the genome to produce things that are genetically specified. And it should be present in all human societies, since the branching out of human societies is very recent in human evolution. So is it? Well, is music innate? So, suppose we found specialized machinery in the brains of adults for music. And we showed really definitively, it's really, really, really specialized for music. Would that prove innateness? No, why not? AUDIENCE: Might have [INAUDIBLE]. NANCY KANWISHER: Bingo, thank you, very good. Yup, exactly.
So this is something that many, many people are confused about, including colleagues of mine, most of the popular scientific press. Just because there's a specialized bit of brain that does x doesn't mean x is innate. It could be learned. And the clearest example of that is the visual word form area. Everybody get that? OK, so we've got to try something else. What if we find sensitivity to music, in some very music-specific way, in newborns? Now that will get closer, but here's the problem. Fetuses can hear pretty well in the womb. And if the mom is singing or even if there's music in the ambient room, some of that sound gets into the womb. So that means that even if you show sensitivity to music, even in some very particular way, in a newborn, it's not a really tight argument that it wasn't, in part, learned. So this is a real challenge. It may just be impossible to answer. I'm not sure. I don't know how-- I don't know what method could actually answer this. But at the very least, it's really difficult and nobody's nailed it. So we can backtrack and ask the related, not quite as definitive question: "But OK, how early developing is it?" So often, developmental psychologists take this hedge. It's like, we can't exactly establish definitive innateness. But if things are really there very early and develop very fast, that's a suggestion that at least the system is designed to pick it up quickly. So even if there's a role for experience, there's some things that are picked up really fast and some things that aren't. And so how quickly is it picked up? So it turns out there's a bunch of studies that have looked at this. And young infants are in fact highly attuned to music. They're sensitive to pitch and to rhythm. And in one charming study, they took two to three-day-old infants who were sleeping, put EEG electrodes on them, and played them sounds. They wanted to test beat induction, which is when you hear a rhythmic beat. You get entrained to the beat.
And you know when the next beat is. And that's true even if it's not just a single pulse. So they played these infants sounds like this. Oh, but the audio is not on. Now it's going to blast everyone. All right, hang on. AUDIENCE: It's playing. NANCY KANWISHER: Oh, it is playing? Turn up more? OK. Didn't want to deafen people. OK, here. AUDIENCE: It's going a little [INAUDIBLE]. Just turn it up so you can hear it. AUDIENCE: Go to HDMI, [INAUDIBLE] plugged in [INAUDIBLE]. NANCY KANWISHER: It's not, but that's supposed to work, right? It has worked before. AUDIENCE: In there. AUDIENCE: Let's just check your system settings really quickly. So I can hear you from my system. NANCY KANWISHER: Yeah, it's weird. AUDIENCE: Wait, if I can hear you from my system, you're-- NANCY KANWISHER: Then, it is going out, yeah. AUDIENCE: Oh, somebody unplugged both. OK, let's try [INAUDIBLE]. NANCY KANWISHER: Aah. AUDIENCE: OK, try it one more time. NANCY KANWISHER: OK, here we go. [MUSIC PLAYING] Did you hear that glitch? Let me do it again. Take it back here. [MUSIC PLAYING] Everybody hear the hiccup in the beat? So that's what these guys tested. They played rhythms like that to two to three-day-old infants. And-- [MUSIC PLAYING] Oh, now it's working. OK, great. OK, anyway, so here's what they find with their ERPs. This is the onset of that little hiccup, the time when that beat was supposed to happen and didn't, the missing beat right there. And this is an ERP response happening about 200 milliseconds later for that missing but expected beat. And let's see, this is a standard where the beat keeps going. Now you might say, well, of course they're different. One has a beat there and one doesn't. They're acoustically different. So they have a control condition which has a beat, but a different preceding context. So where that beat is not-- I'm sorry, where it has a missing beat, but that's expected by the previous context.
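The beat-induction idea above, that once you're entrained to a regular beat you can predict when the next one should land and notice when it doesn't arrive, can be sketched in a few lines. The onset times below are invented for illustration, not the actual stimuli from the infant study.

```python
# Sketch of beat induction: infer the beat period from the onset
# times of a rhythm, then flag expected beats that never arrived
# (the "missing beat" the infants' ERPs responded to).
# Onset times (in ms) are made up for illustration.

def find_missing_beats(onsets, tolerance=20):
    """Estimate the beat period from the inter-onset intervals
    (the shortest interval ~ the underlying beat), then report
    expected beat times with no onset within `tolerance` ms."""
    intervals = [b - a for a, b in zip(onsets, onsets[1:])]
    period = min(intervals)
    missing = []
    t = onsets[0]
    while t <= onsets[-1]:
        if not any(abs(t - o) <= tolerance for o in onsets):
            missing.append(t)
        t += period
    return missing

# A steady 500 ms beat with one beat omitted
onsets = [0, 500, 1000, 1500, 2500, 3000]
print(find_missing_beats(onsets))  # the omitted beat at 2000 ms
```

A real listener does something much richer than this, inferring the beat even when it is not marked by a louder onset, as in the study, but the prediction-then-violation logic is the same.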
So that's just evidence that even young infants have some sense of beat. So moving a little later, by five to six months, infants can recognize a familiar melody, even if it's shifted in pitch from the version that they learned. And that's really cool, because that means they use relative pitch, not absolute pitch. And that's something that adults do in music. We're very good at that. But no animal can do that. You can train animals to do various things like recognize a particular pair of sounds or even a few sounds, a few pitches. But if you transpose it, they don't recognize that. Yeah, Ben. AUDIENCE: Isn't it possible that we're just sensitive to rhythm and pitch rather than being sensitive to music itself? NANCY KANWISHER: Yes, hang on to that thought. It takes more work to show that it's music per se rather than just rhythm and pitch. We'd have to say what we meant by rhythm. If we load enough into the idea of rhythm, then it's like most of music right there. But we might say just even beat. How about that, right? And actually, already this study already is not just an even beat, because it has more context than that. That is, for example, the beats in this ERP infant study were not emphasized louder. The infants have to be able to pick out what the beat is from that complex sound. It's not automatically there in the acoustic signal as the louder onset sound. Five-month-old infants, if you play them a melody for one or two weeks, so they get really familiar with it and learn it, and then you don't play it again and you come back eight months later, they remember it. So music is really salient to infants. On the other hand, newborn infants' appreciation of music is not-- what is that not doing there? Oh, yeah, that's right. So they don't prefer consonance over dissonance, right. And they're insensitive to key. 
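The relative-pitch point above, that five-to-six-month-olds recognize a familiar melody even when it's shifted in pitch, can be sketched as: two melodies are the "same tune" if their successive intervals match, regardless of absolute pitch. The pitch values below are MIDI-style semitone numbers, invented for illustration.

```python
# Sketch of relative pitch: a melody is recognized by its successive
# intervals (differences between adjacent notes), not by its absolute
# pitches, so a transposed melody still matches.

def intervals(melody):
    """Successive semitone differences between adjacent notes."""
    return [b - a for a, b in zip(melody, melody[1:])]

def same_tune(m1, m2):
    """True if one melody is a transposition of the other."""
    return intervals(m1) == intervals(m2)

twinkle    = [60, 60, 67, 67, 69, 69, 67]  # opening of a tune in C
twinkle_up = [65, 65, 72, 72, 74, 74, 72]  # same tune, up 5 semitones
other      = [60, 62, 64, 65, 67, 69, 71]  # a different melody

print(same_tune(twinkle, twinkle_up))  # True: intervals match
print(same_tune(twinkle, other))       # False
```

Absolute-pitch matching, which is roughly what trained animals manage, would fail on the first comparison; interval matching is what makes transposition invisible.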
And they detect timing changes in rhythms, whether they are timing changes that are typical in the kind of music they've heard or typical in a more foreign kind of music. And so a really nice study that shows this is that in Western music, it's really common to have-- most Western music has isochronous beat. So you can see that over here. Here's an isochronous beat. Those are even temporal intervals. And there's a whole note here and then half notes. And they're all multiples of each other, just wholes and halves, with the beat happening every four notes. Non-isochronous beat has this funny business where there's a whole note and a half note, making up just three-- what do you call those things-- they're not beats. What are they called? AUDIENCE: Three-beat notes. NANCY KANWISHER: Sorry, three notes, I guess. But it's not even notes, because it's whatever. I don't know what the terminology is. But anyway, this sound here followed by 4. This is non-isochronous rhythm. Those are really common in Balkan music where they do all kinds of crazy things, like 8/22 or something like that. I mean, like really, really crazy musical meters. They're awesome, I love them. But they are very other. Like, if you grew up in a Western society, when you first hear Balkan rhythms, it's very hard to copy them. But six-month-old infants get rhythms equally well if they're isochronous or non-isochronous. By 12 months, they can only automatically, like, immediately, perceive and appreciate rhythms that are familiar from their cultural exposure. That is isochronous if they're from a Western society or non-isochronous if they're from a Balkan country. Yeah? AUDIENCE: Just, what is getting a meter again? NANCY KANWISHER: Well, so there's a whole bunch of studies. I'm just summarizing here.
That is, they're sensitive to violations by all kinds of measures of little whatever behavioral thing you can get out of a five-month-old, whether it's how much they're kicking their legs or how much-- often, it's how hard they're sucking on a pacifier is another measure. So you just see, can they detect changes in a stimulus or violations by any of those measures. Or you could do it with the ERPs. So brief exposure to a previously unfamiliar rhythm is enough for a 12-month-old to appreciate the relevant distinctions in that rhythm, but not for adults. So if you haven't heard non-isochronous Balkan rhythms until now and you try dancing to them, good luck to you. You can probably get it eventually, but it will take you a long time. So does this sound familiar? Perceptual narrowing, right? So we keep encountering this. We encountered this with face recognition, with same versus other races, same versus other species. You see it in face recognition. We encountered it with phoneme perception. The phonemes-- remember, newborn infants can distinguish all the phonemes of the world's languages, even those exotic clicks that I played last time from Southern African languages. And you guys can't distinguish all those clicks now. So that's perceptual narrowing. It makes sense, of course, because the reason we have perceptual narrowing is you want to have invariants. You want to appreciate the sameness of things across transformations. And if your speech culture or your music culture is telling you these two things, this variation, doesn't count, you want to throw away that difference and treat them as the same. And then once you do that, you can't make that discrimination anymore. So on this question we started with, is music an evolved capacity. If so, it should be innate. And we haven't really answered that question, maybe. But as I said, it's really hard, and maybe ultimately unanswerable. But certainly it's early developing. What about this other question? 
Is it present in all human societies? Well, I said before briefly that it is. Oh yeah, sorry, we have to back up and say, OK, to answer this question, we have to say what is music. To answer whether it's present in all societies. And this has been a real problem, because music is notoriously hard to define. And many people have made a point of stretching the definition of music, including the ridiculous and hilarious John Cage. So this is his 1960 TV appearance. [VIDEO PLAYBACK] - Over here, Mr. Cage has a tape recording machine, which will provide much of the-- will you touch the machine so we can know where it is-- which will provide much of the background. Also, he works with a stopwatch. The reason he does this is because these sounds are in no sense accidental in their sequence. They each must fall mathematically at a precise point. So he wants to watch as he works. He takes it seriously. I think it's interesting. If you are amused, you may laugh. If you like it, you may buy the recording. John Cage and "Water Walk." [EXPERIMENTAL MUSICAL SOUNDS] [END PLAYBACK] NANCY KANWISHER: Anyway, it goes on and on like that. I guess it was a little edgier in 1959 than it is now. But he's making a point, and the point is, what the hell is music. And he's saying, I can call this music if I want. And everybody's enjoying it. Anyway. So you can watch the YouTube video, if you want. It's quite entertaining. Despite this kind of nihilistic view that anything could count as music, there are some things we can say. First thing I'd say is, if you want to study music, one of the first things you run into is, oh, what's going to count. You run into this problem here. But actually, I think that doesn't need to be as paralyzing as it feels at first. You can just take the most canonical forms where all of your subjects will agree that this is music and this isn't.
And then someday you can study the edge cases later, but you don't need to agonize about them in order to get off the ground and study it. Further, we can ask what is music cross-culturally. Oh, right, I keep forgetting my next point. And let me make another point, which is that music is not just about a set of acoustic properties. You may think of music as just an auditory thing, a solitary experience, because a lot of the time it's like that. But remember that that's a very recent cultural invention. And throughout most of human evolution, music has been a fundamentally social phenomenon, more like this, experienced in groups of people as a kind of deeply social, communicative, interactive kind of enterprise. Or even if not in a large group, music is very social in this sense here. There's a whole bunch of cool studies about the role of song in infants and how infants use song to glean information about their social environment. And the point is just that music is extremely social. It's not just defined by its acoustic properties. But in addition, we can ask, OK, let's look across the cultures of the world and ask, are there universals of music? Is there anything in common across all the different kinds of music that people experience in different cultures? For example, are there always discrete pitches or always isochronous beats? I already showed you there aren't always isochronous beats. And this is nice because it's an empirical question. There's a really cool paper from a few years ago where they took recordings of music from all over the world, all those colored dots, and they asked, what are the properties that are present in most of those musics and how prevalent are they. And what they found is there's no single property of music that's present in all of those cultures, but there's many that are present in most, and there are a lot of regularities. So this is a huge table from their paper where they list many different possible universals.
And what you see is that the relevant column is this one here. And the white is the percent of those 304 cultures that they looked at that have that property in their music. So these top ones are very prevalent, just not quite universal, because there's a couple of cases that don't have it. So one of the most common ones is the idea that melodies are made from a limited set of discrete pitches, seven or fewer, and that those pitches are arranged in some kind of scale with unequal intervals between the notes. So that's as close to a universal of music as you can get, although you can see from that little teeny black snip that it's not quite perfectly universal. And the second thing is that most music has some kind of regular pulse, either an isochronous beat or even the non-isochronous ones have different subdivisions with different numbers of beats so that there's a systematic rhythmic pattern. So there's something kind of like melody and something kind of like rhythm in almost all the world's musics. They did find some pretty weird ones, one I can't resist playing for you. This is from Papua New Guinea. So as they say, the closest thing to an absolute universal was song containing discrete pitches, or regular rhythmic patterns, or both, which applied to almost the entire sample. However, music examples from Papua New Guinea contain combinations of friction blocks, swung slats, ribbon reeds, and moaning voices-- I don't know what those things are either, but I'll play them for you in a second-- that contained neither discrete pitches nor an isochronous beat. OK, here we go. [VIDEO PLAYBACK] [PAPUA NEW GUINEAN MUSIC] [END PLAYBACK] OK, pretty wild, huh? So maybe wilder, arguably, than John Cage. But anyway, so there are some like pretty remote edges to the concept of music. I mentioned before the case of consonance and dissonance and that infants don't prefer one over the other. In fact, this links to a really cool recent study from Josh McDermott's lab.
And so the question he asked is, why do we like consonant sounds like this-- oops, [INAUDIBLE] play. Here we go. [RHYTHMIC SOUND] Kind of nice, right? But we're not so hot about this. [OFF TUNE SOUND] Right, everybody get that intuition? OK so what's up with that? So many people have hypothesized for a long time that that difference is based in biology, or even it's like a physical analog of it, beats and stuff like that. But actually, it's an empirical question. And so one way to ask that question is to go to a culture that's had minimal exposure to Western music, all of which really prefers consonance over dissonance. Yes, [? Carly? ?] AUDIENCE: Is consonants [INAUDIBLE] differentiated [INAUDIBLE]? NANCY KANWISHER: Oh, yeah, yeah. I'm sorry, totally different word-- consonance, C-E, has no relationship to consonants as distinguished from vowels. A consonant and a vowel, those are two different kinds of phonemes. Here, consonance is that difference between those two sounds I just played. And it has to do with the precise intervals of those harmonics in the harmonic stack. All right, so what McDermott and his co-workers did is to go to a Bolivian culture in the rainforest in a very remote location to test these people here, the Tsimane'. And the Tsimane' lack televisions and have very little access to recorded music and radio. Their village doesn't have electricity or tap water. You can't get there by road and you have to get there by canoe. So that's what McDermott and his team did. They went down there to visit the Tsimane'. And what they found, they played them consonant sounds and dissonant sounds, and with a translator, and spent a lot of time making sure that they really understood the difference between liking and not liking. And they tested their understanding of what it means to like something or not like it, and all kinds of other ways. And the upshot is, the Tsimane' do not have a preference for consonance over dissonance. 
So it's not a cultural universal. And that's consistent with the idea that it's not a preference in infants either. So this is something specific to Western music. So that's kind of introduction to some stuff about what music is and what its variability is and the fact that its presence is universal. And there are many very common properties across the world's musics, and it developed early. So let's ask, is music a separate capacity in the mind and brain. All right, so let's start with the classic way this has been asked for many decades, and that's to study patients with brain damage. And it turns out there is such a thing as amusia, the loss of music ability after brain damage. And so there are both sides of this. There are people who have impaired ability to recognize melodies without impaired speech perception. And there's the opposite-- people who have impaired speech recognition without impaired melody recognition. So that is, of course, a double dissociation, sort of, it's a little mucky in there. If you state the word simply like that, if you look in detail, there's some muck, as there often is. So let's look in a little more detail at these two cases, the most interesting ones who seem to have problems with auditory tunes but not with words or other familiar sounds. So here is a horizontal slice. This is an old study. So it's a CAT scan showing you something's up with the anterior temporal lobes in this patient. And this was true of these two classic patients, CN and GL. Both of them were very bad at recognizing melodies, even highly familiar melodies, happy birthday and stuff like that, they don't recognize. They mostly have intact rhythm perception. And this is a core question we'll come back to. It's a complicated non-resolved situation. But these guys had intact rhythm perception and relatively intact language and speech perception. 
However, upon further testing, it becomes clear that these guys have a more general problem with pitch perception, even if it's not in the context of music. So this is a question that I asked all of you guys to think about, in the opposite direction, in your assignment for Sunday night. When I asked you whether those electrodes in the brains of epilepsy patients that are sensitive to speech prosody, to the intonation contour in speech, I asked you whether you thought they would also be sensitive to the intonation contour in melodies. And most of you said, yes, it's pitch, pitch contour, must be. Well, it's a perfectly reasonable speculation, but not necessarily. Maybe we have special pitch contour processing for speech and different pitch contour processing for music. It's possible. It's an empirical question. Was there a question back there a second? OK, so maybe this is about pitch for both speech and music, not music per se. And so there are more detailed studies of patients with congenital amusia. And just like the case with acquired prosopagnosia versus congenital prosopagnosia, whether you get it from brain damage as an adult or whether you just always had it your whole life, and nobody knows exactly why and there's no evidence of any brain damage, the same thing happens with congenital amusia. So something like 4% of the population, they might say they're tone deaf. But just to tell you what that means, it can be really quite extreme. They can just completely fail to recognize familiar melodies that anyone else could recognize. They may be unable to detect really obvious wrong notes in a canonical melody. They're just really bad at all of this. And further, they don't have whopping obvious problems with speech perception. So at first, it was thought that speech perception was fine. But if you look closer, it looks like actually, even outside of music, there is a finer-grained deficit in pitch contour perception that shows up even in speech.
So, what I mentioned before-- we can ask this in the reverse case. This is sort of the reverse of the ones you considered. Now we have people who have this problem with pitch contour perception in music. Are they going to have a problem also with pitch contour perception in speech? So that's what this study looked at. So they played sounds like this. And you have to listen carefully. There will be sentences spoken. And you have to see if they're identical or different. So listen carefully. [VIDEO PLAYBACK] - She looks like Ann. She looks like Ann? [END PLAYBACK] NANCY KANWISHER: How many people thought that was different? Good, you got it. So one is the statement and one is-- it's sort of a question. It's in a sort of British accent. It's a little harder to detect, but different intonation contour. So that's what the Tang et al. paper was talking about, is that distinction. So we can then ask, that subtle distinction, are people with congenital amusia impaired at that? So if it's specific to music, they shouldn't be. But if it's any intonation contour, they should be. Yeah, I'll play the other ones. So they are in fact impaired. This is accuracy here, the controls are way up there, the amusics are down there. So they are impaired at this pitch contour perception thing, even in the context of music. I'm sorry, I said that wrong-- even in the context of speech. So it's not just about music. And in the controls, they have sounds like this, which are just tones. Got that? It's the same kind of thing, but not speech. And you see a similar deficit in the amusics compared to the controls. And then they have a nonsense speech version. [VIDEO PLAYBACK] - [INAUDIBLE] [END PLAYBACK] NANCY KANWISHER: Same deal-- the amusics are impaired compared to the controls. So that shows that the deficit for these guys is not specific to music per se but it seems to be a pitch contour problem in general that extends to speech. Yeah?
AUDIENCE: Which of those-- NANCY KANWISHER: We'll get there, sort of. It would have been nice if the Tang et al. paper had included some musical contour stuff. They didn't, but I'll show you some of our data shortly that gets close to this. OK, so all of that suggests that this amusia is really more about pitch than speech. I'm sorry, what's the matter with me. It's really more about pitch than music. But the reading that I assigned for today is a very new twist in this evolving story. So this used to be a nice, clean lecture with a simple conclusion. And now all of a sudden, I ran across that paper. It's like, wow, OK, that might not be quite the case. So what did you guys get from the reading? In what way does that slightly complicate the story here? Yeah, [INAUDIBLE]? AUDIENCE: [INAUDIBLE] NANCY KANWISHER: Yeah, what they found is that amusics, not all of them, also have problems with rhythm. And that is inconsistent with the idea that amusia is just about pitch, whether in speech or music. And that says, OK, many amusics also have problems with rhythm. Yeah? AUDIENCE: [INAUDIBLE] NANCY KANWISHER: So there's a standard battery that people use that asks-- Dana, help me. What does the standard battery ask people? AUDIENCE: There's a lot of stuff, tests, things like listening to like a clip of a symphony and having to decide whether [INAUDIBLE] or they're too slow. NANCY KANWISHER: Kinds of things that people without musical training answer fine, although there's quite a range. I'm at the way bottom end of Dana's scale when she gives these. AUDIENCE: That rhythm falls apart, might not be able to tell the difference. NANCY KANWISHER: Just that this prior evidence on the stuff I showed and a whole bunch of other studies seem to suggest that amusia, both in acquired brain damage and congenital amusia, seems to be, really, when you drill down, more of a problem with pitch per se, even pitch in speech. And so then if it's about pitch, why would it also go along with rhythm?
And so when it goes along with rhythm, that starts to sound more like this is something about music. It gums up the story. Talia? AUDIENCE: So I don't really know if this could be a confound, but when it comes to natural speech when you have some kind of intonation, like pitch differences when you emphasize, like especially in terms of a question, aren't there also some kind of rhythmic differences as well? NANCY KANWISHER: Yeah. AUDIENCE: So how do you separate the two out? NANCY KANWISHER: You just have to do a lot of work to try to separate those out. And so the paper I assigned to you guys did some of that work. There's still room to quibble, but they did. There was experiment two, and they tried to deal with exactly that kind of thing of saying, OK, let's try to make sure that-- well, actually the control that they were doing is slightly different. They were trying to make sure that the beat task didn't require pitch. So it's very, very tricky to pull these things apart, which is-- AUDIENCE: Yes, so like the beat task doesn't make sense, but I was just, like, in the verb first one, even from the paper that was assigned Sunday. I don't know, so you're saying that it's totally possible to separate out rhythmic differences from when you're just changing pitch. NANCY KANWISHER: It's really, really difficult. It's really difficult. Dana's trying to do experiments to do this right now. And she's invented some delightful and crazy stimuli that try to have one and not the other. It's very tricky. You can have rhythm without pitch change. That you can totally do. It's really hard or impossible to have a melodic contour without some beat or other. We have some crazy stimuli that sort of do that, but they're pretty crazy. So anyway, these are very tricky things to pull apart. And this is all right at the cutting edge. These things have not been cleanly separated. I'm running out of time. So do you have a quick question? OK, sorry about that.
So, conclusions from the patient literature: there's suggestive evidence for specialization for music, but no really clear dissociations. Music deficits are frequently but not always associated with just more general pitch deficits. And all of this is complicated because there's lots of possible components of music, right. When there's pitch deficits, is it pitch or relative pitch, interval, key, melody, beat, meter? All of these things are different facets of music. And so it's really not resolved exactly what's going on here. It's kind of encouraging that there's a space in there, but not resolved. So let's go on to functional MRI. And we're going to run out of time. So let me just take a moment to figure out how I'm going to do this. What the hell am I going to do here? Well, I hate to-- OK, you guys are going to tell me at 12:05. Yeah, OK. Maybe we can get all through this. So here's a really charming study from a few years ago that tried to ask whether there are systematic brain regions that are engaged in processing music. And they used a really fun perceptual illusion that you're going to hear. I'm going to play a speech clip. And part of it is going to be repeated many times. And just listen to it and think about what it sounds like. [VIDEO PLAYBACK] - For it had never been his good luck to own and eat one. There was a cold drizzle of rain. The atmosphere was murky. There was a cold drizzle. There was a cold drizzle. There was a cold drizzle. There was a cold drizzle. There was a cold drizzle. There was a cold drizzle. [END PLAYBACK] NANCY KANWISHER: What happened? AUDIENCE: [INAUDIBLE] NANCY KANWISHER: Yeah? What happened? AUDIENCE: [INAUDIBLE] NANCY KANWISHER: You start to hear a melody. And you didn't hear the melody the first time he said it. It was just normal speech, right. Speech has this kind of intonation contour. And he's speaking with an intonation contour. But then somehow when you keep hearing it, it turns into a melody.
So it turns out that doesn't work for all speech clips. In fact, it's really hard to find speech clips for which it works. But there are some. But everyone has that experience, or most people do. And that gives us a really nice lever, because we can take that same acoustic sound when you hear it as speech and when you hear it as melody and we can ask, are there brain regions that respond differentially. It's sort of analogous to upright versus inverted faces. Well, it's even better. It's the exact same sound clip that's construed one way at first and another way afterwards. Everybody get that? So that's what these guys did. They used a standard block design. They just listened to those sounds and they just looked in the brain to see what bits respond more after the sound starts getting perceived as music than before when it was being heard as speech. And they got a bunch of blobs in the brain. It's a bit of a mess, but they got some stuff. And so that's fun. But it's also ambiguous. We still don't know if this is about some kind of pitch processing, which becomes more salient-- you hear it as abstract pitch-- or whether it's really about melodic contour or what. So that's a cool study, but I think it doesn't really nail what's going on. So another angle at this is to ask whether music recruits neural machinery for language. So let me say why this has been such a pervasive question in the field. So there's a lot of people who have pointed out for 30 years, or probably more, there are many deep commonalities between language and music. So they're both distinctively or uniquely human. They're natively auditory. That is, we can read language, but that's very recent. Really, language is all about hearing, evolutionarily. They unfold over time. And they have complex hierarchical structure. So you can parse a sentence in various ways and there are all kinds of people who've come up with ways to have hierarchical parsings of pieces of music as well. 
So there's a lot of deep connections between language and music. And so many people have hypothesized that they use common brain machinery. And there are, in fact, many reports from neuroimaging that argue that in fact they do use common machinery. Like, we found overlapping activation in Broca's area for people listening to music and speech. However, those studies are all group analyses. I forget if I've gone on my tirade in here about group analyses. Have I done the group analysis tirade in here? You'll get more of it later. I'll do a brief version now, and you'll get more later. Here's the problem-- group analysis is you scan 12 subjects. You align their brains as best you can. And you do an analysis that goes across them. And you find some blob, say, right here, for listening to sentences versus listening to non-word strings. OK, that's a standard finding. Then you do it again for listening to melodies versus listening to scrambled melodies. And you find the blob overlaps. And then you say, hey, common neural machinery for sentence understanding and for music perception. Now that's an interesting question to ask. It's close to the right way to do it. But there's a fundamental problem. And that is, you can find an overlap in a group analysis, even if no single subject shows that overlap at all. Why? Because those regions vary in their exact location. And if you mush across a whole bunch of individuals, you're essentially blurring your activation pattern. And so all of the prior studies, until a few years ago, had been group analyses and they found overlap. And who the hell knows if there was actually overlapping activation within individual subjects, which there would have to be if it's common machinery. Or if they're just nearby and you muck them up with a group analysis and they look like they're on top of each other. If you didn't quite get that, we'll be coming back to that point.
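The group-analysis pitfall described here is easy to demonstrate numerically. Below is a deliberately simplified one-dimensional "brain" of my own construction, not data from any study: every simulated subject has a language patch and a strictly adjacent, non-overlapping music patch, but the patch location shifts across subjects, so the thresholded group-average maps overlap anyway.

```python
import numpy as np

n_subjects, n_voxels = 12, 100
lang = np.zeros((n_subjects, n_voxels))
music = np.zeros((n_subjects, n_voxels))

for s in range(n_subjects):
    start = 34 + s                         # anatomical variability across subjects
    lang[s, start:start + 10] = 1.0        # this subject's "language" patch
    music[s, start + 10:start + 20] = 1.0  # abutting "music" patch -- zero overlap

# Within every individual subject, the two regions share no voxels at all.
assert (lang * music).sum() == 0

# Group analysis: average across subjects, then threshold the mean maps.
group_lang = lang.mean(axis=0) > 0.25
group_music = music.mean(axis=0) > 0.25
print(int((group_lang & group_music).sum()))  # prints 5: spurious "shared" voxels
```

Smoothing and imperfect anatomical alignment in real fMRI group analyses play exactly the role of the `start` jitter here, which is why overlap in a group map does not establish overlap in any individual.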
For now, all you need to know is many people asked this question and the methods were close but problematic. Luckily, however, Ev Fedorenko did this experiment right a few years ago. So here's Ev and here's what she did: she functionally identified language regions in each subject individually. And we'll talk more about exactly how you do that. You listen to sentences versus non-word strings. You find a systematic set of brain regions that you can identify in each individual that look like this. Here it is in three subjects. Those red bits are the bits that respond more when you listen to a sentence versus listen to non-word strings, or read sentences versus non-word strings. Then what she could do is she said, now that I found those exact regions in each subject, I can ask of those exact regions, how do they respond to music versus scrambled music. So she played stuff like this. [MUSIC PLAYING] OK, so nice and canonical, nothing crazy or weird. We're not going with the New Guinean music and asking edgy questions. We're just saying something everybody agrees that's music, versus you scramble it and it sounds like this. [MUSIC PLAYING] OK, it's actually the same notes. I know, I know. A lot of people go, that's cool, that's really edgy. Yeah, it is. But to most people, it's not canonical music. And so what Ev found is that none of those language regions responded more to the intact than scrambled music. So language regions are not interested in music. We'll talk more about that next week or the week after. Then she did the opposite. She identified brain regions here in a group analysis just to show you where they are, anterior in the temporal lobes, that respond more to intact than scrambled music. She identified those in each subject and measured the response of those regions to language, sentences and non-word strings. And each of those regions responds exactly the same to sentences and non-word strings.
So basically, the language regions are not interested in music, and the music regions are not interested in language. And therein, we have a-- AUDIENCE: [INAUDIBLE] NANCY KANWISHER: Thank you, exactly. So music is not using machinery for language. That was one of the hypotheses we started with. And it was not. So that's true, at least for high-level language processing, that computes the meaning of a sentence. But what about speech perception? Remember, last time I made the distinction between the sounds, like ba and pa, which have a whole set of computational challenges, just perceiving those sounds, which is quite different than knowing the meaning of a sentence. So what about speech perception or, in fact, any other aspect of hearing? So what I'm going to try to do is briefly tell you about one of our experiments. I'm sorry, I try not to turn this whole course into stuff we've done in my lab, but it's one of my favorite ever. And it's a cool, different way to go at this question from the other MRI experiments we've talked about before. So the background is, OK, let's step back. What's the overall organization of auditory cortex? And when we did this experiment five or six years ago, not a whole lot was known. Basically, everybody agrees. Whoops, I put the wrong slide in here. Everybody agrees that primary auditory cortex is right there with that high-low-high frequency thing we talked about from there. But from there on out, in the last couple of years, there's an agreement about speech selective cortex that I showed you briefly last time and other people have seen that. But there's lots of hypotheses and no agreement with anything else and no real evidence for really music-selective cortex. But there's a problem with all the prior work where you sit around and make a hypothesis and say, oh, let's see, are we going to get a higher response to, say, intact versus scrambled music, or faces versus objects, or whatever. 
All of those are scientists making up hypotheses, and then testing them. And there's nothing wrong with that. That's what scientists are supposed to do-- invent hypotheses, and then make good designs and go test them. But the problem with that is, we can only discover things that we can think to test. What if deep facts about mind and brain are things that nobody would think up in the first place? And so that's where we can get real power from what are known as data-driven studies, where you collect a boatload of data and then use some fancy math and say, tell me what the structure is in this data. Not, is this hypothesis that I love true in these data. And I'll do anything to pull it out if I can. See it in there, find evidence for it in there. But yeah, exactly. But if we collect a whole bunch of data and do some math and see what the structure is, what do we see? So that's what we did in this study. I'm going to speed up to try to give you the gist here. So "we" is Sam Norman-Haignere here and Josh McDermott. [SOUND RECORDING EXPERIMENT PLAYING] And so we scanned people while they were hearing stuff like this. We first collected the 165 categories of sounds that people hear most commonly. This is classic cocktail party effect you guys are doing. You have to separate me speaking from all this crazy, weird, changing background. And so anyway, we scan people listening to these sounds, which broadly sample auditory experience. And so we collected sounds people hear most often and that they can recognize from a two-second clip. OK, enough already. [CELLPHONE RINGING] Oh, yeah, just to wake everyone up. So we scan them listening to those 165 sounds, broad sample of auditory experience. Then, from each voxel in the brain, we measure the exact magnitude of response of that voxel to each of the 165 sounds and we get a vector like this. Everybody with me? That's one voxel right there, another voxel, another voxel. 
We do this in all of kind of greater, suburban, auditory cortex. That is not just primary cortex, but all this stuff around it that responds in any even remotely systematic way to auditory stimuli. We grabbed the whole damn thing. So you do that in 10 subjects. You have a big matrix like this-- 1,000 voxels in each subject, 11,000 voxels across the top, 165 sounds. That's our data. So each column is the response of one voxel in one person's brain to each of the 165 sounds. Everybody got it? Now, we have this lovely matrix, which is basically all the data we care about from this whole experiment. Then, we throw away all the labels. Poof. It's just a matrix. And then we do some math, which essentially says, let's boil down the structure in this matrix and discover its fundamental components. That math happens to be a variant of independent component analysis, if that means anything to you. If it doesn't, don't worry about it. The gist is, we're doing math to say what's the structure in here. And we're doing it without any labels. So this analysis doesn't even know where the voxels are or which of your 10 subjects that voxel came from. It doesn't know which sound is which. And so it's very hypothesis neutral. It's a way to say, show me structure with almost no kind of prior biases. Just show me the structure. So everybody get how that's kind of a totally different thing to do from everything we've talked about so far? So that's what we did. I'm going to skip the math and the modeling assumption. It's not really that complicated, but I think I'm going to run out of time, so very hypothesis neutral. And what we find is six components account for most of the replicable variance in that whole matrix. I'll tell you what a component is in a second. Did you have a question? AUDIENCE: Is it just like with ICA, but [INAUDIBLE] PCA [INAUDIBLE]? NANCY KANWISHER: With PCA, you assume orthogonal axes. With ICA, you don't assume orthogonal axes.
And so it's very, very similar to PCA. And it starts out as PCA and then it does some more rigmarole. Yeah, it's the same idea. Like basically, tell me the main dimensions of variation. Yeah? AUDIENCE: And are these matrices sparse and [INAUDIBLE]?? NANCY KANWISHER: Yes, they are sparse. And that is one of the assumptions you use. There isn't only one way to factorize a matrix. It's an ill-posed problem. So you need to make some assumptions. And that's one of the ones we made, but you can test them. So what we find is six components account for most of the data. And four of those reflected acoustic properties of the stimuli. One was high for all the sounds with lots of low frequencies. Another was high for all the sounds with high frequencies. What is that? Sorry, speak up? AUDIENCE: [INAUDIBLE] NANCY KANWISHER: They're sensitive to frequency, but where is that in the brain that you've already heard about? AUDIENCE: Primary-- NANCY KANWISHER: Primary auditory cortex as a tonotopic map. So this is awesome. Because if you go invent some crazy math and you apply it to your data and you discover something you know to be true, that's very reassuring. The math isn't just inventing crazy stuff. It's discovering stuff we already know to be true. That's known in more biological parts of the field as a positive control. Invent a new method, make sure it can discover the stuff you know to be true. So check, check, OK. But then it discovered some other stuff. And I'm just going to tell you about two of them. So here's one. So I was just loose about what a component is. A component is a magnitude of response for each of the 165 sounds and a separate distribution in the brain, which I'll show you in a moment. So here's one of those components. And we've taken the 165 sounds and added basic category labels on them. We put them on Mechanical Turk and people told us which category they belong to. So that enables us to look at this mysterious thing and average within a category. 
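The decomposition being described can be sketched with synthetic numbers. The real analysis is a custom ICA variant; the NumPy sketch below uses entirely made-up data and plain SVD/PCA rather than the lab's algorithm, just to show the "starts out as PCA" step: a voxels-by-sounds matrix generated from a few latent components is handed to the math with no labels, and the math reports that a handful of components carry nearly all the variance.

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_sounds, k = 1000, 165, 4  # illustrative sizes, in the spirit of the study

profiles = rng.standard_normal((k, n_sounds))         # each component's response to the 165 sounds
weights = np.abs(rng.standard_normal((n_voxels, k)))  # each voxel's loading on each component
data = weights @ profiles + 0.05 * rng.standard_normal((n_voxels, n_sounds))

# "Throw away all the labels": this matrix is all the analysis ever sees.
centered = data - data.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)

# A few components account for almost all the variance; rows of Vt are
# candidate response profiles over sounds, columns of U are voxel weight maps.
explained = (S[:k] ** 2).sum() / (S ** 2).sum()
print(explained > 0.95)  # True: four latent components dominate
```

ICA then replaces PCA's orthogonality assumption with independence (plus, here, a sparsity assumption on the voxel weights), which is exactly the difference raised in the exchange above: it lets the recovered components sit on non-orthogonal axes.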
So this is its component. And if you look at it, you see that it's really high for English speech and foreign speech that our subjects don't understand. And then, oh, what's that intermediate thing? Oh, that's music with vocals. It has a kind of speech in it. And way down here-- that's non-speech vocalizations, stuff like laughing and crying and sighing. So there's a voice but no speech content. So that's a speech component. And as I mentioned, this had been seen before in the last few years. So it wasn't completely new. But what's cool about this is that it just emerged spontaneously from this very broad screen. We didn't go and say, hey, can we find a speech selective region of cortex if we try really hard? Oh, yeah, we validate our hypothesis. This is like, let's sample auditory experience-- and wow, there it is. Yeah? AUDIENCE: I mean, you assigned [INAUDIBLE]. NANCY KANWISHER: We put them on Turk and had people say what category they fit into. Yeah? AUDIENCE: [INAUDIBLE]. Categorizing by speech is a very good way [INAUDIBLE] better way than [INAUDIBLE]. NANCY KANWISHER: Absolutely, absolutely. This is a first pass. And one hopes to go deeper and deeper. If we could separate different aspects of speech, consonants and vowels, fricatives, whatever, there could be much more to be done. Yeah, I got to-- oh, boy, OK. And when do I have to give them the quiz? It's shortish. They don't need a full 10 minutes. What is it? Seven questions? AUDIENCE: Eight. NANCY KANWISHER: Eight-- eight minutes? AUDIENCE: [INAUDIBLE] NANCY KANWISHER: OK, make me stop definitively at 12:18. OK, so that's cool. It's not exactly new, but it's a really nice way to rediscover things that we thought to be true. All right, then there's component 6 that popped out. What is component 6? Well, if we average within a category, it's high for instrumental music and music with vocals, and everything else is really low. We didn't go looking for this. Boom-- music selectivity. That's pretty amazing.
Never really been seen before. People have looked and they've made some kind of sort of smoke and mirrors, like, not really. This is the first time it was seen and it just popped out of the data. And that says that it's not just something you can find if you try really hard and go fishing for it. It's actually a significant part of the variance in this whole response. I'm going to skip everything except clarification questions now, because I'm-- go ahead. AUDIENCE: Did these voxels correspond to the music [INAUDIBLE]? NANCY KANWISHER: Sort of, it's complicated. Sorry, it's a long answer. So this really looks like it's music. And so now, I was vague about what a component is, but it's both that response profile and it's a set of weights in the brain. So if you project this one back in the brain, you get this band of speech selective cortex right below primary auditory cortex, like that. And if you project the music stuff back in the brain, you get a patch. This is sort of an answer to your question. You get a patch up in front of primary auditory cortex and a patch behind. So here we have a double dissociation of speech selectivity and music selectivity in the brain, OK? So music doesn't just use mechanisms for speech as many people have proposed. It's not true, right. So when you see dramatic data like this, a natural reaction is to say, like, really, get out, come on. Like, music specificity, like what? So very briefly, Dana has just replicated this in a new sample of subjects. It does not matter if those subjects have musical training, like students from Berklee School who spend like six hours a day practicing, versus people who have essentially zero music lessons ever in their life, you get those components in both groups, maybe slightly stronger in the trained musicians. We're not quite sure yet. But in any case, it is totally present in people with zero musical training. 
That doesn't mean it's innate, because people without musical training have musical experience but no explicit training. Skip all of this. Here is her replication. Boom, boom. It's there with and without training. I'm going to skip all this. You can read it on the slides, if I lost you in here, because I want to show you something else. That music selectivity was not evident if you just do a direct contrast in the same data. Take all the music conditions, all the non-music conditions, you get a blurry mess. It's not strong. You have to do the math to siphon it off. And that's OK. But I like to see things in the raw data. And so probably what that means is that the music is overlapping with other things in the brain. And so the direct contrast doesn't work well, the math can pull them apart. But wouldn't it be nice to see them separately? And so we've been doing intracranial recordings from patients with electrodes in their brain. And I'll just show you a few very cool responses. So this is a single electrode in a single patient. These are the 165 sounds, same ones. This is the time course. And this is a speech selective electrode. It responds to native and foreign music. Those are the two green ones-- I'm sorry, native and foreign speech. And it responds to music with vocals in pink. Everybody see how that's a speech selective electrode? So there's loads of those. But we also found these. Here is a single electrode. Look, each row is a single stimulus. Here's a histogram of responses to all the music with vocals, music without vocals, much stronger than to anything else. You might be saying, well, what about those things. Let's look at what those things are. Oh, even the violations aren't really violations. Whistling, humming, computer jingle, ringtone-- those are sort of musicy. So that is an extremely music-selective individual electrode in a single subject's brain. No fancy math that might have invented it somehow. It's just there right in the raw data. 
Further, and here's the time course: you can see the time course of music with instruments, music with vocals, everything else. Really selective. So this is the strongest evidence yet for music specificity in the human brain. But there's one more cool thing that came out of this analysis. And that is we found some electrodes that are not just selective for music, but selective for vocal music, selective for song. And that's really amazing. Because as I started off at the beginning, many people have said that song is a kind of native form of music. The first one to evolve and all that kind of stuff. And so we did all the controls. It's not the low-level stuff. And there's lots of open questions. We started with this puzzle of how did music evolve, if it did evolve. And we made a little bit of progress. It doesn't share music machinery with speech and language. If it's auditory cheesecake, as Pinker said, it's auditory cheesecake that not only uses machinery that evolved for something else, but changes it throughout development and makes it very selective. These guys speculated that song is special. Maybe it is. And sexual selection, who knows? We have no data. |
MIT_913_The_Human_Brain_Spring_2019 | 24_Attention_and_Awareness.txt | NANCY KANWISHER: So we won't get through the whole attention lecture. I saw this coming. I just felt like that last bit was important enough. We'll cut as needed. But I do want to talk about attention. And let me start by sharing with you some of the key ideas about what attention is. OK, so to get into the mode of thinking about this, let's consider the following question. How do you feel about people driving while talking on their cell phones? Smart, not smart? And while you're thinking about that, also consider the case-- like let's suppose that you have a hands-free setup, so you don't have to be looking down on your cell phone or typing away or holding your cell phone. Maybe that's OK. What do you think? Is there no problem with driving while talking on your cell phone, provided you're not looking down and pecking away at it? Yes? What do you think? I want to know what you really think. I know what all the PR says. Is it fine? Yes? AUDIENCE: I would say it depends on where you're driving-- like if the road is clear and it's like your usual path, like driving is pretty automatic at that point. But if it's like high traffic or a new area, then I think it would have a significant effect, so you shouldn't. NANCY KANWISHER: Totally. My rule is, if I am talking on my cell phone, certainly, if I'm turning left, I put the damn cell phone on my lap. And I tell whoever it is, I can't talk to you and turn left at the same time. All right, turning left is the hardest thing we do. Self-driving cars can't turn left. Have I mentioned this? How does a self-driving car turn left? It turns right three times. Why is that? Because turning left is social cognition. It's like, do they see me? Do they know I'm here? Are they looking at my indicator? Do they realize I'm going to turn left? This is hard. This is social cognition. Nobody's mastered that yet. Anyway, it's hard for us too.
Anyway, the real question here is not for me to lecture you about driving but to think about, why is that a problem? Why can't you talk to a person and pay attention to driving? What is the big deal, right? And so the intuition is we have some kind of limited processing ability. OK, we're not like supercomputers who can just do millions of things at once. We have some kind of limit in how much stuff we can handle, OK? And I think everybody will have this intuition. You can't think about lots of different things at once. I like to think of this as the toaster model of cognition. That is, you plug in the toaster, and the lights dim, right? So if everything's on the same circuit, there's some kind of unitary resource that's limited. And if more of it goes here, less of it goes there. OK, so obviously we don't have simple circuits like that in our brain. And the relevant scarce commodity isn't just electricity. It's not electrons. It's some other kind of thing we don't totally understand. But there's still some sense in which all of our mental processes are on the same circuit. OK? I'm assuming everybody is sharing this intuition here. So think about this. How many people feel like you can listen to music and read difficult stuff at the same time? Raise your-- I'm curious. How many people feel like they can do that? Only a few. That's so interesting. Sort of. Yeah, I can't do it at all, like at all, like even background music that I don't even care that much about. Someday-- I mean, I think these are really stunning individual differences. I don't know if there's any good work on it. I haven't seen it. But I think it's really interesting. Some people can. Some people can't. I don't know what's up with that. How about recognizing faces and scenes at the same time? Can you do that? Yeah? How many people feel like you can recognize a face and a scene at the same time? All right.
In fact, Michael Cohen, who gave that lecture on brain-computer interfaces a month ago or so, showed some beautiful work that, actually, you are better at recognizing a face and a scene presented simultaneously than two faces or two scenes. Can you think why that might be? Yeah, Isabelle? AUDIENCE: Context. NANCY KANWISHER: Say more. AUDIENCE: If you see somebody, even a familiar face, you can either-- it helps identify them as, oh, I know that person. Then, obviously, they're there. NANCY KANWISHER: Right. So they can go together into a thing, maybe chunk it as a thing or something. That's a good answer. It wasn't the one I was fishing for. Yeah, Jimmy? AUDIENCE: If people and places are separate things that your brain calculates, if it's two faces, you have to calculate this thing with face number one before going to face number two. Whereas if it's [INAUDIBLE] NANCY KANWISHER: Exactly. Exactly. And you do, right? You have an FFA and a PPA. And they're friends, right? So what Michael showed is not just that you can hold a face and a scene in working memory better than two faces or two scenes, but that the degree to which you have that cost of doing two things in one category rather than spreading over two categories is linearly proportional to how different the activation patterns are for those categories in the ventral visual pathway. So faces and scenes are totally different from each other. And so you get a big benefit for separating your two items across faces and scenes. Other things that are more similar in their pattern of response in the ventral pathway, like faces and bodies, you get a smaller benefit. OK? So it's consistent with this idea that, to the degree that we have separate processors, you can do, to some extent, processing in parallel, OK? It doesn't explain why not everybody can read and listen to music at the same time. Because, as I argued a few weeks ago, these things don't overlap at all. So there are many mysteries.
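The pattern-similarity idea in that result can be sketched numerically. This is a toy illustration, not Cohen's actual analysis: the voxel patterns and the `correlation_distance` helper below are made up for the example, but the logic is the one described in the lecture, that more dissimilar category patterns predict a bigger benefit from splitting two items across categories.

```python
# Hypothetical sketch of pattern dissimilarity between category responses
# in the ventral pathway. All numbers here are invented for illustration.

def correlation_distance(p, q):
    """1 - Pearson r between two activation patterns (lists of voxel responses)."""
    n = len(p)
    mp = sum(p) / n
    mq = sum(q) / n
    cov = sum((a - mp) * (b - mq) for a, b in zip(p, q))
    sp = sum((a - mp) ** 2 for a in p) ** 0.5
    sq = sum((b - mq) ** 2 for b in q) ** 0.5
    return 1 - cov / (sp * sq)

# Toy voxel patterns: faces vs. scenes differ a lot; faces vs. bodies less so.
faces  = [3.0, 2.5, 0.4, 0.2, 1.0]
bodies = [2.6, 2.1, 0.8, 0.5, 1.2]
scenes = [0.3, 0.5, 2.9, 3.1, 0.9]

d_face_scene = correlation_distance(faces, scenes)
d_face_body  = correlation_distance(faces, bodies)

# The lecture's claim: the working-memory benefit of spreading two items
# across categories scales with this distance, so face+scene > face+body.
assert d_face_scene > d_face_body
```

The distance is near 0 for highly correlated patterns (faces and bodies here) and well above 1 for anti-correlated ones (faces and scenes), which is the ordering the behavioral benefit is said to track.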
But there's some intuition there. OK, to get a little more intuition about limited processing capacity, can you guys see? When I was sitting over there last time, I couldn't see the screen at all. Can you guys see it? It's OK? All right. Yeah? AUDIENCE: Do you think the problem is [INAUDIBLE] NANCY KANWISHER: Not for me. Instrumental music, no lyrics, still a problem. So I don't know. I mean, I'm sure there's a literature on this. I just don't happen to know it. I'm just trying to share the intuition. So I don't-- it's not the only thing for me. OK, so what I'm going to do-- this is a super low-tech demo. I'm going to show you an array of colored letters. Grab a piece of paper or something where you can write down a few letters. And I'm just going to flash up one array very quickly. And I want you to write down as many of the blue letters as you can. OK, ready? There's only a few of them, so you can. OK, ready? Here we go. OK, write them down. Last year, everyone got all of them. So I tried to go a little faster this time. OK. OK, we're going to do it again. Ready? There's going to be another display, and you're going to write down all the blue letters. Everyone ready? Here we go. OK, just wanted to share the intuition. The probability-- did anyone miss the N here? OK, how many people got the N? Nobody's going to admit it. I wanted a few people to miss the N, right? I really did zip through. I probably cheated and made this one go faster. You missed it? OK, thank you. I'm sure a few others did. OK, anyway, the point is that, if you do this properly in a lab, the probability of detecting it here is much less than there, right? So what does this show? It shows that we have limited capacity. We can't process all of those blue letters in parallel, right? When there's a single one, you get it just fine. But when there's a bunch, you don't necessarily get it, right? OK. It also shows that we have the ability to select information, right?
So you weren't bothered by the red letters. If I'd asked you right after about the red letters, you probably couldn't report any of them at all, right? It's a whole field that studies that. We won't get into that. OK. So in fact, the probability of reporting the N in these two displays is independent of the number of red letters. They just don't matter. They are not entering into that limitation, whatever it is. The limitation is only for the ones you're paying attention to, the blue ones, OK? OK, so why is our mental capacity limited? OK? Nobody really knows the answer. And I actually think this is a really unsatisfying part of attention research because it sort of seems obvious. But I think it's not. So sometime in the next 10 or 20 years, some smart computational person will analyze this well and get a more satisfying answer. But right now, here's where we are. The standard story is, well, we have only so many neurons and so much capacity. And you can't do everything at once, right? So we can't process everything in the whole visual field at once. That's the standard story. But the reason I find this unsatisfying is like, why the hell not, right? We have all this parallel stuff over the whole visual field, at least in the first few stages of processing. And we have quite a degree of parallelism after that. So I think that's the standard story. But I'm just marking that I don't find it very satisfying. Yeah? A second answer, which I think is also not totally satisfying but also a little bit right at least, is that, typically, you're only going to do one action. Right? You're going around in the world. Let's forget colored letters on a display in a lab. Let's think about a person walking around in a busy city street doing something, right? There's typically only one or a very small number of things you're going to do next, like walk down the sidewalk and not bump into people or pick up this object, right? You don't need to act towards all the things in your world.
And in fact, if you had to-- so given that you're only going to do one thing, why distract the whole rest of your motor-planning system by feeding it all this information that's just going to clutter it with garbage? Why not give it just the information it needs? OK, so that's the standard story. It's very squishy and vague but hopefully intuitive. And there are some nice, compelling examples that illustrate this. So there's a fish called the pike fish that preys on smaller fish called sticklebacks. OK? And if you stick a pike fish in a tank with sticklebacks, it will catch the first stickleback faster if there's only one in the tank with it than if there are 10 in the tank with it. OK? You might think, you've got 10 to choose from. You'll get a fish faster. But 10 is more distracting, right? And so maybe our action-planning systems also prefer the single stickleback. Again, there's no computational precision. I'm just sharing intuitions with you. OK, so let me give you some more evidence that there really are significant capacity limits in perception and that, in fact, there's a lot of stuff right in front of us that comes right onto our retina that we don't see. OK? So I'm going to show you an example. I'm going to show you a picture for just a few seconds. And I want you to just look at it. And then I'm going to ask you some questions about it. OK? Here we go. OK, so how rich a percept did you get? Did you see lots of stuff and get lots of detail? Or do you just feel like you just have the vaguest sense of a few buildings, a street, that's it? How many feel like they got a lot of pretty good detail? No one does. Or everyone's bored. I don't know which. AUDIENCE: What do you mean good detail? NANCY KANWISHER: The colors of buildings, the presence of objects, details on the architectural styles, where an awning was, that kind of thing.
AUDIENCE: I feel like I got [INAUDIBLE] NANCY KANWISHER: Well, most people looking at this feel like-- I mean, who knows what it means? I mean, we're just sharing intuitions here. But most feel like, that was pretty rich. I have a sense of it. I have a sense of maybe what country that's in and what kind of stuff is there and what kind the style is. You get a feel for the place, all that kind of thing. Maybe not every damn detail, but lots, right? So the general intuition most people have is a fair amount of detail. OK? Well, let's look at that some more. This is actually a very, very heated topic right now. Me and Michael Cohen published a paper a couple of years ago called The Bandwidth of Perception, which is an effort to grapple with this question of how much information is there in your current percept right now. And there are many different perspectives on this. Let's find out a little more with a further demo. So what I'm going to do is I'm going to show you that picture again. And it's going to flash on a number of times. And each time it flashes on, there might be something that's different. So take notes on any differences you might detect. OK? OK. OK, what things changed? Yeah? AUDIENCE: There was a woman walking down the sidewalk. NANCY KANWISHER: And? AUDIENCE: That's all I got. NANCY KANWISHER: OK, but she changed. What changed about her? AUDIENCE: Other than her clothes? NANCY KANWISHER: Yeah. Over the successive presentations, was she there sometimes and not others? Did she change? Yeah, she appeared or disappeared. OK, good. What else, Isabel? AUDIENCE: The awnings changed. The buildings changed colors. NANCY KANWISHER: Very good. Very good. How many people saw an awning change? Maybe half of you. OK. What else changed? AUDIENCE: The car. AUDIENCE: The model of the car. NANCY KANWISHER: The model of the car. Very good. Yeah? AUDIENCE: I feel like the shops changed. But I don't know when or how. NANCY KANWISHER: OK. OK. 
You want to see how they changed? OK, here's how it ended. Here's how it started. Are you surprised how much? Raise your hand if you're surprised how much changed. Yeah, a lot, right? OK, so what does that tell us? It tells us that either the sense we have that we really saw a lot of what's going on there was wrong. We didn't see as much as we thought. Because if we saw as much as we thought, we should notice massive changes like that. If you didn't see the awning, look over there. I mean, how could-- look at the color changes. Pretty major, right? Or we perceive all that stuff in the instant, and it just goes poof by the time the next one comes along. That's one of the things the field is finding: those two possibilities are very hard to tell apart, not impossible, but hard. Yeah. OK, so most people feel like, wow, much more changed than I thought. I can't resist one more hilarious demo. So have you already seen this, like in 9.00? How many people have seen this before? Oh, I don't want to bore you. Do you mind seeing it again? We could skip it. It's kind of fun. Sorry? Do it? OK. OK. OK. So you're just going to watch this video and just track things because there may be changes happening here and there, and you want to notice them, OK? Here we go. [MUSIC PLAYING] [VIDEO PLAYBACK] - Clearly, somebody in this room murdered Lord Smythe, who, at precisely 3:34 this afternoon, was brutally bludgeoned to death with a blunt instrument. I want each of you to tell me your whereabouts at precisely the time that this dastardly deed took place. - I was polishing the brass in the master bedroom. - I was buttering his lordship's scones below stairs, sir. - I was planting my petunias in the potting shed. - Constable, arrest Lady Smythe. - How did you know? - Madam, as any horticulturist will tell you, one does not plant petunias until May is out. Take her away. It's just a matter of observation. The real question is, how observant were you?
Clearly, somebody in this room murdered Lord Smythe, who at precisely 3:34 this afternoon, was brutally bludgeoned to death with a blunt instrument. I want each of you to tell me your whereabouts at precisely the time that this dastardly deed took place. [END PLAYBACK] NANCY KANWISHER: Totally different guy. [VIDEO PLAYBACK] - I was polishing the brass in the master bedroom. - I was buttering his lordship's scones below stairs, sir. - I was planting my petunias in the potting shed. - Constable, arrest Lady Smythe. [END PLAYBACK] NANCY KANWISHER: It's actually an ad. But it doesn't hurt to get its message. All right, it's a British ad with an important message. But it's a pretty impressive demo too, isn't it? How many people feel like lots of stuff changed that they didn't notice? Yeah, that's intuition. So the idea here is that all that stuff hits your retina. All of it kind of gets processed to some degree. But it's amazing how much of it goes unnoticed. OK? All right, oops. Sorry. OK, you might think, OK, is this something that only happens when it doesn't really matter? OK, you were looking for changes. But your life didn't depend on it. What about commercial pilots? So here's a classic study where they brought in actual real commercial pilots with thousands of hours of flying experience. And they had them fly in a flight simulator. They had them land a plane on a runway in the flight simulator under foggy conditions. And I forget what percent. I have to cheat and look at my notes. It doesn't say. It does. I'm just not seeing it. Some large percent of the pilots never saw this plane sitting right there on the runway. They landed the planes in the simulator right through that plane. And when they were shown it subsequently, they were shocked and couldn't believe they didn't see it, right? They were using a heads-up display that tells them their orientation to where the runway is. And they're paying attention to this stuff, landing on the runway. 
And they just do not even see that very relevant thing, OK? So it's not just something that happens in like weird psych demos and funny movies. Even for very important things, people miss absolutely major, important information. OK, so what all this tells us is attention is a filter that lets some information into our awareness but completely filters out of awareness a lot of other information that lands on your retina or your cochlea. OK? And there's two key properties of attention that go hand-in-hand. One, those capacity limits we've been talking about. You can't do everything at once. And two, selectivity, that is, there's some way to select which subset of that information comes in, OK? All right, so let's say a little bit more about attention. There's lots of different ways and forms of attention. So let me give you a few key distinctions about attention. So far, it's just kind of a vague word that gets used lots of different ways. So long ago, the great Helmholtz thought about attention and realized that there were two different kinds of attention. So he noted way back in 1860, "Our attention is quite independent of the position and accommodation of the eyes and of any known alteration in these organs and free to direct itself by a conscious and voluntary effort upon any selected portion of a dark and undifferentiated field of view. This is one of the most important observations for a future theory of attention." Indeed, it is, right? So his point-- I think we did this at some earlier lecture. His point is you can fixate on my nose right now. OK, everybody fixate on my nose. Not that it's that fabulous a nose. It'll just serve for the demo right now. And if I hold up different numbers of fingers out here-- keep fixating on my nose. No cheating. OK? How many fingers are off to the left side of my nose from your perspective? OK. How many fingers are off to the right side? OK. You can do that.
You can pull up this information or pull up that information without moving your eyes. OK? That's called covert attention because the input is identical. And you are just somehow adjusting the dials on your perceptual system to pull up this or pull up that. OK? So that's called covert attention, to be distinguished from overt attention, which is actually the most powerful kind of attention. With overt attention, you move your eyes from point to point to select different information. OK, so overt attention is a much more powerful filter. And we make about two to four eye movements a second. And it's very powerful because the center of gaze has very high resolution information. Remember the high density of photoreceptors at the fovea in the center of gaze, and the density tails off in the periphery. So we have much better information in the fovea than in the periphery. OK, so it's both photoreceptor density, and it's also cortical area. So in retinotopic cortex, back here, primary visual cortex, V1, V2, those regions, you have about 20 square centimeters of cortex, like that area of cortex-- pretty big-- something like that, processing the central two degrees of vision. OK? Huge cortical area devoted to a tiny little part of the visual world. And much less per degree for the periphery. OK, so both at the photoreceptor stage and at higher-level processing stages, we vastly overrepresent the center of gaze. OK? That's why, if you have loss of foveal vision, as in macular degeneration, it's really awful. You can't read or recognize faces. That's where you need this fine-grained foveal information. OK, so that means that moving your fovea and parking it on different parts of the array is an extremely powerful way to select different kinds of information. So to give you a feel for how much people do this and how they select information, here's a video. This is taken from a head-mounted camera of a person who is watching these two people. And the yellow dot is where their fovea is.
So they're talking to my former postdoc Matt. And they're fixating on his face as he talks. And sometimes they take a peek at the other person. And you can see there's very fine-grained adjustments of where they look as this goes on. And now they're walking down the hall in this building, and they're following her. Watch what happens when they turn the corner. Check out the signs. Read the little notices. Look down the hall. And then watch what happens when they go around the corner. Oh, new person. Look at the new person. Right? Sorry, it's too dark to see what's going on here. Oh, this is a little fixation patch where they're recalibrating the eye tracker to make sure it's working. Here comes a new person. They look at his face. Saccade back and forth between the two faces, right? Whoever's talking, you look at them. And you collect high-resolution information from that person's face. You can see both their head turns and eye movements. OK, you get the idea. I didn't totally explain. So this is a head-mounted camera and eye tracker, OK, of a person who was walking around the building encountering people. And it was showing you both what was in their field of view and where they were fixating their eyes in that view. So you can see there's really fine-grained, moment-to-moment sampling of visual information all the time with overt attention with eye movements, OK? Make sense? OK. OK, first, so that's the first distinction about attention, covert versus overt. OK? Now, you might say, why would we bother with this subtle, sophisticated covert attention when we can just move our eyes and do overt attention? And I think there's a bunch of reasons for that. One is other people can see where you're looking. And sometimes you want to attend to stuff over there and not let people know, right? So there are many examples of this, from elevator eyes, people who look you up and down. It's not politically correct. It's not nice.
It's not considerate. People notice if you do that. So don't do it. You can covertly attend below the face, if you like. But don't overtly move your eyes down there because they will know. OK? We primates are very, very attuned to where each other are looking. We are the only primate who has whites around the eyes. I just heard a talk last week by Michael Tomasello, who's one of the major primatologists. And he says that the fact that humans have whites of their eyes that make it so easy to tell where they're looking must mean that we mostly want to share information with each other about where we're looking. We're a very cooperative, communicative species. But not always. Sometimes we want to sneak a peek at something where people don't know. The case that I notice all the time is I'm at a conference and some very familiar face comes along and says, hey, Nancy. How are you? And I'm thinking, who the hell is this? And the whole time, I'm thinking I can see the name right there. I can't quite resolve it. I know that if I saccade down for half a second, they will see it. And I will be busted. And so I'm sitting there with the name right there, racking my brain, trying to think who the hell it is. Anyway, so there are many cases like this where we don't want to be caught looking. So those are all cases where you might want to use covert attention. Also, sometimes you want to track lots of things in parallel, and you can't foveate all of them. You can only foveate one thing. So I don't know anything about sports. But think of your favorite complicated sports example where there are multiple players moving at once and you need to keep track of all of them. That would be a classic case of covert attention because you could covertly attend to several of them at once. And you have only one fovea. Well, you have two, but they go to the same place. OK, so that's the second distinction, overt versus covert-- or the first one, right?
So the next basic distinction of different kinds of attention is the kind where we decide where we want to look versus the kind where the stimulus draws our attention, OK? So there are lots of examples. Like web pop-ups are all designed to pull your attention, overt and covert. Even if you don't want to look at that damn thing, the weird little dancing figures are all optimized to pull your attention over and make you read the ad, right? And they're pretty effective. OK, so that's called stimulus-driven attention or exogenous attention. It comes from the outside. And it pulls you in. OK? Another classic example is pop-out. If you look at this display, it's very hard not to notice and have your attention drawn to that red thing. Certain properties of stimuli just automatically capture your attention. OK, that is to be contrasted with voluntary controlled attention. That's like the case we did before where you were fixating on my nose. It was totally up to you whether to attend-- I mean, I told you to. But you could have made up your own mind and attended to neither. And I wouldn't have known, right? So you can decide what you want to pay attention to. And as I think I mentioned in I don't know what context a few weeks ago, thank God we can decide what to pay attention to. Because there's lots of stuff that goes on in the world that we don't want to have dominate our mental life. And controlled or voluntary attention is one way that we can have our mental life dominated more by things we want it to be dominated by and less by things we don't, right? So that's that notion here. Like for example, you're sitting here. And it's like noon. And you're just thinking, OK, I'm really hungry. I'm really hungry. If you were my dog Charlie, your entire mental life for the whole next half an hour would be, I'm so hungry. I'm so hungry. I'm so hungry. But you're a person.
And so that thought may impinge now and then, but you can drive it away and think about something else. Because it's not going to get you anywhere to focus on how hungry you are. I know I just made it worse. I'm sorry about that. OK. OK, the way that scientists have typically studied voluntary attention is they ask subjects to do this. And much like the case we just did, here's a kind of very basic paradigm. You have subjects fixating on a cross. And typically, in covert attention experiments, you want to make sure there aren't overt eye movements. And so you'll have an eye tracker to make sure the subject is keeping their eye on that cross. OK? And then you give them a little cue that says pay attention off to the right, but don't move your eyes, OK? And then shortly thereafter, a little target comes up. And you have to hit a button quickly to show that you detected the target. OK? It comes on at variable delays. So you can't just hit the button. OK, if you do that, you find that the reaction time to detect that target, if it's where you're attending, is really fast, not much more than 200 milliseconds, which is damned fast, right? Now consider a case where you're instructed to attend over here. And the way you get these experiments to work is that these cues are valid 80% of the time. So there's a reason for the subject to-- there's a motivation for them to follow your instruction. So they're trying to answer quickly. And they're like, OK, ready. It's almost certainly going to be over here. But oops. It's not. OK? Then reaction time is almost 100 milliseconds slower, which is a huge effect in psychology, 100 milliseconds. OK? OK, so you're much slower to detect something if it's at an unattended location. OK? And if you have a neutral trial, you're somewhere in the middle. OK, so that is endogenous or voluntary attention. OK, and again, this is covert attention. No eye movements. All right, so third distinction, the example I just gave you was spatial attention.
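The cueing paradigm just described can be sketched as a toy simulation. Everything here is an assumption except the ballpark numbers from the lecture: 80% valid cues, roughly 200 ms reaction times at the attended location, and roughly 100 ms extra for invalidly cued targets. The function name and the Gaussian noise model are made up for illustration.

```python
# Toy simulation of a spatial-cueing experiment like the one described above.
# The 80% validity, ~200 ms valid RT, and ~100 ms invalid cost are the
# lecture's ballpark figures; the noise model is an assumption.
import random

def run_trials(n_trials=10000, validity=0.8, seed=0):
    rng = random.Random(seed)
    valid_rts, invalid_rts = [], []
    for _ in range(n_trials):
        cue_is_valid = rng.random() < validity
        base = 200.0 if cue_is_valid else 300.0  # ms: attended vs. unattended location
        rt = base + rng.gauss(0, 20)             # trial-to-trial noise
        (valid_rts if cue_is_valid else invalid_rts).append(rt)
    mean = lambda xs: sum(xs) / len(xs)
    return mean(valid_rts), mean(invalid_rts)

valid_mean, invalid_mean = run_trials()
# Invalidly cued targets should come out roughly 100 ms slower on average.
assert invalid_mean - valid_mean > 50
```

The point of the 80/20 split in the design, as the lecture notes, is to motivate subjects to actually deploy attention where the cue points, so that the rare invalid trials reveal the cost of attending to the wrong location.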
You're attending to this location or that location in space. OK? That's probably the most powerful and common kind of attention: usually the things you're interested in are in a particular place. And you attend to that place. And there you go. But it's not the only kind. You can have feature-based attention. So if your utensil drawer is a hell of a mess like mine is, like this, if your task is to find the black vegetable peeler, best of luck to you. It'll take quite a while. It's in there. It's right there. But it takes a while, right? In contrast, if the task is find the purple rubber spatula, duh, right? OK, so that's feature-based attention. You don't know the location in advance. You know something about the feature you're looking for, that it's black or that it's purple or round or square or whatever the feature is. OK, so different kinds of filters you can set on your attention system. OK, so that's just to give you some of the phenomenology of the different kinds of attention. How does all this work in the brain? I'm not going to tell you how it works. We'll just focus mostly on what the brain regions are. OK, so let's go back to this case here, voluntary attention. So now the question is, what happens when you get this cue? You're not going to move your eyes. But you're kind of cranking up that part of space. OK, what's going on in the brain right there before the stimulus comes on? Everybody get the question? OK, you can sort of feel it happening. But what does that mean? Well, this is very amenable to a simple functional MRI experiment. And in the early days of functional MRI, a whole bunch of people at once realized, oh, right, we can answer this longstanding question of whether early retinotopic parts of the visual system are affected just by a cue like that, even before the stimulus comes on, right? And I think it was 1998. A whole bunch of people started doing this in 1997. I was one of them. But a whole bunch did it at once in 1998.
I think like six papers were published all doing versions of this next thing, which is one of those obvious ideas. We got scooped. But anyway, everybody got the same result. So basically, if you just do a simple comparison-- this is the version they have. Subjects are fixating here. There's two stimuli. They're looking in V1 and V2. And if you just do a contrast of the case where you're looking here and you're attending, waiting for a target here versus waiting for a target there without moving your eyes, you get a big contralateral modulation in V1 and V2. This is a slice near the back of the head here. So this is a piece of calcarine sulcus or primary visual cortex, right? And so you can see, even before the target stimulus comes on, just attending over there gives you a big baseline increase in neural activity. Everybody clear what's going on here? And of course, you can do the opposite one and see that it shifts to the opposite side of space. OK? OK, so that's all before the stimulus comes on. You can think of it as priming the brain, kind of juicing up those neurons getting them ready to go, so that, when that stimulus comes through, boom. Your reaction time is faster. OK? OK, so that's modulating parts of visual cortex before a stimulus comes on to get it ready. But we can also look at, what is the effect on a stimulus once it comes through, an attended stimulus or an unattended stimulus? To show you that, I'm going to show you a kind of feature-based attention and an ancient-- this is a paper we published in 1998, I think. This is a nonspatial version. So we gave subjects-- guess what-- faces and houses and a little crosshair. And we had subjects in different blocks say whether the two arms of the cross were the same or different length, OK? So vertical one is different length. So that's a different case. Whether the two houses are same or different or whether the two faces are same or different. 
OK, so actually, it's spatial and feature, when you think of it, because they're in different places. OK? So we then looked in a bunch of regions. I'll show you the data from the fusiform face area. And the key thing about this is, in all three tasks, we had the identical stimuli. So there's always faces and houses and crosses in the same place in your visual field, in all of those blocks. It's just a matter of which of those things you're paying attention to. OK? So what do you think we'll see in the fusiform face area in this experiment? Is it going to care whether you're attending to faces or attending to houses or attending to the crosshair? Same stimulus all the time. Yeah, Jimmy? AUDIENCE: [INAUDIBLE] when you're not attending. But when you're attending, [INAUDIBLE] NANCY KANWISHER: Exactly what happens. OK, everybody hear the prediction? You'll still get something. But you'll get more when you're paying attention to it. OK, so here's the FFA response over time. Here's the time course. And these gray blocks are the blocks. These are the different attending to houses, faces, color, houses, faces, crosshair, houses, faces, crosshair. The gray bars are when the subjects are attending to faces. Now, here, I actually can't quite tell. Yeah, I can sort of tell. You can't see it on the screen. But there are little rest periods between those blocks. So I'm trying to figure out if you're right that there's some response greater than baseline. I think there's actually very little selective response greater than baseline. By design, we made this a whopping attentional manipulation, so that all of the tasks are really difficult. And when you're doing one of them, you just barely feel aware of anything else. In fact, I vividly remember the first time we ran this experiment, I was a subject in the scanner. And I did the first block. And I was doing just crosshairs. And I went through the whole crosshair thing. 
And it wasn't till the end of the experiment I realized, oh, right, there were faces there. I was just completely unaware of them. So we designed it to really tie up your mental capacity, so that you just didn't have processing resources left for anything else. And so if there is a response to everything else here, it's really low. But you can see, certainly, the attended thing is much more. It's much stronger in the attended case than the unattended case. OK? So this is a general property. Like basically, all of the perceptual regions we've talked about are strongly modulable by attention, OK, from retinotopic regions on up. OK, including V1 and, in fact, even including the lateral geniculate nucleus. So one synapse up from the retina, you're already modulating activity there by attention. OK? I know I said this, but it maybe went by briefly. You have 10 times as many connections coming down from cortex to the LGN as you have going forward. OK? One of the things they're doing is setting up selective filters, so that only the stuff you want to process makes it to higher stages. OK. Yes? AUDIENCE: So does this sort of phenomenon generalize across the brain areas? I mean, it's like-- NANCY KANWISHER: It's generally true of pretty much everywhere, yeah. AUDIENCE: It might be the intuitive physics thing is justified. So it's better to see the same stimulus. NANCY KANWISHER: Exactly. It's an instance of this, exactly right. Exactly right. It gives us one of our paradigms. In functional MRI studies, have the same stimulus, vary the task. OK? Some things don't work as well, things that are just very dominant stimuli. In this experiment, if we hadn't put the faces in the periphery and given subjects a damn difficult task, the modulation wouldn't have worked. Because faces are just so dominant, they're going to punch through even if you don't want them to, OK? Like web pop-ups. You also-- I'm going to skip this because we won't have time.
You could get very similar things in primary auditory cortex listening to high frequencies or low frequencies. These are all the same stimuli. This is voxel selected for low frequencies, voxel selected for high frequencies. You have a high frequency input in one ear, low frequency in the other. As you switch between, those responses toggle. OK, so all of that tells you the effects of attention, how it modulates activity all over the brain. But what is the source of attention signals in the brain? And that source is a set of regions we have encountered before, sometimes called the frontal-parietal attention network, these blue and green bits up in here, back here in the parietal lobe, up here in the frontal lobe. And they are active when you shift attention from one location to another, from one feature to another, or when you do any difficult attention-demanding task. We've also talked about them in the other context of the multiple demand system. It's pretty much the same set of cortical regions. That pretty much any time you do a difficult task, almost no matter what the task is, these regions turn on. So they're not just about visual attention. They're kind of about basically controlling your mind, right, selecting information, making yourself do difficult things. Who knows what that is computationally. But these regions are very systematically engaged. And they're particularly systematically engaged when you're shifting attention over different locations. OK, so just to remind you in contrast, everything else we've talked about, face areas, music areas, language areas, they're very specific for one mental process, domain specific. These are the opposite. They're ludicrously domain general. OK? All right, I'm going to skip the video. When the system is damaged bilaterally, all kinds of weird stuff happens. OK? I'll post this. If you guys send me an email to remind me, I'll post this clip in. 
Balint's syndrome, where you have bilateral damage back there in these parietal regions, people can only see one object at a time. OK? They're looking at a complicated thing like this. And they would see, I see Shosh. See anything else? No, just Shosh. Nothing else there? No. Just Shosh, right? Totally weird. Anyway, so they get locked on one thing. They can't shift attention. OK. OK, I want to talk at least briefly about awareness. So let's talk a little bit about neural correlates of awareness. And so the first question is, if we want to study perceptual awareness and not just perception, how are we going to uncouple those things? We've sort of talked about a few examples with attention. With attention, the thing you're attending to is at least much more dominant in your awareness than the stuff you're not attending to, even if the other stuff seeps in a little bit. So that's one way, where you have the identical stimulus. But you were varying the degree of awareness. But another way-- oh, I need you guys to pass out the glasses. You want to both do that at different sides? OK, so I'm going to show you a stimulus. These guys are going to pass around glasses. And let me say in advance, I want them back. I use these every year. So drop them off here when you leave. OK, so what you're going to do, you put them on either way. Doesn't matter. OK, so first, we're going to do optics. Nothing interesting. So look at this stimulus and close one eye. OK, now look at it through the glasses. Look at it through the glasses, and close one eye. And you should see just a face or just a house. And if you close the other eye, you should see the opposite. Does everybody get that? OK, that's not psychology. That's optics. The glasses are just filtering one image into one eye and another image into the other eye, OK? All this is is a way to get different information to your two eyes. OK, now just look with both eyes. And don't do anything. Just kind of look.
And if anything cool happens, you can kind of go, ooh, aah. Or you can tell me what's happening. Just watch. Doesn't always happen immediately. Yeah, Kwylie, what's happening? AUDIENCE: [INAUDIBLE] NANCY KANWISHER: Sorry. What's that? AUDIENCE: Every time I blink, it changes from the house to the face. NANCY KANWISHER: Aha. OK, it changes from the house to the face only when you blink. Yeah, Kwylie? AUDIENCE: So at first when I first put it on, I was like I saw the house and the face superimposed. And then you told us put one eye. And then I opened both eyes, and it went from the house to the face. NANCY KANWISHER: Uh huh. Uh huh. And then did it stay there? Or does it keep flipping? OK, if this is not working for you, don't panic. There's nothing wrong with you. It's hard to get everybody's red-green balance the same. Yeah. Sorry. Question? Shardul, yeah? AUDIENCE: [INAUDIBLE] NANCY KANWISHER: So I share your intuition. I think I can choose too. But there's actually a whole literature on that. And the guy I was collaborating with on this project, 20 years ago, insists that, actually, that's a wrong intuition. You can't choose. And I said, oh, the hell. The hell I can't. I'm in the scanner being your best subject ever because I'm switching exactly when you want me to switch. Don't tell me I can't switch. He said, no, the literature says you don't. Anyway, I don't know the latest on that. Yeah? AUDIENCE: Kind of related, it feels a lot harder to see the house than the face. NANCY KANWISHER: Yeah. AUDIENCE: I can see the face a lot easier than switching. NANCY KANWISHER: Yes. It's not optimally set up, so that it's perfect for each. Really the reason I do this is to amuse myself looking at all of you with your-- thank you. OK, so you can keep looking or not, but I'm going to tell you what's going on here. The cool thing about this is when-- if it didn't switch for you, the person-- is there anybody for whom it didn't switch? One. 
OK, so maybe your color vision or who knows what. But anyway. OK, if it doesn't switch, there's nothing wrong with you. The percept everyone else has is it just flips every few seconds from one to the other. OK. So what's cool about this is, when it switches, nothing changes on your retina. Only your state of awareness changes. And that gives us a wonderful lever to study what goes on in the brain when awareness switches, unconfounded from the stimulus. OK? It's like varying attention, but it's more powerful. So of course, we put people in the scanner. We put mostly me and a few others. But anyway. We popped ourselves in the scanner. We just taped these things on our forehead, nice low-tech experiment. And we looked at that exact stimulus. OK? And we scanned the brain while people sat there watching the stimulus flip back and forth. OK? So this is the stimulus for the whole experiment. This is the percept. It switches back and forth. The subject has a little button box, so they can say, now I see the face. Now I see the house. Now, of course, the choice of a face and a house was not random. Why did we choose a face and a house? So that we could look at the response in the FFA and the PPA. FFA loves faces, hates houses. PPA loves houses, hates faces. Perfect. Right? So the question is, are they going to switch when your percept switches, even though nothing's changing on your retina? What do you guys think? Switch? Switch? How many people think it's going to switch? How many people think it's not going to switch? A few. OK, all right, so that's what we did. Here is now the raw MRI time course averaged over the FFA and averaged over the PPA for one five-minute or three-minute experiment in one subject. Probably me. I forget. Again, the stimulus is the same the whole time. The letters are the times when the subject pressed the button. And you can sort of get a sense that maybe here's a phase where the FFA response goes up.
Here again the house goes up. But it's kind of hard to tell. So really, the way to analyze these data is to take all the face to house flips, right, and align them and signal average to get rid of noise. OK? So we can look at, what happens before and after a switch from face to house? And what happens in the opposite direction switch? Everybody get the idea? It's just a way to clean up the noise here. And here's what happens. This is time zero. The subject presses a button saying that their percept has flipped between a house to a face. The response in the FFA shoots up. And the response in the PPA shoots down. Isn't that cool? OK, now why are these peaks and valleys after the button press? Yeah. Evan, right. There's a delayed signal, right? OK? This is harder. Why do they come back together out there? Yeah, because it flips back. It flips back. And all we've done is signal average to one button press. But there are different durations of percepts. Sometimes it lasts three seconds. Sometimes it lasts 10 seconds, everything in between. By the time you're 12 seconds out, most people have flipped again. That's why it goes back. OK, make sense? Yeah? AUDIENCE: [INAUDIBLE] NANCY KANWISHER: I don't know. And I'm not sure that that was generally true. I hadn't noticed that before. But we should scrutinize the images. I can't actually remember if this was averaged over subjects or if this is one subject. I suspect that's a fluke. But we could look at it and see. OK, so the point of all of this is that these regions, the face area and the place area, care about what you are experiencing, unconfounded from what's hitting your retina, right? They are reflecting the contents of your awareness, not just what's coming into your eyes, which is cool. OK, so that shows that these regions kind of track awareness. OK? But can we find any evidence for perception without awareness using neuroimaging? OK? 
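The signal-averaging trick just described-- cut a window around every button press, stack the windows, and average so that uncorrelated noise cancels while the percept-locked response survives-- can be sketched in a few lines. All names, window sizes, and event times below are made up for illustration; this is not the actual analysis code.

```python
import numpy as np

def event_locked_average(signal, event_indices, pre=4, post=12):
    """Average the signal in a window [-pre, +post) samples around each event,
    skipping events whose window would run off either end of the recording."""
    windows = [signal[i - pre:i + post]
               for i in event_indices
               if i - pre >= 0 and i + post <= len(signal)]
    return np.mean(windows, axis=0)

# Toy time course: a delayed bump after each "I see the face" button press,
# buried in noise (the bump stands in for the sluggish hemodynamic response).
rng = np.random.default_rng(1)
signal = rng.normal(0.0, 0.5, 300)
events = [40, 100, 170, 240]
for e in events:
    signal[e:e + 6] += 2.0  # response rises after the press
avg = event_locked_average(signal, events)
# avg[:4] is the pre-press baseline; avg[4:] shows the averaged response
```

With only four events the noise is already visibly reduced; with dozens of flips per scan, the averaged curve is what produces the clean rise and fall around the button press.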
So to make this point, I'm going to accelerate slightly, but I think there's just barely time. There are lots and lots of studies of this genre. And I'll tell you about one. So I'm going to show you. I'm going to flash up a very rapid series of digits. You have to get ready to write stuff down. Your task is to see if there are any letters among the digits. There might be zero. There might be one. There might be a few. Write down any letters you see. It's going to go by really fast. OK, everyone ready? OK. Here we go. OK, write down any letters you may have seen. OK? All right, let's do another one. Everyone ready? OK, write down any letters you may have seen. OK? OK, raise your hand if you saw both the A and the P in the first sequence. A few of you. Less than half. Raise your hand if you saw both the X and the H in the second. Almost everyone. All right, well, I cheated slightly. But never mind how I cheated. The way you cheat in demos-- you guys will be teaching someday. You have the one you want people to get wrong first. It's a total cheat. So it's slightly cheated. But if you do it right-- it didn't work last year. That's why I cheated. Anyway, even if you do it right in the lab, you get a really strong effect. There was only one digit between the A and the P in the first sequence. And it's like your brain is still dealing with the A when the P comes along. You just don't even see the P. You're looking for letters. You don't see it. Whereas there were three digits between the X and the H in the second sequence. And just to show you some real data. So this has been done lots of different ways. Here's an example of this experiment. And this is how often people get the second letter when they don't have to report the first. Well, the first one's colored, right? So you either have to report the colored letter and detect the letter after it or just detect the letter after the colored thing. So if you don't have to report the first one, there's no dip in accuracy.
Oh, sorry. This is a function of the distance between the two items in the sequence. But if you do, there's a big dip. That's maximal with one intervening item. Yeah? OK, well, here it says two or three. But anyway. OK, so that's called the attentional blink. And the idea is there isn't a physical blink of the eyes. But your attention system is tied up processing the first target. And it doesn't get the second one, OK? So there are dozens, probably hundreds of papers on this phenomenon. It's pretty cool. But for present purposes, we're going to use it to say, does that unseen second target get into the brain? Presumably, it lands on the retina because you didn't blink. So it could probably get there. Probably got to V1. How far up the system did it get? Can you think of how we might test this using functional MRI. How would we see how far those stimuli go in an experiment like this? What would we use? What would we measure the MRI response to? How could we design a version of this that you could do with functional MRI? We need some MRI response, where, if we get that response, we know that it's a response to a given stimulus. What would we measure? And what stimuli would we use? AUDIENCE: Well, we could use faces or something. NANCY KANWISHER: Yeah. It's always the same. Yeah. Faces and houses. Lots of ways to do this. But it's not even my experiment, and they used faces and houses. OK, so they did a version of that very same thing. This is a bunch of garbage flashing on. Early on, there's a face. The subjects have to say which of those three faces they've studied it is. And then after that, a scene comes on, or it doesn't come on. OK, they have to just say, was there a scene? First of all, which face was it? And was there a scene afterwards? OK, so it's a variant of the thing you just did. So here's the behavioral data. If you don't have to report the face, then you are very accurate reporting the scene. 
If you do have to report the face, then you're very bad at it if the scene comes right after. But if there's a big interval in between, you do OK with the scene. Same thing we saw before but now with faces and scenes. OK, that's the behavioral data. What do you see in the PPA, right? The second target is always a scene. So what you see in the PPA, here's a response in the PPA to that second scene if it's a hit. That is if you detect it, right? You correctly say, yes, there was a scene in that trial. Here's a response with a correct reject. There was no scene in that trial. And you correctly said there was not. OK, so that difference shows you the response in the PPA to a scene that you've seen, that you detected consciously. But here's a critical case. I don't know why there's a little pop up there. Anyway, this is a case where there was a scene, but you've said there wasn't. OK? That's called a miss. You missed it. It was there, and you didn't see it. And what it shows you is two cool things. Well, first of all, it tells you that the more aware you are of the stimulus, the stronger response. But the crucial thing it shows you is this difference right here, which is significant. This is a kind of perception without awareness. You did not detect the scene. You said there was no scene. But your PPA detected the difference. Everybody get that? So we have evidence for perception by the PPA at least of a scene without awareness. That make sense? So there are many, many studies like this that kind of take it every which way. This is just one little example of how you can use neuroimaging to ask, how far up the system does an unseen stimulus get? Make sense? OK, so we are about out of time. 
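The trial sorting behind those three curves follows the standard signal-detection categories: a hit is a scene that was present and reported, a miss is a scene that was present but not reported, and a correct reject is a trial with no scene and no report. A hedged sketch of the bookkeeping, with made-up PPA response values chosen only to mirror the ordering described above:

```python
import numpy as np

def sort_trials(scene_present, said_yes, responses):
    """Group per-trial responses into signal-detection categories and
    return the mean response for each non-empty category."""
    cats = {"hit": [], "miss": [], "correct_reject": []}
    for present, yes, resp in zip(scene_present, said_yes, responses):
        if present and yes:
            cats["hit"].append(resp)
        elif present and not yes:
            cats["miss"].append(resp)
        elif not present and not yes:
            cats["correct_reject"].append(resp)
    return {k: float(np.mean(v)) for k, v in cats.items() if v}

# Toy data: six trials. The made-up PPA values follow the lecture's pattern:
# hits > misses > correct rejects, with misses still above the no-scene baseline.
present  = [True, True, True, True, False, False]
said_yes = [True, True, False, False, False, False]
ppa      = [1.0, 0.9, 0.5, 0.4, 0.1, 0.0]
means = sort_trials(present, said_yes, ppa)
print(means)
```

The key comparison is miss versus correct reject: both get a "no scene" response from the subject, so any difference between them is a response to a stimulus the subject never consciously saw.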
And I just gave you a little taste of some of the work on neural correlates of awareness, showed you, with binocular rivalry, a neural response that's correlated with awareness unconfounded from the stimulus, uncoupling, again, perceptual awareness from what's hitting your retina, and, in the case of the attentional blink, some evidence for perceptual representation without awareness. OK?
MIT 9.13 The Human Brain, Spring 2019. Lecture 1: Introduction to the Human Brain.

[SQUEAKING] [RUSTLING] [CLICKING] NANCY KANWISHER: All right. It's 11:05. I'm going to try to start promptly at 11:05 each time. So welcome. Is everybody psyched? I'm psyched. This is 9.13, The Human Brain. I'm Nancy Kanwisher. I'm the prof for this class. And lest you were wondering, I have a brain, and there it is. That's me, with some bits colored in that you will learn about in this class. OK. What I'm going to do today is I'm going to tell you a brief story for around 10 minutes. And then I'm going to talk about the why, how, and what of studying the human brain: why it's a cool thing to do, how you do it, and what in particular we're going to learn about in here. And then we'll do some mechanics and details of the course, and allocation of grades, and all that. It's on the syllabus anyway. Cool? That's the agenda. All right. So let's start with that story. And for this, I'm going to sit up here. The story isn't that long, but it has a lot of interesting little weird bits. So I have cue cards to remind myself of all the bits I want to remember to say. So you can put away your phones and your computers. And you don't need to take notes. This is just a story. It's going to foreshadow a lot of the themes in the course, but it's not stuff you're going to be tested on. OK. So this is a true story, and I've changed only a few tiny little bits to protect the identity of the people involved. But otherwise, it's an absolutely true story. It's a story about a scary medical situation that happened to a friend of mine a few years ago. But at the same time, it's a story about the nature of the human mind, about the organization of the human brain. And it's also a story about the ability or lack thereof to recover after brain damage.
It's also incidentally a story about resilience, privilege, expertise, and all of those things that are characteristic of many people in Cambridge society, not so relevant for the course, but, all right, here goes. So a few years ago, a friend of mine was staying over at my house in Cambridge en route to a conference in a nearby state. And this guy, I'll call him Bob, was a close friend of mine. I'd known him for years and years. We talked regularly. We went on hiking trips together. We were pretty close. So he's en route to this conference. He's staying over at my house the night before. The plan was for him to get up early the next morning and drive to the conference. So we hung out the night before and chatted. And the next morning, he's sleeping in the next room over from mine. And early in the morning, I hear some shuffling. I think, yep, OK, Bob is packing to leave, and thank God, I don't need to get up. I'm only dimly awake. And so I'm not paying that much attention. Shuffle, shuffle, shuffle in the background. And then I hear a crash. And I think, what the hell is that? And I get up and I go into the next room. And Bob is lying on the floor, not moving. I say, Bob, and there's no answer. And then I shout, Bob, and there's no answer. And then I dialed 911. While we were sitting there waiting for the ambulance to arrive, Bob starts to wake up. And he's very woozy, but he's alive. And he's making a little bit of sense. And he can't figure out what's going on, and neither can I. And so we're talking and chatting, and he's making a little more sense, but we still don't know what's happening. So then the ambulance arrives incredibly fast. I felt like three minutes, boom. There's three EMTs rushing in the front door, rushing up to the room where Bob was. And they take all his vitals. And they can't find anything wrong. And so they're really casual. I guess they confront stuff like this all the time. I don't. Bob doesn't, but they're very calm about it.
And they're saying, well, go take him to the hospital or not. And I was like, I think we need to know what just happened, even though he seems OK. We kind of need to know what this is all about. Don't you think? They're like, yeah, you could take him to the ER. And I said, well, do we need to waste ambulance resources, or do you think it's safe if I drive him myself, since there's a hospital not far away? They say you could drive him yourself. So I drive Bob to the Mount Auburn Hospital ER, which is like less than a mile from my house. And we do the usual ER thing, which is mostly waiting, and waiting, and waiting, but various docs come by. And they run all these tests. And they ask all these history questions, and it goes on and on. And basically, they're just not finding anything. So after about an hour or two of this, they're still doing tests. They don't want to quite let him go yet, because they don't know what happened. Everybody's calm about it. I figure, OK, fine, I got work to do. And I tell Bob, text me throughout the day, and I'll come get you whenever they're ready to release you. And so I go into work, but just before I go into work, a thought flashes through my mind, and I say to the ER doc, you should check Bob's brain. And the reason that thought flashed through my mind is that actually I had been worrying about Bob for a number of years. And I hadn't really-- it hadn't quite registered consciously. It was kind of too horrifying a thought for me to really allow myself to realize I was worried about Bob's brain, but I was worried about a very particular thing, and that is that Bob had been showing these weird signs that he often got lost and didn't know where he was. And on the one hand, this just didn't make any sense, because he was fine in every other way, but it was really pretty striking. So one time, I was over at Bob's house with some other friends of ours.
And the friend asked, Bob, how do we get-- how do I drive from your house into Cambridge? And Bob said, well, you go to the end of the driveway, and you turn left. My friend and I looked at each other like, Bob, what? And Bob thinks about it for a minute, yeah, end of the driveway, turn left. I just had this like sinking feeling of dread in the pit of my stomach, but we sort of made light of it, and made fun of it, and it went by. It was like, no, you turn right, and we gave the directions. Another time a friend of mine was driving with Bob in Bob's hometown. And noticed that, like, Bob didn't seem to know how to get to the grocery store in his hometown, where he'd lived for a really long time, a trip he'd made hundreds of times. Another time, I was at a conference in Germany. And I saw there are these arrays of posters of people presenting usually pretty dry scientific things. And out of the corner of my eye I see the title of a poster, and it says Navigational Deficits: An Early Sign of Alzheimer's. And I saw that, and I just went, ah, and I just kind of suppressed the thought. I thought, oh my God, Bob wasn't that old. I know Alzheimer's can very rarely strike early. I didn't want to think about it, but it was like rattling around in the back of my consciousness. So there had been these signs, but as I say, it didn't make sense, because Bob was holding down a very high-powered job. He was writing beautiful prose. He was the life of every party he was at, witty, funny, everybody's favorite life of the party. So how could that be? It just didn't make sense that there would be anything wrong with Bob's brain. So I managed for a few years to notice these signs and ignore them and not pay any attention. So the killer thing is, I should have known better. My research for the last 20 years has been on the very fact that there are different parts of the brain that do different things.
And one of the corollaries of that is you can have a problem with one of those parts and the other parts can work just fine. And so I, if anyone, should have realized, yes, there's something really wrong with Bob's navigation abilities. And the fact that he's smart, and witty, and funny, and holding down a high-powered job doesn't mean there isn't something wrong with his brain, with a part of his brain. But I didn't realize that. But then, as I'm leaving the ER, it kind of all clicked. And I said to the ER doc, you better check his brain. I thought Bob was out of earshot when I said that. He heard it. He's like, what? I was like, oh, never mind. Anyway, the ER doc, with the kind of confidence that only docs can muster, said, no, not a brain thing. This is a heart thing, which wasn't exactly reassuring, but I set aside the brain thought. And I went off to work. So throughout the day, I texted with Bob a few times. Things seemed to be fine. They'd done more tests. They weren't finding anything. We just got calmer and calmer about it. I guess sometimes weird stuff happens, and you just move on. But then that night, around 7:00 or 8:00 at night, I was over at a friend's house, and the phone rang. And it was Bob. I picked it up, and Bob says, get over here. They found something in my brain. So I ran out of the house, grabbed my phone. And as I'm driving to the Mount Auburn ER, I called my trusty lab tech, an amazing guy, who keeps track of all kinds of things much better than I do, and I said, I remember that we scanned Bob a bunch of years ago for a regular experiment in my lab. And I don't remember the date. I don't remember anything about it, but dig around in the files and see if you can figure it out. It might be useful to have that scan. So by the time I get to the ER, my lab tech has already texted me back and said, found the scans. I'm putting them in a Dropbox for you. So I go into the ER, and there's Bob and the ER doc. And Bob says to me, do you want to see it?
The ER doc or the radiologist has already shown Bob the picture of his brain. And so they take me in there. And I look at it. And I gulped. There was a thing the size of a lime smack in the middle of his brain. Pretty terrifying. So this lime in the middle of Bob's brain was right next to a region that my lab had studied in great detail. In fact, my lab had discovered that a brain region right next to where that lime was located was specifically involved in navigation. How could I not have put all this together? But I didn't until that moment, when I thought, of course, of course, there's a thing in his brain right next to the parahippocampal place area, which I discovered, and a nearby related region called retrosplenial cortex. Of course. And how the hell could I not have known? But I didn't know. In that earlier work-- it had been nearly 20 years ago-- I had a postdoc named Russell Epstein. And Russell was a computer vision guy. And he wanted to understand how we see by writing code to duplicate the algorithms that he thought go on in the human brain when we understand visual images. And that's a very respectable, cool line of work, which we'll learn a little bit about in here. And Russell was really a coding guy. At the time, we were just starting to do brain imaging, but Russell was like pooh-poohing it all. It's like a flash in the pan. It's going to go by. It's trashy. So you guys get nice blobs on the brain. I'm not having any of it. And I kept saying, Russell, you need to get a job. Just do one experiment so you can show in your job talk that you can do brain imaging. It might help you. You don't need to do a lot of it. Just do one dumb experiment. Russell was interested in how we recognize scenes, not just objects, and faces, and words, but how do we know where we are, and how do we recognize the scene as a city, or a beach, or whatever it is? I said, OK, Russell, we'll just scan people, looking at pictures of scenes, and looking at other kinds of pictures.
And we'll just kind of see if there's any part of the brain that responds a lot to scenes. It really was not well thought out. This is not how you should do an experiment. It shouldn't be based on political calculations, lack of theory, any of the above. But the fact is that's why we did that experiment. Russell needed to be able to show a brain image in his job talk. So we scanned some people looking at scenes. And the results knocked our socks off. We found a part of the brain that responds very selectively when you look at images of scenes, not when you look at faces, objects, words, or pretty much anything else. And so we'll learn more about that later in the course. We called it the parahippocampal place area. And that launched a whole major line of work in my lab and now dozens of other labs around the world. Backtrack-- we'd already found that region. And here's this lime in my friend Bob's brain, sitting right next to the parahippocampal place area. Then I remembered, let's look at the scans from my lab from a few years ago in Bob's brain. I fiddled around and managed to download the files. And there it was. You could see that same blob. But in the scans from a few years before, it was much smaller. It was the size of a grape. That told us a bunch of things. Most importantly, it told us this thing is growing really slowly. And that was hugely important, because brain tumors are very bad news. And they usually grow really fast. And the fact that it grew really slowly told us that this was not one of the kind of worst, most invasive, most horrible ones. It was clearly a problem. It was big. But at least it wasn't growing hugely fast. But how poignant that there it was, in my own damn data, and I hadn't seen it in my friend's brain. Well, I'm not a radiologist. I'm a basic researcher. And I didn't look, and I didn't see it. Indeed, the next day, the docs told us that they thought this was a meningioma, not cancer.
Who knew that you could have tumors that weren't cancer? But you can. And they still need to come out, if they're big enough. And that's very serious. But it's not as bad as having a cancer in your brain. As we're collecting information, the next day, I'm hanging out in the hospital room. And there was an amusing moment when one of the residents came by. And he's taking the history and asking all of the basic questions. And I said kind of sheepishly-- because you don't want to seem like you know more than the residents. And in fact, I didn't really know more, but I just thought I'd provide a little information. And I said, he's actually had symptoms for a bunch of years, and there's a region of the brain nearby that I've actually studied a little bit. And the resident says, like, we know who you are. So much for my trying to stay under the radar. That afternoon, I talked to a neurosurgeon friend of mine, because I figured, OK, we need advice. We need help. And the neurosurgeon friend said, quote-- it got branded in my brain-- she said, "it is of paramount importance that you find the best neurosurgeon. It's the difference between whether Bob dies on the table or goes on to live a normal life." This is the privilege part of the story. I'm not that well connected, but I'm a little bit connected. And I kind of dug around, and did what I could. And we spent a couple of weeks, and we found the best neurosurgeon. And the night before the surgery, Bob is staying over at my house, because the surgery was in a Boston hospital. And I thought, I've been dancing around this for years, but now it's all out in the open. We know there's a problem. And I'm going to test him. I'm going to find out what the hell's going on. This is, after all, one of the basic forms of data that we collect in my field-- that is, testing people with problems in their brain to try to figure out what things they can do and what things they can't do. 
It's a way of figuring out what the basic components of the mind and brain are. It's actually the oldest, most venerable method in our field, and it's still a hugely important one. So I thought, what the hell. So I said, OK, Bob, draw me a sketch map of the floor plan of your house. Bob takes a few minutes and he draws this thing. And it was shocking. There weren't even-- the rooms in a rectilinearly arranged house, they weren't even aligned. There was, like, a soup of lines. There was no organization from one room to the next. And Bob kind of realized, this isn't right, is it? But he didn't know how to fix it. And he said he just couldn't visualize what it looked like to be in his house, and so he couldn't draw the floor plan. And I thought, OK, he hasn't been there in a couple of days. So I gave him another piece of paper and I said, OK, draw the floor plan of my house, where you are right now. Bob took a couple of minutes and delivered a similar mess. He couldn't even imagine the layout of the room next to him, that he'd been in a few minutes before. And then, trying to channel my inner neuropsychologist, I thought OK. Gave him another piece of paper and I said, OK, Bob, draw a bicycle. Why did I choose a bicycle? Because it's a multi-part object that has a bunch of different bits that have a particular relationship to each other, just as the rooms in a house have a particular spatial relationship to each other. And I wanted to know, is his problem specifically about places, or is it about any complex, multi-part thing that you have to remember the relationships to? Bob is no artist, to put it mildly. But his bicycle was clearly recognizable as a bicycle. It had the two wheels in the right relationship, and it had all of the basic parts in roughly the right place. I then had him draw a lobster, another multi-part object. And also, his lobster was not beautiful, but had everything in the right place. 
That's very telling: he had a specific problem in-- I don't know-- imagining, reproducing, remembering? It's not totally clear-- the arrangement of parts in a room, but not the arrangement of parts in an object. And we'll get back to that more in a few weeks. What do I want to say here? I said all of that. The next day, Bob has an 11-hour surgery. Major, hardcore, extreme neurosurgery. Remove a huge piece of bone from the back of your head, pull apart the hemispheres of the brain like this, go in multiple inches and remove a lime. Holy crap, right? Said lime was right near the vein of Galen. Galen lived, what, a couple of thousand years ago? The fact that there's a vein of Galen means it's a big-ass vein-- the kind of vein that even Galen would have found with dissection 2,000 years ago. This lime was all wrapped around and interleaved with the vein of Galen. Not good. But because we found the best neurosurgeon, and because we have extreme privilege and all of the possible medical resources and expertise you could possibly hope for, Bob sailed through the surgery. And an hour after the surgery, I'm chatting with him and he's making sense. Amazing, right? And literally, two days later, they sent him home. And a few days after that, he's back at work. No problem. Totally fine. But now we get to the question you're probably thinking about. What about his navigational abilities? The sad answer is, nothing doing. None of it came back at all. Thank god for iPhones. If Bob had lived 30 years ago, he wouldn't be able to function. But he goes everywhere using his iPhone GPS-- everywhere. And this fact that he didn't recover his navigational abilities is consistent with the whole literature that we'll consider later in the course-- that, often-- not always, but often, if you have brain damage, especially to some of these very specialized circuits that we'll talk about, you don't recover later. If the damage is early, you may well recover-- early in life, you may well recover.
Children have much more plastic brains that can adjust after brain damage. Adults, not so good. Bob's doing fine. That's my story. Any thoughts or questions? Yeah? AUDIENCE: Can he tell the difference between right and left [INAUDIBLE]? NANCY KANWISHER: Yes. Yes. And it's very interesting. There are many of his spatial abilities that are absolutely intact, and yet the ones related to navigation are not. Yeah? AUDIENCE: Can he drive? NANCY KANWISHER: Yeah, no problem. But he's always looking at his damn phone to get directions, or to listen to the GPS directions system. Driving is no problem. It's another kind of left-right-- the immediate spatial orientation abilities are absolutely fine. But knowing, where am I now, and how would I get there from here, is blitzed. Other questions? Yeah? AUDIENCE: Can he recognize familiar places? NANCY KANWISHER: Great question. Yes, he can recognize familiar places. What he can do is say, oh, right, that's the front of our house, or that's such-and-such cafe that's near our house. What he can't do is say, which way would you turn from there to go home? AUDIENCE: Can he string together multiple [INAUDIBLE]? NANCY KANWISHER: Great question. Great question. A little bit. He can navigate a little bit with his GPS. And because he's learned certain routes as a series of almost verbal commands-- if you're here, turn right, then there, nur, nur, nur, nur. That whole kind of thing. It's not what any of you guys could do. If you guys are driving around in Cambridge or walking around campus-- remember when they blocked off this whole middle of campus a couple of years ago? It was so irritating. I would like go there, and it's like, oh god, they've blocked it off. I can't get over to lobby 7. Well, you immediately come up with an alternate route. It's like, OK, I guess we're going to have to do this. You come up with an alternate route. This is what a normal navigation system can do. Bob can't do that at all.
He's like, route blocked? No idea. Get out the phone. Yeah? AUDIENCE: Is he good at estimating distances? Does he know something is a certain number of miles away, or? NANCY KANWISHER: Yes. Yes, he is. And that's very interesting. But that seems to be kind of a different thing. You could think about all of the different kinds of cues you have for distance beyond your kind of literal navigation skills. Yeah? AUDIENCE: [INAUDIBLE]? NANCY KANWISHER: A little bit. A couple of minutes, yes. The next day-- I mean, it would be kind of like this thing. It's like, I sort of vaguely remember that when I was here, I turned right, so I'd better do that again. Yes, did you have a question? AUDIENCE: Can he navigate within buildings? NANCY KANWISHER: No, not very well. And this is a problem, because iPhones don't usually-- yeah. New hotels, big problem. Finding the bathroom down the hall, or the front door in a hotel, big problem. Yeah. I mean, these are problems you can-- you can come up with workarounds. It's not life-threatening, but it's extremely inconvenient. Yeah? AUDIENCE: Is it the case that those navigational skills that developed long-term, like a long time ago, are stronger? So he has a harder time developing-- for example, you said new hotels are a problem. But if it is places that are more familiar, like his home, is it easier for him to navigate? NANCY KANWISHER: It's a great question. And you might think that the kind of navigational maps you laid down long ago would be intact. So is it just that you can't learn new ones? It's a great question. The answer is kind of complicated in this case. For routes that he's memorized-- there's a whole different system for knowing a route versus really having an abstract knowledge of a place that enables you to devise a new route if something is blocked on that route. For highly over-learned routes, he's OK. He remembers the [INAUDIBLE]. It's like a memorized motor sequence. You do A, and then B, and then C, and then D.
He's OK with those, with routes he learned long ago. But he is not good at coming up with a new route in a place that he learned long ago. We'll take one last question. AUDIENCE: Does he have conscious access to past knowledge that [INAUDIBLE] And does he have conscious knowledge that [INAUDIBLE]? NANCY KANWISHER: No, he knows-- well, he knows, because when he tries to figure out which way to head, he has no idea. He's extremely aware of it, and very articulate on precisely what happens. What he says is, if he's looking at a place-- here's something he says. He's looking at a place. He knows where he is, because there's all kinds of other bits of information that tell you where you are, because you intended to go there, and the relevant things are happening, and all. So he knows where he is. And it looks familiar. If he tries to imagine what's behind him, he says that he starts to get it and it just kind of vaporizes. He just can't hang on to it. He can't kind of construct a stable mental image of nearby places. I don't know exactly what that means, but he's very articulate, and can report what happens-- what he experiences when he tries to access this kind of information. What you guys-- we'll go on. But what I want to say is, what you guys just did is exactly what we do in my field. We try to take a mental ability and tease it apart and say, is it exactly this or is it exactly that? And you guys all just did it beautifully. A lot of what we do in my field is kind of this common-sense parsing of mental abilities. What is a particular mental ability-- how does it relate to some other one? Are these things separable? Can you lose one and not the other? Do they live in different parts of the brain, et cetera? All right. That's the story. I'm going to cash out some of the particular themes that came out from the story that will echo through this course. And the first and most obvious one is, the brain isn't just a big bunch of mush. It has structure.
It has organization. The different bits do different things. Importantly, when Bob had this big lime in his head, he didn't just get a little bit stupid. No. His IQ, if he'd taken an IQ test, would be unchanged. He lost a very specific mental ability. And that is fascinating, but it's also good news for science. Because often, when you try to understand a complicated thing, a great way to make progress is to first figure out what the parts are, and then later try to figure out, how does each individual part work and how do they work together? But if there's part structure, there's at least a place to start. Second theme is that some parts of the brain do extremely specific things. Not all of them. Some of them are quite general, and are engaged in lots of different mental processes. But some are remarkably specific. We'll talk a lot about that. Third big theme. The organization of the brain echoes the architecture of the mind. And I would say, the fundamental pieces of the brain are telling us what are fundamental parts of the mind. And that's why I'm in this field. That's what I think is cool. The brain is just a bunch of cells. It's a physical thing. Who cares about a physical thing? The reason we care about it is, that's where our mind lives. And if we study that physical thing, we can learn something about our minds. And that's pretty cosmic, I think. The point of all of this kind of work is not to say, oh, that mental process is here, not there. Who cares? I don't really care. I mean, at some point, you need to have a ballpark sense. You need to know where things are to study them. But the interesting question is not where these things are in the brain, but which mental processes have their own specialized machinery, and why those? Another important theme. How do brains change? Bob didn't recover after his brain damage, in that very particular mental function that he lost. If all of that had happened when he was five years old, he probably would have.
How do brains change over normal development? How do they change from learning and experience? How do they change after injury? And the final theme echoed in that story is, there are lots and lots of different ways to study the brain. There are the simple behavioral observations. Bob can't navigate, but he can do everything else. OK, that's really deep and informative-- low-tech, but really powerful. The anatomical brain images that showed where the lime was in Bob's brain, that gives you another kind of information. What's the physical structure of the brain? The functional images that we had done in my lab to discover the parahippocampal place area, and the studies of what mental abilities are preserved and which are lost in people who have alterations of their brain. Those are just a few of the kinds of methods in our field, each of which tells us about a different kind of thing about the brain. Those are the themes I was trying to get at here. Let's move on to the why, how, and what of exploring the brain. I'm going to assign the TAs to get me to shut up at-- let's see. We're supposed to end at 5 minutes before the end of class, is that right? Is that the MIT tradition? OK, so at-- oh, my, shockingly soon-- 11:45, you're going to-- AUDIENCE: [INAUDIBLE] NANCY KANWISHER: Oh, great, thank you. Thank you. This is one of the many things TAs are for. They pick up the hundreds of typos and "mindos" and all of that. Excellent. I'm thinking, how the hell did I so mis-time this? Thank you, Heather. OK, good. We'll go on. Why should we study the brain? First, most obvious reason, know thyself. Know what this thing is that's operating in our heads. This is who you are, is your brain. There are lots of very fine and important organs in the body, but the brain is special. So, a heart is important. You'd die without it. But it's the brain that's your identity. There's a reason that surgeons do heart transplants. That makes sense. 
Something wrong with your heart, you need another heart, OK. But why don't they do brain transplants? That wouldn't make sense. If there's something wrong with my brain, it doesn't make sense to take someone else's brain and put it in here, because then I'd be that other person. It doesn't make sense, because the brain is who you are. So the brain is really special. It's not just another organ. That's why, a few years ago, we had the decade of the brain-- not the decade of the pancreas, or the liver, or the kidney. People need to study these things. They need to know how to fix them. They're important. But they're not as cosmic as the brain. Second reason why we should understand brains, and that is to understand the limits of human knowledge. The more we understand about the human mind, the more we can actually evaluate how good our knowledge is. Are there things that we might not be able to think? Are there possibly scientific theories we might not be able to understand, ever? You can think of studying the mind as a kind of empirical epistemology, a way to actually know about the knower so we can figure out how good the knowledge is in that knower. That's another reason. A third reason is to advance AI. Up until a few years ago, I used to give lectures on vision, and they would all start with some version of this. You guys all have amazing visual abilities in the back of your brain that does vision. You can do all of this incredible stuff that no machine can touch. Hats off to you. You have an amazing visual system back here, and those guys in AI-- it is mostly guys. Guys, gals, whatever. Those people in AI could only dream of coming up with algorithms as good as the one that's running in the back of your head. You can't quite start the lectures that way anymore. If any of you have been living in a cave and not heard about deep nets, there's been a massive revolution.
And all of a sudden, deep nets are doing things that are really close to human abilities, particularly in vision. For example, in visual object recognition, machines were way far behind human vision until very recently, especially when this paper here came out-- was published in 2012. First author, Krizhevsky. It has now been cited an astonishing 33,000 times. Actually made this slide a couple of weeks ago. It's probably been cited 36,000 times by now. You could look it up on Google Scholar and find out. That is a huge number of citations. The influence of this paper is ginormous. Probably half of you have already heard about this paper. Raise your hand if you've heard about this paper. Oh, OK. All right. Major, big news. What's so important about this paper? Well, they trained-- as, probably, most of you know-- they trained a deep net on the over 1 million images in ImageNet, a massive computer database of images. And they basically taught it to do object recognition. And it performed much more accurately than any previous system, and it approaches human abilities. This is major. This is a radical change in the situation that we were in five years ago. Things have changed radically. Just as an example, here's one of the figures from that seminal paper. Here is one of the images from ImageNet that AlexNet, this trained network, was tested on. And the correct answer, according to ImageNet, is that that's a mite. And here's what AlexNet says. Its number one first answer is mite, and its second, third, fourth answers are black widow, cockroach, et cetera. Pretty damn good. The mite is even sticking off the edge of the frame, and it gets it. Container ship. First choice, container ship. Pretty good. Second choice makes sense. Lifeboat. Not bad. Look at that-- motor scooter. I can barely even see the motor scooter in there, but AlexNet, awesome. Right? Leopard. Awesome. Even when AlexNet makes a mistake, the mistake is totally understandable. 
Like, according to ImageNet, that is a picture of a grille, and AlexNet calls it a convertible. I'm siding with AlexNet on this one. This, the correct answer is mushroom, and AlexNet says agaric. I had to look that up. It's a particular kind of mushroom. This one's pretty funny. ImageNet says that's pictures of cherry. There's cherries in the foreground. But AlexNet says dalmatian. I'm siding with AlexNet on this. And Madagascar cat, et cetera. Pretty amazing. And nothing even close to this was possible before 2012. This is very recent history, and it has totally shaken up the field in lots of ways. That's been transformative not just for computer science, but it's also been transformative for cognitive science and neuroscience. Because now, we have algorithms-- like, here's this deep net, and it does this thing. That's a possible theory of how humans do it. It's a possible, computationally precise theory of what's going on in here. And we didn't use to have those, and now we have those for a number of domains. And that's shaking up the field. There will be a whole lecture on deep nets and how you can use them to think about minds and brains toward the end of the course-- guest lecture by my postdoc Katharina Dobbs. And we'll hear more about that. But let's first step back a second and say, OK, do they really perform as well as humans, even on just object recognition? Well, what if we tested it on images not in ImageNet? ImageNet is a pretty good test because these things, as you can see, are highly variable. They have backgrounds. They're complicated. They're real-world images. But they were photographs taken by people in a particular way, with a particular goal. And most of the photographs you take, you throw out. They don't end up in ImageNet. ImageNet is a weird little idiosyncratic subset of the kind of visual experience that we have. So would this really generalize? 
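[A side note on what the "top-5 answers" above actually are: a network like AlexNet ends in a 1000-way output, a softmax converts those raw scores (logits) into probabilities, and the top-5 answers are just the five highest-probability classes. Here is a minimal sketch in plain Python; the class labels and logit values are made up for illustration, not taken from a real network:]

```python
import math

def top_k(logits, labels, k=5):
    """Return the k highest-probability (label, probability) pairs."""
    m = max(logits)  # subtract the max before exponentiating, for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]  # softmax: raw scores -> probability distribution
    ranked = sorted(zip(labels, probs), key=lambda lp: lp[1], reverse=True)
    return ranked[:k]

# Hypothetical 5-class stand-in for the real 1000-way ImageNet output.
labels = ["mite", "black widow", "cockroach", "starfish", "tick"]
logits = [4.0, 2.5, 2.0, 0.5, 1.0]
for label, p in top_k(logits, labels, k=3):
    print(f"{label}: {p:.2f}")
```

[The standard ImageNet "top-5" metric counts an image as correct if the true label appears anywhere among those five guesses, which is why the runner-up answers like "black widow" and "cockroach" are worth showing at all.]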
It so happens that Boris Katz and Andrei Barbu, across the street in CSAIL, have been doing some very interesting studies. This stuff isn't published yet, but I got their permission to tell you about this cool stuff they're doing. And they're saying, hey, let's test AlexNet and other similar deep nets since then on a more realistic, harder version of object recognition that's more characteristic of what humans do. They're generating this huge data set of stimuli that they crowdsource. Workers on Mechanical Turk go on there and create images for them. They get instructions like, hold an object in this particular location, or at this angle, or move it here, and send us the images. They are getting, I think, hundreds of thousands of images to test this on. And they're much more variable in the location of the object in the image, and its orientation, and so forth. For example, you guys have no problem telling what that thing is, but it's a slightly atypical example. Likewise, what's the object on the floor there? You can tell what it is, but it's a slightly atypical example. What Boris and Andrei are finding is that human performance is still pretty good on these images, but the deep nets are terrible at this stuff. ResNet, one of the more recent ones, drops from 71% correct on ImageNet to around 25% correct on these images. And the other similar, fancy, more recent networks do similarly badly. On the one hand, AI, the deep nets, are awesome and transformative. No question about it. But on the other hand, despite all the hype, they're still not quite like human object recognition. They're a whole lot closer than they used to be, but they're not really there. And more generally, what about harder problems, like image understanding-- not just labeling and classification, but understanding what's going on in the image? You guys have probably seen image captioning bots. There are lots of these around now.
This kind of hit the scene in 2016, when Google AI came out with a captioning algorithm. And of course, right around the same time, Microsoft had a captioning algorithm. And let's see how they do. This is an example. You give this algorithm this picture here, and it says, that's a dinosaur on top of a surfboard. That's pretty damn good, right? OK, wow. Let's look more generally, how well this thing works at other examples. It looks at this and it says, that's a group of people on the field playing football. Like, wow. OK. A snow-covered field. Pretty good. Liu Shiwen and Ding Ning posing for a picture. I don't know, but these things are very good at face recognition. That's probably exactly those two people. A car parked in a parking lot. Pretty good. A large ship in the water. Pretty good. A clock tower lit up at night. Awesome, right? A vintage photo of a pond. Well, the vintage part. I don't know where the pond is. There's a little water in there. I don't know. Not way off. A group of people that are standing in the grass near a bridge. Not really. There's grass. There's a bridge, sort of. There's people. But not really, right? A group of people standing on top of a boat. Definitely not. A building with a cake. What? A person holding a cell phone. Not. A group of stuffed animals. I love this one. A necklace made of bananas. Wow. We've really landed on Mars here. A sign sitting on the grass. Talk about missing the boat. Now, look at this picture for a second. Just figure out what's going on here. Takes a couple of seconds. Everyone got it? There's a lot going on here. This algorithm says, I think it's a group of people standing next to a man in a suit and tie. And the algorithm is correct, but the algorithm has profoundly missed the boat. I'm channeling-- actually, I stole these slides from Josh Tenenbaum. But let me channel him for a moment and say what his big idea is, which I think is really important. 
And that is that both humans and deep nets are very good at pattern recognition-- pattern classification. This is a cat, or a dog, or a car, or a toaster. What they're not good at-- what humans are good at, but the deep nets are not, is building models to understand the world. When you look at this picture, there are all kinds of things that are crucial for really understanding, at a deep level, what's going on in here. We need to know what some people here know, but the guy on the scale does not know. Namely, even if you don't recognize that that's James Comey-- I think it is-- here's Obama with his foot on the scale. You need to know that people find it embarrassing if they weigh too much. You need to know that he can't see that Obama's doing it. You need to know that they can see it, even though he can't, and that's kind of the essence of humor. There's just a whole universe of rich structural information going on in here that is part of what it means to understand this picture. And no deep net is even close to doing that kind of thing. Bottom line of all this is-- or let me just go on more generally-- AI systems can't navigate new situations, infer what others believe, use language to communicate, write poetry and music to express how they feel, or create math to build bridges, devices, and lifesaving medicines. That's a quote from our leader, Jim DiCarlo, head of this department, published in Wired a year ago in a beautiful article on the limitations of deep nets. But more generally, the point is that, yes, AI is taking a massive leap now. We're right in the middle of it, and it's super exciting, and it's helpful to neuroscience and cognitive science. But AI has a lot to learn from us too-- a lot to learn from what's going on in here, and how this thing works that those AI systems still can't touch. All of that was my third reason for studying-- we're still in the, why are we studying the human brain?
The fourth reason to study the human brain is the one most compelling to me, and that is that it is just simply the greatest intellectual quest of all time. We could fight about cosmology. I'm not going to fight with you about anything else. I don't think there's any contest. It's the greatest intellectual quest of all time. And that's why I'm in it, and that's why I hope it'll be fun for you. That was the why. How are we going to study the human brain? Here's this thing. How are we going to figure out how it works? Kind of daunting, not totally obvious. The first thing to realize is that there are lots of levels of organization in this thing, and hence, lots of ways of studying it. We could look at molecules and their interactions. Lots of people in this building do that. We could look at properties of individual neurons. We could look at circuits of neurons interacting with each other. We could look at entire brain regions and what their functions are. We could look at networks of multiple brain regions interacting with each other. All of those things are possible. But actually, what we're going to do in the course is none of those things in particular. Instead, we're going to ask a somewhat different question. And that question is, how does the brain give rise to the mind? And to understand that question, we're going to do more at this level, and less at the upper level. To answer this question, we need to start with the mind. We need to-- if we're going to understand, how does this thing produce a mind, we need to first figure out, what is a mind? What do we know about minds? We need to start with the various mental functions that minds carry out-- things like perception, vision, hearing, aspects of cognition, like understanding language, thinking about people, thinking about things, et cetera. 
For each mental function, what we're going to do in here is start by trying to understand how it works in minds as well as we can, or what it is that we're trying to understand that minds can do. What is computed and how? And then we're going to look at its brain basis and try to figure out what we can figure out about how that mental function is implemented in a brain. The first question we'll ask for all of these domains is, is there specialized machinery to do that thing? And then we'll ask, what information is represented in the relevant parts of the brain, and when is that information represented, and how? How are we going to answer those questions? Well, there's lots and lots of methods in our field. The first set of methods-- if we want to understand minds, the first set of methods are the basic stuff of cognitive science, psychophysics. That means showing people visual stimuli, or playing them sounds, and asking them what they see or hear. Nice and low tech, but lots has been learned from those methods. You collect reaction time and accuracy, and it's amazing how much you can learn from these methods that have been around for a hundred years or more. Perceptual illusions are similarly very informative about how minds work. Now, let me say an important thing that arises here. Last year was the first time I taught this course, and I would say it went so-so. I'm aiming for it to be much better this year. And one of the ways I'm trying to do that is to be responsive to the student evals I got last year, which were not fabulous across the board. Hurt my feelings badly. But once I got over myself, I decided to just listen to them and try to fix it. And one way to fix it is to be honest with you today about what this course is going to cover. In my evals, student 50458, bless them, offered this comment. "This class was not sold in the correct way. It should not be called the Human Brain, because it was basically just a cognitive science, not a brain class. 
I expected to learn very different material." I don't know who this student is. I wish I could apologize to them. But I will say to you, sorry, student 50458-- sorry I didn't make that clear. The fundamental reason the brain is cool is that it gives rise to the mind. And that means that studying the biological properties of the brain without considering the mental functions it implements would be kind of like trying to study the physical properties of a book without considering the meaning of its text. We're going to spend a lot of time doing cognitive science in here. And if you had a different impression, sorry about that. But that's what we're doing here. How are we going to answer this? Lots of cognitive science. How are we going to look at the brain basis? Well, we're going to look at neuropsychology patients-- people like Bob who have damage to the brain and what functions get preserved and lost. We'll look at a lot of studies with functional MRI. Neurophysiology, where you can record from individual neurons in animal brains, and in rare cases, even in human brains-- under clinical situations where they need to have electrodes in their brain anyway for neurosurgery. We will look at EEG, recorded from electrodes on the scalp, and MEG, recorded from magnetic fields with SQUIDs (superconducting quantum interference devices) placed next to the scalp. We'll look at connectivity measures with a method called diffusion tractography, et cetera. Lots of methods. Which mental functions will we cover? Well, to tell you about that, I need to tell you about the huge progress that has happened in our field in the last 20 years. All of this is quite recent. Let's back up to 1990. Here is approximately what we knew about the organization of the human brain in 1990. The black ovals are the bits that are primary sensory and motor regions that have been known for a long time, even by 1990.
And the colored bits are the bits where we had some idea that face recognition might go on somewhere in the back end of the bottom of the right hemisphere because of people who had damage back there and lost their face recognition ability-- sometimes, preserving their ability to visually recognize words and scenes and objects, only losing their ability to recognize faces. The language regions we had known about for nearly 200 years, from Broca and Wernicke and others, who had studied patients with damage in those regions and noted that they had problems with language function. And similarly, many people had reported that if you have damage up here in the parietal lobes, you sometimes lose your ability to direct your attention to different places in the visual scene. That was approximately what was known in 1990. And here's what we know now. We now know, thanks largely to functional MRI, that for dozens of regions in the brain, in every one of you, we have a pretty good idea of the function of that region. This is major progress. This is a kind of rough sketch of the organization of the human mind and brain that we have now, that we didn't have 20 years ago. And that's awesome. That has made possible a lot of progress, building with other methods. What we'll study in this course is, we'll focus on those mental functions where the brain bases are best understood. And that will include things like the visual perception of color, shape, and motion, visual recognition of faces, places, bodies, and words-- and scenes. Didn't make it on the slide. Oh, yes, it did. Perceiving scenes and navigating. Understanding numbers. Yes, there's a whole lot about the brain basis of understanding numbers. Perceiving speech and perceiving music. Understanding language. Understanding other people and their minds. Those are the kinds of topics where there's been a lot of progress recently in understanding the brain basis of those mental functions. Those are the ones we'll focus on. 
And that means there's going to be a lot on perception, high-level vision and high-level audition, because that's one where a lot of progress has been made, and it's also a lot of the cortex. As I mentioned a moment ago, the whole back part of your brain does vision, construed broadly. Some people might say, well, why is she spending all of this time on vision? Well, it's a big part of what your brain does. We are very visual animals. So we'll spend a lot of time on vision. For each of these functions, we will ask, to what extent is this mental function implemented in its own specialized brain machinery? Are there multiple different brain regions that carry out that function? What does each one do? Is there a division of labor between those different regions? How does that system arise in development? Does it have homologues in other species? Are these things uniquely human, or which of them are? And also, along the way, other cool side questions will come up. What, if anything, is special about the human brain? How come we are taking over-- and largely destroying-- the planet, and other species are not? Besides destroying the planet, we're doing some other cool things, like inventing science, and engineering, and medicine, and architecture, and poetry, and literature, and all of these other-- and music-- all of these other awesome things that other species aren't doing. How come our brains are doing that and other species aren't? Where does knowledge come from? You guys know all of this stuff. How much of that stuff was wired in at birth and how much of it did you get from experience? How much can our minds and brains change over time? Can we go study a new thing and get a whole new brain region for that thing? Can we change the basic structure just by training, or after brain damage? Can we think without language? How many of you have wondered about that question? Yeah, really basic question. Anya is answering it. Anya and some others.
But Anya is doing a lot to answer that question. There are actually empirical answers to these long-standing, deep questions that everyone wonders about. That's pretty cool. Somebody back there asked a while ago about awareness. Can we think, perceive, understand without awareness? How much can go on in the basement of the brain when we don't even know what's going on down there? We'll consider all of these other cool questions. There's a bunch of things we won't cover in this course for various reasons, that could have been in here and just aren't. There's only so much time. Motor control. It's really important to know how you do things like pick up objects and plan actions. And we're just not covering that. Something had to go. Subcortical function. This is a very corticocentric course. Most of the course will deal with the cortex. That's where most of conscious thinking and reasoning and cognition happens. There's a lot of good stuff down in the basement of the brain, and it's going to get pretty short shrift. Not for any good reason-- just what it is. Decision-making. Important field, not getting much coverage in here. Importantly, circuit-level mechanisms-- explanations of cognition. If you think that we're going to understand not only what it means to understand the meaning of a sentence, but that I'm going to give you a wiring diagram of the neurons that implement that function, sorry to be the bearer of bad news, but nobody has a freaking clue how you could get a bunch of neurons to understand the meaning of a sentence. That's exciting. That means there's a field for you guys to waltz into. And probably, in your lifetimes, people will start to crack these things. But just to know what we're headed into, rarely, for almost no high-level mental functions, do we have anything like a wiring diagram-level understanding of any perceptual or cognitive function. That's not in the cards for this course, because it doesn't exist in the field. 
For that kind of thing, there are cases where you can make progress. You can understand, say, fear conditioning in a mouse. Those circuits are being like cracked wide open by people in this building, people all around the world, with spectacular precision. They know the specific classes of neurons, their connectivity. They know every damn thing about them. But it's like, how does a mouse learn that this thing is-- to be afraid of this thing? OK, that's important. But for more complex aspects of cognition in humans, we can't usually have that kind of circuit-level understanding. Lots of other things that will get short shrift. Memory, not for any good reason. I mean, there's a lot of coverage of memory in 900 and 901, and it's just somehow a blind spot for understanding-- for knowing how to talk interestingly about memory. So I'm not going to give you a boring lecture on memory. Instead, I'm not going to give you any lecture on memory until I learn how to talk about it interestingly. Reinforcement learning and reward systems. I'm going to try to pull some of that in, but it's not going to be a major focus, even though it's a really important part of cognition. Attention. There might be some at the end. How many of you have taken 900? Looks like a little over a half. How many have taken 901? Yeah, a little over half. OK, good. If you have, great. Good for you. This course is designed as a tier two course for people who have taken 900 or 901. If you haven't, you're probably OK, but you might need to do a little extra work. I've already posted online, and in the syllabus, information about, actually, a lecture I gave a year ago on some of the background stuff that is no longer taught in this course. People hated it when I taught them stuff they'd already encountered before, so I'm trying to minimize that. That's a backup for those of you who haven't taken these courses. If you're worried about this, chat with me afterwards.
I think it will be OK, just count on doing a little bit of extra work-- not much. For those of you who have taken them, there's going to be a little bit of overlap. It's simply impossible to have zero overlap. I mean, what does John Gabrieli in 900 and Mark Bear in 901 do? They survey the whole broad field, and they pick the coolest stuff out of every little bit, and they teach it to you, exactly as they should. But that means that when I come along and try to say, I'm going to do a more intensive coverage of the coolest things, there's going to be a teeny bit of overlap. But I'll try to not make it too much-- just because the coolest stuff is the coolest stuff. Also, the spin and the goals of this course are quite different from both 900 and 901. You will have to memorize a few things, but not much. My real goal in this course is to have you understand things, not memorize a sea of disjointed facts. A little more on the goals. Really, what I want you to get out of this course is to appreciate the big questions in the field and what is at stake theoretically in each. I want you to understand the methods in human cognitive neuroscience, what each one can tell you, what it can't, how different combinations of methods can work synergistically and complementarily to answer different facets of a question. I do want you to gain some actual knowledge about some of the domains of cognition where we've learned a bunch, both at the cognitive level and the brain level-- things like face recognition, navigation, language understanding, music, stuff like that. And crucially, I want you guys to be able to read current papers in the field. If you look in the syllabus, the first few papers are, like, 20 years old, but it's going to accelerate quickly and you'll be reading papers-- I'm trying to choose mostly papers published in the last year or two. I'm trying to take you straight to the cutting edge of the field. Yeah?
AUDIENCE: Are the papers going to be straight out of research labs, or are they going to be, like, the annual review [INAUDIBLE]? NANCY KANWISHER: No, straight out of research labs. You're going to read the real deal, not someone else's blurry, they just read the abstracts and put in some stuff in the review article. No, you're going to read the actual paper. That's the whole deal. Those are the goals. Good. A few things. Why no textbook? This field is moving too fast for a textbook. Plus, I have strong opinions, and I don't like any of the textbooks. Any textbook is out of date. We're going to be reading hot stuff that's hot off the press, and so that's not in the textbooks yet. And so we're skipping that, and you're going to go straight to original research articles. There will be occasional review articles where relevant, but mostly, part of the agenda of this course is to teach you to be not afraid of and able to read current articles in the field. All right. You've all been waiting for this. Details on the grading. Pretty standard. Midterm, 25% of the class-- of the grade. Final, 25%. It will be cumulative, but weighted toward the second half. There's going to be a lot of reading and writing assignments, approximately two papers to read per week. And for, usually, one of those papers per week, you will have a very short written assignment in which, usually, I ask a few simple questions and maybe one paragraph-level think question. The essence of these tasks is not the written assignment itself. The essence of the task is to understand the paper. If you understood the paper as you read it, then you should be able to answer those questions pretty straightforwardly. And let me just say that understanding a scientific paper is not trivial. When I read a scientific paper, right in my area, where I have all of the background, it takes me hours-- hours. It may be five pages. It still takes me hours. It's just how it is.
So when I assign a paper and you say, oh, it's only three pages, I could do that in 20 minutes. Oh, no, you can't. No, you can't. And that's part of what I want you to learn how to do, is how to really read and understand the scientific paper. Allocate the time it takes to really get it. That's a big part of the agenda in this task. All of the stuff-- the assignments and the submission of the assignments-- will all happen on Stellar. Your first written response to a paper is due February 12 at 6:00 PM on Stellar. But there are other readings that are assigned before that. A note about the schedule. I struggled a lot trying to both have the assignments happen when you had already learned enough in lectures to know how to do it, but have it close enough to the topic at hand so it didn't seem, like, no longer relevant. It's hard to do both of those things. So the compromise is, all of the assignments are due at 6:00 PM the night before the class in which they're assigned. If you see that it's assigned on the 13th-- if it's listed on the lecture for the 13th, check carefully. It's probably due the night of the-- I'm getting this wrong-- the 12th. The night before. And that's so that we and the TAs can look at it, figure out what you understood, what you didn't, and how to incorporate and explain whatever you didn't get in the next lecture. All right. Quizzes. I haven't done this before. New thing I'm going to try. There are going to be about eight of these. They're going to be very brief. They're going to happen at the end of class, in class. And you will do them on your computer or your iPhone using Google Forms. If anybody doesn't have a computer or an iPhone they can bring to class the days of those quizzes, let us know after class and we'll come up with a solution. And the idea of these is not to fish out an obscure fact that was in one of the reading assignments and ding you on it. I'm not interested in that. 
The goal of this is just to keep you up to date, keep you doing the readings, keep you up with the material. And if you basically are understanding what you're reading and understanding the lecture material-- maybe you glance at it briefly before-- you should do fine on the quizzes. They're just kind of reality checks for us to know what people are getting and not. First quiz is February 20, blah, blah. There is one longer written assignment that is not due on the usual schedule with all of the other things; it's due near the end. And in that one, you will actually design an experiment in a particular area. And that will be-- I don't know yet-- three to five pages, something like that. We'll give you more details on exactly how we want you to organize this. And it will be very specific-- like, state your exact hypothesis, state your exact experimental design, et cetera. And you'll get practice with those things in advance. Those are the grading and requirements. And this is the-- you have this all in the syllabus in front of you. This is the lineup of topics. But very briefly, let me try to give you the arc of the class. So this is the introduction. Next time, we're going to do just a teeny bit of neuroanatomy. There will be a teeny bit of overlap with 900 and 901 there. I'm going to whip through it in very superficial form. I'm doing that largely because in the following class, we have an amazing privilege, which is that one of the greatest neuroscientists alive today, Ann Graybiel, will be doing an actual brain dissection, right here in this class, right in front of you. It's going to be awesome. I can't wait. It's an incredible privilege. It will be a real human brain, and you guys will be-- Ann will be here with all her apparatus, and you guys will be clustered around. And if it's this many, god help us, but we'll figure out how to make it work.
I may-- let me just say, if there are listeners in here, I may have to tell listeners they can't come, because she's very sensitive about not having too many people. Stay tuned on that. I haven't quite decided yet. It depends how many people are taking the class. But it's going to be amazing. And I want to remind you of just some basics so you're not asking her, like, what is the hippocampus? You should all know that, but we'll just do bare basics. And then we'll have the dissection. That will be great. And also, another thing to say is, I mentioned that the subcortical regions are going to get short shrift in this class. That's true. But a lot of what you see in the dissection is the subcortical stuff. Cortex is great, but it kind of all looks the same. You kind of can't say, oh, that's this region. That's the other region. Well, you can, but it doesn't look any different from any other region. That's where the subcortical stuff will happen. Then I'm going to do a couple of lectures that focus on high-level vision, perceiving motion, and color, and shape, and faces, and scenes, and bodies, and stuff like that. And we will use those both to teach you that content, and also to teach you a vast array of methods in this field. We will then have a lecture on the kind of debates about the organization of visual cortex in humans. I have a particular view. I'm very fond of views that some patches of cortex are very, very functionally specific. Not everyone believes that. So I have assigned readings of people who have different views, and we will consider that. I will try to expose you to the alternate views and tell you why I'm teaching-- why I still believe mine, but why other smart people believe different things. We will then move up the system from perception, and we will spend two meetings talking about scene perception and navigation. You got a hint about what an interesting area this is from the story of Bob.
We'll consider more what we've learned from studies of patients with brain damage, from functional MRI, from physiology in animals, from cognitive science, from the whole glorious menagerie of methods to understand navigation. It's a really fascinating area. In the two lectures after that, we'll consider development. How do you wire up a brain? How much is present at birth? What is specified in the genes? What is learned? And a lot of that will focus on the navigation system and the face system, simply because that's where there's a lot known. We'll consider some other things, but those are two areas where there's super exciting work from just the last three or four years. That's what we'll focus on there. I'm then going to do a lecture on brains in blind people. How are they different? How are they the same? What does that tell us? And then you have the midterm. Then we're going to move on and consider number. How do you like instantly know that that's three fingers and that's two without having to do anything all that complicated? And if I had 25 fingers and held them up, you would immediately get a sense that it was about 25. You might not know if it was 22 or 28, but you would know it was about 25. And there are particular brain regions that compute that for you. And we will consider all of that. And there's a very rich array of information from studies of infant cognition, from animal behavior, from brain imaging, from brain damage, from single-unit physiology, and from computation, all of which inform our understanding of number. Those are my favorite lectures, where we can take one domain of cognition and inform it with all of the methods. And numbers are a really great example. Then we'll talk a little bit about-- one of my TAs said, call it neuroeconomics. That will sound good. But actually, what I'm going to try to do is sort of neuroeconomics. But it will be about pleasure, and pain, and reward, and how we think about those things.
And then that's down to April 3. Just as a side note, all of these things are things that are pretty similar between humans and-- at least primates. And some of them are shared with rodents. And most of the things after that are things that are really uniquely human. We'll be really moving away, with less available animal literature to inform the stuff we're looking at, because animals can't do these things. And so we're necessarily far from the details of individual neurons and circuits, but there's still lots of cool stuff that can be said about how you understand speech, how you appreciate music. There will be a guest lecture, just for fun, on brain-machine interface by Michael Cohen, who's working in my lab now, and who has a great lecture on this topic. Then we'll spend a couple of lectures on language-- how you understand and produce language, and what the relevant brain regions are, what we know about it from cognition, and lots of other methods-- and what the relationship is between language and thought. Then we'll think about how we think about other people. This is called theory of mind-- how I can look out at this lecture and try to evaluate from your facial expressions: are you bored, sleepy, overworked, fascinated, excited? All of this kind of stuff that all of us do moment-to-moment in any conversation, and that, yes, lecturers are doing all of the time, even if I know that you guys have too much work, and that's why you're sleepy, and I shouldn't take it personally. I'm still noticing. Anyway, then we'll go on and consider brain networks. Of course, brain function doesn't happen in just a single region, even if we spend a lot of time studying individual regions. There's considerable work trying to figure out which sets of regions work together, and how could we discover that, and what are those broader networks of brain regions? And then on May 6, you will have turned in your longer written assignment designing your own experiment.
And then on May 6, we will work together in groups to refine those experiments and really hash out the details so you actually know how to design an experiment. And then we will have this guest lecture from my postdoc, Katharina Dobs, on deep nets and what they can tell us about cognition and brains. And then we'll talk about attention and awareness. And then I'm not totally sure what we're going to do in the last class, but what I'm voting for is that the amazing TAs each give a short talk on the cool stuff they're doing. But that's under discussion. OK, that's the arc of the class. Questions? All clear? Great. Well, if I have five more minutes, maybe I'll do one other little thing. Let me try this. You asked-- I'm going to try to learn everybody's names, but I'm not doing that yet, because some of you might not show up and I will have wasted a whole piece of my brain encoding it. I'm just kidding. But anyway, I remember that you asked, are you going to read current papers? Yes, it is-- and you're right. It's daunting. But let me just say a little bit about how to read papers. This is not a stats course, and we haven't prerequisited stats. Neither is it a course on the physics of MRI. There will be parts of every MRI paper that have a lot of gobbledygook. We scanned with this scanning procedure. We used this kind of scanner and this kind of blah, blah, blah. Lots of gobbledygook. You guys don't need to worry about that. About the stats, it's kind of a judgment call. Everyone in here should have an idea of what a P level means, and I hope, a sense of what a T-test is and an ANOVA. If you don't, I should probably tell you that offline, because that's pretty basic. And what a correlation is. Beyond that, just use your intuitions about those things. And this is not a course about understanding the details of the stats in each experiment. There just isn't room to cover all that as well as the substance of the studies.
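Since the lecture assumes only a rough sense of what a t-test is, here is a minimal sketch of the idea in Python. The function name and the error-rate numbers are invented for illustration (this is Welch's two-sample t statistic, implemented from its textbook definition, not code from the course):

```python
# Minimal sketch of Welch's two-sample t statistic, the kind of test the
# lecture assumes you have seen. All numbers here are invented examples.
from statistics import mean, variance

def welch_t(a, b):
    # Difference in group means, scaled by the standard error of that
    # difference (Welch's t: no equal-variance assumption).
    se = (variance(a) / len(a) + variance(b) / len(b)) ** 0.5
    return (mean(a) - mean(b)) / se

# Hypothetical error rates in two conditions of some recognition task.
upright = [0.61, 0.58, 0.64, 0.60, 0.59, 0.62]
inverted = [0.75, 0.71, 0.78, 0.73, 0.76, 0.74]

# A large |t| means the between-condition difference is big relative to
# the within-condition variability, hence a small p-value.
print(round(welch_t(inverted, upright), 2))
```

An ANOVA generalizes this same comparison to more than two conditions, and a correlation asks instead whether two continuous measures covary; that level of intuition is all the papers in this course require.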
When you read a paper-- for example, here's a paper-- a very old paper. You come across this, and it was like, OK, here are all these words. And it goes on for 20 pages. And how do you even dig in? Well, the way to dig in is to start by saying, what question is being asked in this paper? If the paper is well written, you'll be able to find that in the abstract. Blah, blah, blah, to study the effect of face inversion on the human fusiform face area. We'll talk about that more later. But if you fish through the abstract, you should be able to find what question is being asked. And it's the first thing you should figure out about a paper. You don't necessarily read a paper start to-- beginning through the end. I think it's better to start with this list of questions in your head and look for the answers to those questions. Second question. What did they find? If the abstract is well written, you can find that in the abstract as well. Signal intensity from the fusiform face area was reduced when grayscale faces were presented upside down. Kind of boring, but there it is. That's the finding of this paper. What is the interpretation? In other words, who cares? Why-- who cares about this? If you look in here, in the abstract, FFA responds to faces per se rather than to the low-level features present in faces. We'll talk more about what that means. You guys have an assignment about that-- probably, several assignments about that kind of question. Next question you want to ask yourself is, what is the design of this experiment? Often, for this, you have to go beyond the abstract. And I should say, for even these earlier questions, sometimes you won't find them in the abstract. That just means the abstract is not well written. But that exists. To get the design-- like, what exactly did they do? Usually, you have to find out what exactly was done, and how were the data analyzed? You need to fish farther. You need to fish around other parts of the paper.
And of course, all of those questions-- I just said, what question-- I circled this part. But there are many levels to one question. You can get more on, why is that inversion question important? You look through, usually, in the introduction to the paper. Does the FFA respond to faces per se, or to a confounding visual feature which tends to be present in faces? Second, is it true that inverted faces cannot engage face-specific mechanisms? Blah, blah, blah. That gives you a little more background on what the question is. There are different levels of depth. These are all things you want to be looking for when you read a paper. What exactly was done? We measured MRI responses in the FFA to upright and inverted faces. I don't expect you to understand all of this. These are just giving you, schematically, how you proceed when you're reading a paper. More on the interpretation, or who cares? This result would show that face-specific mechanisms are engaged only or predominantly by upright faces, blah, blah, blah. You can fish through for those things. The point is to have those questions in your head when you read a paper. It's much easier and more engaging to read something if you have an agenda when you read it. Your agenda, in reading scientific papers, is to answer those questions for yourself. More stuff. What was the design and logic? Often, that's deep in the methods. You have to fish around and find it. There will be some set of conditions and designs. We'll talk more about all this kind of stuff. What exactly was done, blah, blah, more details. And this is an example of the kind of gobbledygook that you can ignore. Subjects were scanned on a 1 and 1/2 T scanner. And there are all these-- here's an example of said gobbledygook. You can ignore this, in this class. Every method will have different kinds of gobbledygook. This is MRI gobbledygook. You can ignore it. It matters a lot, but not here. What else? How are the data analyzed?
If you look in the-- sometimes, there's a data analysis section, or a results section, or a methods section, that will tell you. You can find that, figure it out. What was the finding? Here's more on the finding. Again, you just fish through for these things. The point is just, when you're reading a paper, it's not necessarily-- what I do is, I read the title, I read the abstract. And then I start answering those questions for myself. And sometimes, at that point, I'm skipping to figures. I'm skipping to methods. Any of that is fine. Don't feel like you need to understand each word, especially deep in the methods. I don't know. Was that helpful at all? We'll try it, and you guys will give me feedback, and if it works, great. And if not, we'll do more on how to read papers. All right, it's 12:25. See you on Monday.
MIT 9.13 The Human Brain, Spring 2019. Lecture 5: Cognitive Neuroscience Methods II.

[SQUEAKING] [RUSTLING] [CLICKING] NANCY KANWISHER: All right, it's 11:05. Let's get started. So the agenda for today, we're doing this whole thing on the methods in human cognitive neuroscience. And I'm illustrating those methods with the case of face perception. Not just because I'm into face perception, but it's a particularly rich domain of research where there's lots to say about it from all these different methods. And so last time, we talked a bit about applying Marr's computational theory level to face perception. We talked a teeny bit about some behavioral data and a little bit about functional MRI. What I'm going to do today is quickly zoom through a speeded-up review of those things, and then we're going to get to some of these other methods. And there's a quiz at the end. All right? OK, so methods in any field of science are just there to enable us to answer scientific questions. They're not to impress our friends with all the fancy things we know how to do or our colleagues. They're just to answer questions. And so you always have to start with the questions. And so last time, I listed a bunch of questions. Not all of them, but a bunch of questions one would really want to know about face perception if we were to understand how it works in the brain. And last time, we focused on these first three. So let me just do a super quick review. At the level of Marr's computational theory, we ask, what is the problem that's being solved and why is that important to the organism? What is the input, what is the output? How do you get from that input to that output, right? So for the case of face perception, here's a very simple version of it. Here's an example of the input. It goes in, hits the retina. The stuff that we want to understand happens in here, and you have an output.
OK, so just even thinking about it that way, we can already just see, with common sense, that one of the big challenges in solving this problem is that faces look different every time you see them. The lighting changes, the orientation of the face changes, the hair changes, the mood changes, all this stuff happens. People put on makeup, they shave off their facial hair, they do all these things to make it a big challenge to recognize faces. And yet, we manage really well. So how do we do that? Well, our field has many methods to address this question. Last time, I talked about one little example of a behavioral study-- simple, cognitive psychology study measuring behavior-- where we showed that the way people solve this problem is fundamentally different with people they know well and people they don't know well. So I showed an example that all of you presumably would have no trouble determining that those are all pictures of the same person, even though at the pixel level they're wildly different. And yet, you have a hell of a time saying which of those images are of the same people and which aren't. And so the point is that our ability to extract this invariant representation, that is to figure out abstractly who is that, is really-- well, to figure out that any of these images are the same as each other is much better for familiar than unfamiliar faces. And that means we don't have a perfectly general ability to take any face and abstract out this completely image-independent version of it. That's what invariant representation is. Yeah? AUDIENCE: For the case of the Dutch politicians, did they ever do the study on people who were super recognizers? NANCY KANWISHER: I don't know about that, but they did do it on people who are professional TSA-type people. AUDIENCE: OK. NANCY KANWISHER: Right? And I'll tell you guys about that later. But you could think about whether you think it might work better with those people or not. 
OK, everybody get this general point here? All right. So I skipped over another simple behavioral finding last time that I want to mention now. And that is an extremely low tech-- charmingly low tech, and yet, I think very powerful-- discovery about face perception. One of the most important original bits of evidence that face perception might be a different thing in the brain came from a PhD thesis in this department by a guy named Robert Yin. And he used the extremely high tech equipment of a stopwatch and paper. OK, so what did he do? He presented faces to people upright. And he said, study these 20 faces. And then he tested them later. Did you see this face? Did you see this face? Did you see this face? And then he did the exact same experiment on a different set of faces, but they were all upside down. Studied upside down and tested upside down. And what did he find? He found what's known as the face inversion effect. Namely, people do much worse at this task when the faces are upside down. Here's errors for inverted upside down, errors for upright at this task. Even though, importantly, they were studied and tested upside down or studied and tested inverted-- upright. OK, everybody got what this shows? OK, so that's cool that this face inversion-- but the further cool thing is he showed that this face inversion effect is greater for faces than for other kinds of stimuli. So he tested lots of other things, including houses and stick figures. And he showed that that cost, when you turn the stimuli upside down, is greater, that difference is greater for faces than for other classes of stimuli. So what that suggests is that face recognition may just work differently in some deep way from recognition of other classes of stimuli. And Robert Yin actually inferred in his PhD thesis-- way, way back before any imaging method-- that maybe there are special parts of the brain for face recognition. 
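The logic behind Yin's claim can be made concrete in a few lines of Python. The error rates below are hypothetical placeholders, not Yin's actual numbers; the point is only that the face-specificity claim is a difference of differences, an interaction, not a simple main effect of inversion:

```python
# The face-specificity claim in Yin's comparison is an interaction:
# the cost of inversion is larger for faces than for a control category.
# These error rates are invented for illustration; they are not his data.
errors = {
    ("faces", "upright"): 0.10, ("faces", "inverted"): 0.30,
    ("houses", "upright"): 0.12, ("houses", "inverted"): 0.17,
}

def inversion_cost(category):
    # Extra errors incurred when stimuli of this category are studied
    # and tested upside down rather than upright.
    return round(errors[(category, "inverted")] - errors[(category, "upright")], 3)

# Both categories get harder upside down; faces get *disproportionately* harder.
print(inversion_cost("faces"), inversion_cost("houses"))
```

With these made-up numbers the cost is 0.20 for faces versus 0.05 for houses; it is that gap between the two costs, not the mere existence of an inversion cost, that suggests face recognition works differently.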
And maybe face recognition is just a totally different thing, and that's why it is more affected by inversion than recognition of other kinds of things. Was there a question back there? Yeah. AUDIENCE: I was going to ask, could that just be because faces are much more complex than houses or stick figures and that-- NANCY KANWISHER: Good question. Hang on to-- AUDIENCE: --backwards. NANCY KANWISHER: Good question. That's a very good question. And many people have tried to grapple with that. And actually, about 10 years ago, the idea that there's this disproportionate effect for faces was standard textbook material, completely accepted. And now there's another round of people doubting it with other kinds of stimuli. So it's kind of ongoing. It's a very robust difference, but to say exactly what it is about face stimuli versus other kinds of things that is responsible for that difference-- you can imagine it's subtle. For the purposes of this course, I'm trying to not quite lie to you guys, but give you the most standard view without freighting you with every possible objection to every little thing. Because for pretty much every finding, there's somebody who has a beef with it, who'll tell you, that's not really true because blah-di-blah. OK? So yes, there's a little bit of debate going on about this right now. But for the purposes of this course, it's pretty damn rock solid, at least as an empirical result. All right. So there's in fact lots of versions of the face inversion effect. One you may have seen before but which is very amusing. If you look at faces like this that are upside down, they look sort of normal. But then if you rotate them, you realize there's something deeply weird going on. So the point is, you're much more sensitive to those grotesquely distorted faces when you see them right side up than when you see them upside down. So that's another version of the face inversion effect, and there are many, many incarnations of this effect.
You'll see another one later in the lecture. So where did we get last time with these questions? We got that one of the major, if not the major central challenge in face recognition at a computational level, is the fact that we deal with huge image variation each time we see a face. And yet, somehow we're able to grapple with it. So to understand how face recognition works will be to understand, what is the code, ultimately-- nobody knows right now-- but what is the code running in our heads that enables us to do that? What is our mental representation of a face that enables us to deal with this problem? By looking at behavioral data, we got some evidence from the Dutch politician study that whatever that representation is that we extract from faces, it's not independent of the particular image. It's not that we have some platonic ideal of the face that we can extract from any face that lands on our retina, a platonic ideal of that person's face, right? So whatever we're doing, it's not completely invariant, because we can't do that so well with unfamiliar faces. Also, as I just showed you-- related, but not exactly the same point-- our mental representations of faces are very sensitive to the orientation of the face, more than our mental representations of other classes of stimuli. So those are just very simple insights about whatever our representations of faces are in our heads, just from simple behavioral data. OK, so let me just review some of the strengths and weaknesses of simple behavioral methods. Strengths are, they're good for characterizing the internal representation, right? Not with huge computational precision-- more like with gisty kinds of ideas. The representations are not very invariant, they depend on the orientation, right? That's not very precise, but it's a whole lot better than nothing. That's what I mean by at least qualitatively. They're good for dissociating mental phenomena.
So you've already seen that the inversion effect happens more for faces than other things. So that already starts to tell us, OK, maybe whatever the code in our head is that we use for face recognition, maybe it's pretty different than the code that we use in our head to recognize objects. OK, it's also cheap. It's really cheap. Much cheaper than all the other methods. OK, weaknesses-- behavioral methods alone don't have any relationship to the brain, at least without doing extra work. And it's not that they're useless until you link them to the brain, it's just that the brain is a whole source of other data. And it's nice to link them, because then you can connect with all those other data. Also, behavioral data are pretty sparse. For the most part, you have accuracy and reaction time, and that's it. And that's just not a whole lot of data to work with. You have to actually be much smarter to be a behavioral cognitive psychologist than you have to be as a cognitive neuroscientist, where you have much richer data to reason from. Cognitive psychologists really have very, very clever designs because they're taking this extremely limited data and trying to pull out interesting insights about mental function. Another way of looking at that is, here's an eyeball and a bunch of processing going over stages and a response, right? With behavioral data, all you have is that response. But presumably, for most of the mental processes that go on in our heads, there are many different stages of processing where different things are going on. Computations tend to have multiple stages and unfold over time. And all we have is the output. So really, what we want to be able to do is characterize the whole sequence of processes. And it's not that you can't get insights about some of those intermediates from behavioral data, it's just much more challenging. So if we had a way to look at those things independently, wouldn't that be awesome?
OK, so there's lots of ways to do that. And a particularly good one is functional MRI. So as I mentioned before-- I mentioned this very briefly-- this very early experiment that I did way back asking whether there is a region of the brain that's selectively involved in processing faces. And I'm going to put a slightly different spin on it from what I put before. It's the same experiment, same data, but I want to emphasize more the logic of the experimental design, because you guys will be designing an experiment on a different topic due Monday night, which we're going to discuss in class on Monday. So we start with a hypothesis that there's a region of the brain that's selectively responsive to faces. That's the hypothesis. The way we test it is to pop people in a scanner and show them faces and objects. The data that I showed you before is that this little patch of the brain-- remember, this is a horizontal slice, back of the head, left and right are flipped. So that little region in me is right about in there. Everybody oriented? OK. That region responds much more to faces than objects. Is that clear to everybody what that is? OK. So yes, you see that in most subjects. So yes, there's a bit that responds more to faces than objects. But now, let's consider the hypothesis that that region is really selective to faces per se. And the way you evaluate how strongly these data support that hypothesis-- they're certainly consistent with it, but do they nail that hypothesis fully?-- is to consider, are there any other alternative accounts we can think of that are consistent with these data and different from that hypothesis? Is that clear? It's really important. The whole kernel of scientific thinking and evaluating evidence is asking yourself that question. Is there any other way we could get those data where that hypothesis wasn't true? And if so, you've got to grapple with it.
So what you do next is you think up alternative hypotheses to the one you started with, that is, different accounts of the same data. And so in our case, you guys suggested a whole bunch, I suggested a bunch. And then the next thing I showed you is that we can test those alternative hypotheses, at least these ones here. What we did-- I didn't really emphasize this before-- was rerun that experiment in a new bunch of subjects, each subject individually. We found in each subject the little bit that does this. We write down exactly where that is in that person's brain. Now that we found that region-- that's called a localizer run, because we're finding that region in each subject individually-- now we can ask it new questions. And so the new question we asked it last time was to present faces and hands. And we found, oh, that region right there responds like this. So the key idea here is that we can identify that region in each subject individually with a functional scan. The reason that's important-- which I'll carry on about in more detail later-- is that the exact location of that region varies from one subject to the next. So if we just grab the whole fusiform gyrus or the whole lateral side of the fusiform gyrus in each subject, we'll get lots of stuff that is that region and lots of cortical neighbors that are something else. And if we took the exact location of that region in my brain and registered it to any of your brains and said, OK, let's take the part of your brain that registers spatially as well as we can with mine, we're not going to exactly get the right bit. So to study that thing, we've got to first find it functionally. And then we can ask it new questions. Does that make sense? OK, if anybody's unclear about that, I actually have online talks that go through the whole logic of this in painful detail. And I'm happy to answer other questions about it later.
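The two-step localizer logic, find the region in each subject with one run, then interrogate it with new conditions in independent data, can be sketched in a few lines. This is a toy simulation with made-up numbers and an arbitrary threshold, not real fMRI analysis code:

```python
import numpy as np

rng = np.random.default_rng(0)
n_vox = 1000  # pretend voxels along the fusiform gyrus

# Simulate per-voxel response magnitudes from two INDEPENDENT runs.
# Voxels 0-49 are built to be face-selective; everything else is noise.
def simulate_run():
    resp = {cond: rng.normal(1.0, 0.5, n_vox)
            for cond in ("faces", "objects", "hands")}
    resp["faces"][:50] += 2.0  # the face-selective patch
    return resp

localizer_run = simulate_run()
test_run = simulate_run()

# Step 1 (localizer): find this subject's face-selective voxels
# using ONLY the localizer run (faces > objects by some margin).
roi = (localizer_run["faces"] - localizer_run["objects"]) > 1.0

# Step 2: ask that region new questions with the held-out run.
# Using independent data for the new questions avoids circular
# ("double-dipping") analysis.
roi_means = {cond: test_run[cond][roi].mean()
             for cond in ("faces", "objects", "hands")}
```

Because the ROI is defined per subject from its own functional data, it tracks the region even though its anatomical location varies from one person to the next.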
OK, so I put the word conditions in red because somebody asked one of the TAs what a condition was. And that's not stupid, I should have made that clear. This is just experimental design gobbledygook. OK, what is the definition of condition? In an experimental design, you have things that you are manipulating and measuring. So in this case, we're manipulating the stimulus. And we're measuring the magnitude of response in the fusiform face area or in the brain. So what we're manipulating, in this case, is the stimulus condition. So that would be one condition, that's another condition, that's another condition. Does that make sense? OK, so for your experimental design assignment for Monday night, you will be designing one or more experiments. And you will be describing exactly what conditions you are going to test. Everybody clear on that? OK. All right, so these data enable us to rule out those hypotheses. And now you could ask, OK, once you get more data like this, have you completely nailed that hypothesis? Is there just no way that hypothesis could be wrong now, given these data and those data? And I'll let you percolate on that. There are ways it could be wrong, but you have to work harder to come up with them. OK, so skipping ahead, just to give you the gist. This field has been going on for a long time. And there are now many, many studies-- 100, maybe even, I don't know, God, maybe even thousands, I don't know-- studies of this region. And so this is sort of a summary statement from a long time ago. In my lab, we've tested the response of this region to lots of different kinds of stimuli, with that same method: localize it in each subject, measure its response when people look at that kind of stimulus. And so what we know now is that this region is found in roughly the same location in pretty much every normal subject. It responds more to faces than to any other kind of stimuli anyone has ever tested.
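In this vocabulary, an experiment can be written down as what you manipulate (the conditions), what you measure, and what the hypothesis predicts about the measurements across conditions. A hypothetical sketch of that structure for the design assignment; the format here is invented, not required:

```python
# A minimal description of the face-area experiment in "conditions" terms.
# The structure and field names are invented for illustration.
design = {
    "hypothesis": "some region responds selectively to faces",
    "manipulated": "stimulus condition",
    "conditions": ["faces", "objects", "hands"],
    "measured": "response magnitude in the functionally localized region",
    # The prediction is a comparison BETWEEN conditions:
    "prediction": "response(faces) > response(objects) and "
                  "response(faces) > response(hands)",
}

# Each condition is one level of the manipulated variable.
n_conditions = len(design["conditions"])
```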
Let me just give you one example here. If you haven't seen this stimulus before, raise your hand if you can tell what it is. Raise your hand if you can tell what that is. OK, some of you didn't quite get it yet. If you don't see it, don't worry. There's nothing wrong with you. It's a little subtle. It's a face in profile, eyes, nose, mouth. Everyone got it? OK, so here's the thing. That's the same stimulus, it's just upside down. Another version of the face inversion effect. In this case, you can't even make yourself see the face when it's upside down. If you think you see the upside down version of the face, you probably have the wrong bits. The thing you think is a nose probably isn't, et cetera. OK, so this is an extreme version of the face inversion effect. And it's a gift to an experimental psychologist. Why is that such a gift? Because it's the same damn stimulus. But in one case you see a face, in another case you don't. All we did was tip it upside down. And the response of the fusiform face area is much stronger to the upright version when you see the face than to the inverted version when you don't. So that enables us to stifle a whole line of attack from all of these hard core vision people who early on said, Kanwisher, your face area isn't really selective for faces. It's selective for these spatial frequencies or those, that kind of contrast, or this kind of shading information. It's like, no, same stimulus. It's just upside down. It makes all the difference. It's really whether you see a face or not. Yeah? AUDIENCE: When you were measuring the response to that example, did you have it so that at first people didn't recognize it, and then you told them? NANCY KANWISHER: We did that later. Not in this experiment, but we did that later. AUDIENCE: And then looked to see what changed [INAUDIBLE]? NANCY KANWISHER: OK, so it's a great question. And there's a lot you could do with that.
And actually, I think other people have published studies like that since. I can't quite remember who all has done it. But what we did was choose stimuli so that, in the context of the whole experiment, most people could see the face in most of the upright stimuli and most people could not see the face in most of the inverted stimuli. It wasn't perfect at all. They didn't see faces in all of the upright ones and they didn't fail to see them in all of the inverted ones. And that's probably why this difference in response is not 2 to 1, but it's close. But you could do lots of other experiments like that, and you should think about what kinds of designs would be good ones to do and what exactly they would enable you to test. All right. So OK, I'm, as usual, taking too long to do things, so I'm just going to throw out some questions for you to percolate on, and we will come back to them later in the course. Do these data-- the fact that you can see this so robustly in all subjects and that all this evidence suggests it's really very selective for faces-- does that tell us that this region is innate? It's in the same place, more or less, in pretty much everyone. Does that mean it's innate? Think about it, OK? It's not immediately obvious. Another question, does the fact that this thing responds so selectively to faces in pretty much everyone mean that it's necessary for face recognition? What do you guys think about that? In the sense of, does that necessarily mean that if you lost that thing, you wouldn't be able to recognize faces? Isabelle. Is that Isabelle? AUDIENCE: Yes. Well, I would think to really test that hypothesis, you'd have to find someone that [INAUDIBLE] in that specific area. NANCY KANWISHER: Exactly. Exactly. Exactly, and we'll talk more about that in a moment. The critical thing is that it's fabulous and powerful and cool to be able to find this thing in everybody, measure its response. It's taken us very far.
But just the fact that people have that thing doesn't tell us that you need it for face recognition. It just tells you it turns on when you recognize faces. This is really important. We'll keep coming around to this. Does this tell us how face recognition actually works in the human brain? No. I mean, it's important, but it's barely step zero. Unfortunately, the field is kind of still at step zero for most things. Step zero's better than-- I guess, I don't know, maybe I should call it step one. Anyway, it's something, but it doesn't tell us how it works. OK. All right, so advantages and disadvantages of functional MRI. Advantages, it is, as I mentioned last time, the best spatial resolution available for studies on normal subjects without opening their heads. That's what it means to say noninvasive. Disadvantages, as I just said-- just because we see a response there doesn't mean that that region is causally involved in perception or cognition or experience. We don't know exactly what is going on at a neural level underlying that BOLD response, that blood flow change. It could be any metabolic change, not necessarily neuronal spiking. So it's a little bit-- it's very indirect and a little imprecise. Spatial resolution is much better than anything else in humans, but it's appallingly bad compared to anything that people who work on animals can do, where they routinely record from individual neurons or even dendrites on a neuron. We are summing over hundreds of thousands of neurons in each pixel or voxel that we measure with functional MRI. It's very expensive. It's a little cheaper than that here, but in most places it's more than $600 an hour. That is a lot. There are other-- there are parts of the brain where it's really hard to get any signal for various physics-y reasons. And it makes a loud noise, which is not always a problem, but it's a problem for some things like scanning infants or like doing auditory experiments.
The temporal resolution is not even close to the time scale on which vision happens. So vision is really fast and functional MRI is really slow. Right? It's slow-- why is it slow? Yeah. AUDIENCE: Blood levels take time to change. NANCY KANWISHER: Yeah. It just takes a long time for blood flow to change after the increase in neural activity. All right. OK, so back to our questions that we're asking about face perception. Where do we get with functional MRI? Well, actually from both behavior and functional MRI, it kind of looks like we have a distinct system for recognizing faces, separate from the one for recognizing everything else. I don't think we've totally nailed it. Yes. AUDIENCE: So quick question regarding the fMRI. So the resolution is on the order of a couple of seconds? [INAUDIBLE]? NANCY KANWISHER: Yeah, some people would say you could get it down to a couple hundred milliseconds, but that's debated. You have to go to great lengths to do that. Normal functional MRI, a couple of seconds at best. Yeah. All right. So let's consider this next question. How fast does face recognition happen? Now, that may seem like a completely arbitrary question to ask, but it's not. Remember, we're trying to understand the computations that are running in your head when you recognize faces. And you might imagine some computations that are iterative-- that involve multiple repeated testing of hypotheses, generative models, whatever-- things that involve lots of iterated feedback, versus things where you just have a feedforward sweep up the visual system. And so there might be very different time scales for those different kinds of mental processes. So we just went through this. Functional MRI is not going to answer this question. It's just not. It's a bummer, but that's life. We're adults, we're going to just move on and use a different method. OK, so there's a bunch of different methods. One has kind of been around forever. You glue electrodes on the head, right?
Sometimes you push the hair apart, or try to find bald people, and glue electrodes right on there. And you can use, in the old days, about 10 electrodes, or, in more modern devices, these nets with a few hundred electrodes that you settle onto the head. And so then you just measure directly electrical potentials right on the scalp. So what's cool about that is it's totally non-invasive. And it gives you a beautiful online temporal measure of underlying neural activity. What's not so cool about it is that electrical potentials blur all over the scalp, and the spatial resolution is really awful. So the analogy has been made that it would be like sticking a microphone on the inside of the top of a football stadium and collecting audio there. You would know when a touchdown was scored. There's a lot of noise all over. It's like, OK, there's an event, we detected that event, right? You might be able to tell a touchdown from something else. I don't know about football so I can't tell you what else. Anyway, something else, some other event that could happen. OK, so that will be useful for some things, but kind of crude. But you'd have a hell of a time telling anything else, like what one person is saying to another person in the bleachers. So that's the old analogy. This is changing slightly, and we'll get to that later. But first, I want to briefly mention one of the assigned readings that I just hoped you guys could figure out on your own. But just in case you were confused about it, the point I wanted you to get from the Thorpe reading is he's asking, how quickly can we tell if an image contains an animal or not? It's a kind of way to say, how fast is object recognition? So what does he do? He has people look at a bunch of images, and they press this button if it has an animal and this button if it doesn't. Really simple task. So first question is, why not just use those reaction times?
We can measure how long it takes for people to press a button after the image comes on. Why not just use that? Does that tell us how fast object recognition occurs? Yeah, Jimmy. AUDIENCE: It doesn't, because if you perceive that and then it also activates the motor neurons and it takes time to respond. NANCY KANWISHER: Yeah, you have to take all that time to figure out, OK, I see the animal. OK, which button is that? And then which finger do I push? And then you've got to send a signal all the way down here, conduction velocity all the way down to your finger-- that takes a long time. And so it includes all that motor stuff in with the perceptual stuff. We could make some guesses about how long that motor stuff takes, but it's still not very precise. So the point of the Thorpe paper is they're basically trying to collect a reaction time out of the neurons in the head, right? What they're essentially collecting in this case is more of the motor response, because they're collecting responses over frontal lobes, right? And we haven't talked about this much. But all of the visual stuff we've been talking about all happens in the back of the head. More motor planning stuff mostly happens in the front of the head. And so they're collecting responses out of here, averaging over a bunch of frontal responses. And they see the average response when there's an animal-- this is just the potential averaged over those frontal electrodes-- is like this. And when there's no animal it's like that. And so what does that tell us about how fast people can distinguish whether an image has an animal or not? Yes? Yeah. AUDIENCE: It's less than that number. NANCY KANWISHER: Less than? AUDIENCE: 150, 160. NANCY KANWISHER: OK, why less than 150? AUDIENCE: I've read the paper so it's kind of cheating, so. NANCY KANWISHER: That's OK. That's good. That's fine. Go ahead.
AUDIENCE: It gives you around-- the 150 milliseconds is giving you a [INAUDIBLE] saying some process has been registered and now you're trying to do something else in the case of non-animals. NANCY KANWISHER: Right. AUDIENCE: So the deviation starts telling you that, OK, two different actions have started taking place. NANCY KANWISHER: Yep. AUDIENCE: So by that time, the image ought to have been sort of fully processed. So that should be something less than that number. NANCY KANWISHER: Yeah. Yeah, did everybody get that? It's actually quite subtle. So the key thing is, these curves diverge right there at 150. So that tells you that by 150 milliseconds, something in your brain is happening differently if there's an animal and not an animal. That's the key question. But what is that something? It may be your motor preparation of the response. In that case, the actual visual part happened before, because you wouldn't know which button to press if you hadn't already recognized it. So it's an upper bound for when that process happened, because maybe it happened before and we're looking at a later stage, OK? Does that make sense? But also, it's an upper bound for the beginning of that process. Because the fact that those electrode responses have diverged doesn't mean you've finished processing whether it's an animal or not. So it's kind of a subtle business reasoning from this. OK, so that's all that. So that's a case with detecting animals. What about faces, to get back to our theme for today? Yes, you can learn about the speed of face detection, at least with ERPs. And so here's the first paper that did that, back in 1996. They had electrodes-- where are these? Just right around here and here. I actually have those electrode locations tattooed on my scalp, color-coded anyway. Yes? AUDIENCE: Is ERP just the same as an EEG, just in a specific plane? NANCY KANWISHER: Yes, exactly. It's the same as an EEG except what you do is you time lock the data collection to stimulus onset.
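The upper-bound reasoning can be made concrete: given two averaged waveforms, find the first time they separate by more than some criterion. A toy sketch with synthetic waveforms; the shapes and the threshold are invented, and real analyses use statistics across trials, not a fixed cutoff:

```python
import numpy as np

t = np.arange(0, 400)  # time in ms after stimulus onset (1 sample per ms)

# Synthetic averaged potentials: identical until 150 ms, then they separate.
# These waveforms are invented for illustration, not Thorpe's data.
animal = np.where(t < 150, 0.0, 0.02 * (t - 150))
no_animal = np.zeros(t.shape)

threshold = 0.05  # arbitrary divergence criterion
diverged = np.abs(animal - no_animal) > threshold
first_divergence = int(t[diverged][0])  # in ms

# first_divergence is an UPPER bound on when the brain began to
# distinguish the categories: the underlying difference had to start
# before it crossed the criterion.
```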
So it actually stands for Event-Related Potential. And the reason it's event-related is you collect all those trials and you time lock to stimulus onset, and then you signal average. I had a slide on that but I took it out. It was too detailed. But that's exactly the idea, yeah. So here, stimulus onset is right around here. This is time going this way. And what you see-- it's hard to see here, but the faces are right there. And at 170 milliseconds after stimulus onset, there's a bigger bump for faces at an electrode approximately here. And even more so-- actually, even more so over the right hemisphere, right there. Compared to cars and scrambled faces and stuff like that. Yeah? AUDIENCE: What is ERP exactly measuring? Is it just activity? NANCY KANWISHER: Yeah. So again, it's electrodes glued on your scalp or just stuck there with some kind of icky gel. And so they're just measuring potentials. And so the idea is that's neural activity somewhere underneath those electrodes, but maybe anywhere within inches-- it probably averages over much of the whole lobe underneath. So it's very spatially blurry, but it's giving you a summed idea of activity under that electrode. Make sense? Electrical activity, because it's the direct electrical consequence of neural activity, is very precisely time locked, unlike functional MRI, which is going by way of blood flow. OK, so that tells us that we have a face-specific response at 170 milliseconds. And that's sort of more evidence that there might be something special in the brain for face recognition. That's useful. It tells us that faces are discriminated from non-faces, or they've begun to be discriminated from non-faces, by 170 milliseconds after the stimulus comes on. Make sense? OK, now do we know whether the signal's coming from the fusiform face area? No, we have no idea. It's probably somewhere in the back of the head, because you get it better with electrodes back here than electrodes up here.
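The time-locking and signal-averaging step can be written out directly: cut the continuous recording into epochs aligned to stimulus onsets, then average across trials, so stimulus-locked activity survives and random noise cancels. A toy simulation; the bump size, noise level, and latency are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

# One channel of simulated "EEG", sampled at 1000 Hz (1 sample = 1 ms).
eeg = rng.normal(0.0, 2.0, 60_000)  # mostly noise

# Bury a small evoked bump, peaking 170 ms after each stimulus onset.
onsets = np.arange(500, 55_000, 1000)
window = 300  # epoch length in ms
bump = np.exp(-0.5 * ((np.arange(window) - 170) / 15.0) ** 2)
for onset in onsets:
    eeg[onset:onset + window] += 2.0 * bump

# The ERP: cut epochs time-locked to stimulus onset, then average.
# Stimulus-locked activity survives the average; random noise cancels.
epochs = np.stack([eeg[o:o + window] for o in onsets])
erp = epochs.mean(axis=0)
peak_latency_ms = int(np.argmax(erp))  # lands near 170 ms
```

Averaging over N trials shrinks the noise by roughly the square root of N, which is why the bump is invisible in any single trial but obvious in the ERP.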
But that's about it. That's all you can tell. So can we do a little bit better localizing the source of that signal? Well, maybe a hair better, using a very similar method called magnetoencephalography. So this is a picture that Chris Brewer took of Leyla Isik, a postdoc in my lab, and me and the MEG system. This is on the other side of the building. So MEG is a lot like EEG and ERPs, except that it detects magnetic fields, not electric fields. And it does this by having these several hundred devices that are placed right next to your head in this big hairdryer thing. There's 300 devices in there that measure teeny tiny magnetic field changes that happen with neural activity. And the crux of the idea is, this is a cross-section through the brain. So remember in Graybiel's dissection, this is cortex here and this is underlying-- what is this stuff underneath it? Sorry? AUDIENCE: White matter. NANCY KANWISHER: White matter, yeah. Well, those are all the fibers. OK, so the activity that underlies perception and cognition mostly happens in the gray matter, where the cell bodies are. And so a lot of that activity goes in a direction perpendicular to the cortical surface, with these cells that cross the cortical surface like that. So if you remember 8.02-- if you have activity that's going through the cortex like this, right-hand rule, the magnetic field here is going to be a consequence of that electrical activity in this direction. It's going to mostly stay within the cortex. Everybody see how that's true? That's not so great, because our detectors are out there, outside the cortex. However, consider the activity that's in the sulcus in here, in this fold of the brain. Electrical activity in this direction, right-hand rule, will stick outside the brain. And we can detect it with our magnetic sensors. Does that make sense?
So you can sort of see most cortical activity better if it's in a sulcus, or at least in a part of the cortical surface that's perpendicular to the scalp where the detectors are, just because of the orientation and the right-hand rule. OK, so it primarily sees activity in the folds, or sulci, not in the outer bumps, the gyri. The field strengths arising from neural activity are minuscule. So the fields we measure are 10 to the minus 13th tesla, roughly a billion times weaker than the Earth's magnetic field. So you can imagine that if you set up an MEG system you need a lot of shielding. We had a whole rigmarole when the MEG system was set up in this building, because it's right near the subway and the train. And so there are many, many layers of copper shielding to protect it. So we can detect these teeny tiny magnetic fields from the brain's activity, separated from the noise of the outside world, which is much greater in magnitude. OK, so-- all right. So actually, MEG was invented here at MIT by this guy, David Cohen. And this is the first MEG device ever built, very cool, way back in 1968. And what can it tell us about face perception? Well, a lot. I'll give you just one rudimentary example. That 170-millisecond response that you can detect with scalp electrodes, you can also detect with magnetic sensors on the head-- there it's called the M170. So here's some of our data from a long time ago. This is the strength of the magnetic field at sites right about here. And you can see a face-selective response, also at 170 milliseconds, just like you can with scalp electrodes. So that tells us that at least you've started to detect faces by 170 milliseconds. That's pretty fast. And again, it's more evidence that there's specialized machinery. These data don't yet go beyond the EEG data, the ERP data from electrical potentials. But they might, in principle, and there's lots of ongoing work trying to do that. OK, overview, advantages of these methods, both EEG and MEG.
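The shielding problem is clear from a back-of-the-envelope comparison of field strengths. Taking the Earth's field as roughly 5e-5 tesla, a standard textbook figure and not a number from the lecture:

```python
earth_field = 5e-5   # tesla; typical geomagnetic field strength (textbook value)
brain_field = 1e-13  # tesla; the MEG signal strength quoted in the lecture

# The ambient field the sensors sit in, compared to the brain signal.
ratio = earth_field / brain_field
```

The ambient field is hundreds of millions of times stronger than the signal, which is why the layered shielded room matters so much.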
They're non-invasive-- that means you don't need to open the head. A very good thing, especially if you're the subject. They have very good temporal resolution. And if we want to see computations unfolding over time in the brain, this is a good way. I just said why we'd care about that. OK, so far-- well, never mind, I'm going to skip this point. Not that important. We will get back and do more sophisticated things with EEG and MEG in subsequent lectures. Disadvantages-- spatial resolution is terrible. And this is another kind of ill-posed problem. So just as the brain is facing lots of ill-posed problems in perception and cognition, we scientists are facing ill-posed problems when we collect electrical or magnetic activity at the scalp and try to infer the exact location in the brain where it's coming from. It's a similar problem to the problem of invariant object recognition. There are many possible configurations of sources in the brain that could give rise to the same set of electrical and magnetic fields outside the scalp. And that means it's ill-posed. We don't have a way to get a unique solution. So all that to say, we can't figure out the exact sources. We can make some guesses, but it's not very good. So what do we do? Just give up? No, we use another method. So here's an amazing method. This is the one method in humans that gives us high resolution in both space and time. And that's when we have the very rare opportunity to record directly from inside the human brain. This happens only in the context of neurosurgery. So neurosurgical patients-- like this guy here, who you'll meet in a little bit-- this guy had intractable epilepsy. And most people with epilepsy are treated well by drugs that suppress seizures. But some people are just not responsive to drugs. And if the seizures are bad enough, they can be totally life disrupting. If they happen dozens of times a day, you just can't live a normal life.
And under those rather extreme circumstances, sometimes the best option is neurosurgery. That is, trying to find the source of those seizures and trying to remove it surgically. OK, so you hope you never have to go through this or anyone you care about has to go through it. It's no picnic. But actually, this surgical treatment is often very effective. So when neurosurgeons decide to do this, they have to remove a whole piece of skull bone to get access to the brain. They have to go through what structure-- the one that Ann Graybiel showed you in her dissection the other day? What do you have to go through after you take off the skull patch? Yes. AUDIENCE: Dura mater. NANCY KANWISHER: Dura mater, exactly. That nice big piece of white, leathery stuff that was sitting over the surface of the brain. So you have to take off a piece of skull, then you need to cut through and push apart the dura. And then what they sometimes do is stick electrodes straight on the surface of the brain. And they do that for two reasons. One, if they have enough of them sampled far enough apart, they can kind of triangulate and figure out where is the source of the seizure. So the patient hangs out in the hospital for a week or so with these electrodes in their head, waiting to have seizures. And then when they have a seizure, the clinicians can figure out where the source is, so they know what bit to cut out. The other reason to do this is to map functions. Because once the surgeons decide they have to go in and cut, they want to try not to cut out any of the most important parts. I don't know what it means to have unimportant parts of the brain, but they try to avoid language regions and stuff like that, because then patients really notice if they lose those things, or motor control regions. OK, so they map out functions where they might be planning their route. OK, make sense?
Now, some of these patients are very kind and generous to the world and say, yes, you scientists can measure responses in my brain while I look at your damn stimuli. And so whenever we can, we ask them please, please, please, can we show you some pictures or play you some tones or have you read some sentences while we record from your brain. And some of those patients very kindly let us do that. And that gives us the most amazing data you can get from human brains. So for example, I had a rare opportunity to do this a few years ago with this lovely guy who was undergoing neurosurgery in Japan. And while he had electrodes in his brain, a colleague of mine was there and emailed me and said, look where these electrodes are-- right near regions I care about-- do you want to send us some stimuli and we'll record responses from those electrodes? And I said, damn straight I want to send you some stimuli. So my students and I stayed up for a couple of days and made some stimuli and shot them to Japan and got some responses from those very electrodes. And here they are. So these are two parallel strips of electrodes right along the fusiform gyrus, right where the fusiform face area should be in most people. And here are the responses of each of those electrodes. 174 is here, 173 is there, and so forth. And what you see is this batch of electrodes right here-- this is the response when the patient was looking at faces. And these are the responses when he looked at a whole bunch of different kinds of stimuli. Objects, and this guy is Japanese, so we showed him Kana and Kanji and digit strings and other kinds of stuff. Very low response to those other things. This is an extremely selective response. It's much more selective than you see with functional MRI because we were recording directly from the surface of the brain. Further, we have time information.
This axis here is time, and you can see that that response-- well, you can't see the axis, but that response starts up at around 130 milliseconds and peaks up there at around 170. Everybody clear what we're seeing here and why this is so vastly better than either functional MRI or MEG or ERPs or anything else? Make sense? OK, so these are very, very precious data. OK, nonetheless, the electrodes in this case are about 2 millimeters across, each electrode. And that is about the size of a functional MRI pixel or voxel, a little bit smaller. It also has less blurring, because functional MRI blurs spatially, since it's looking at blood flow. So this is a more precise spatial measurement than functional MRI, but it is still averaging over probably tens of thousands of neurons, down from hundreds of thousands of neurons with functional MRI. So can we ever get responses from individual neurons in the human brain? Yes, occasionally. In fact, a paper came out on the bioRxiv a couple of months ago. I was on this guy's PhD thesis defense. And this is a guy who works with a neurosurgeon on Long Island. And this neurosurgeon specializes in epilepsy neurosurgery. And he's very interested in not damaging people's ability to recognize faces. And so he sticks in electrodes to map out neural activity and to discover seizure foci. Before the neurosurgery, he puts electrodes in parts of the brain near the fusiform face area. So this is a slice like this through the brain. I showed you horizontal slices before-- so left and right are flipped-- and that region is right in there. Everybody oriented with this picture here? So this is an MRI image of this person. He was scanned with functional MRI before the electrodes were put in. And that shows you their fusiform face area right there.
So now, the neurosurgeons put in electrodes for clinical reasons, but the electrodes this surgeon uses have these little tiny micro wires that come out of the tip of the electrode that enable him to record from individual neurons. And so these guys, for the first time, have recorded from individual neurons in the fusiform face area in humans. And here's an example of one of these neurons. So here are the different stimuli here. A bunch of different face stimuli, body stimuli, houses, patterns, and tools. And this shows you time across here. Each one of those dots is-- this is all the response of a single neuron that's been identified in a human brain. Each dot is an action potential, a spike, from that neuron. So you can see them happening over time here to all the faces. And this is the average amount of activity to all of the faces, and the average amount of activity to all the other stimuli. Make sense? So that's pretty breathtaking to me, because I've been using these very indirect methods for a long time, inferring that they must result from the average across a lot of neurons doing that, but it's pretty awesome to actually see individual neurons doing that. Yeah? OK. Here's the time course of responses, just averaging over this raster over time, showing you a similar time course to what I've shown before. And in this guy's thesis, he found three other face-selective neurons in the FFA, but the electrodes are so rarely in the right location that they only have a few in this whole thesis, and there they are. Yeah? AUDIENCE: Even if we could measure individual neurons, we don't really know which neuron it is, right? If I wanted to go back and find the same neuron again, that's pretty much impossible. NANCY KANWISHER: Forget it. Yep. Yep. So people like me who almost never get to see responses from individual neurons in human brains have kind of neuron envy.
It's like everyone else in this building has-- they can measure stuff from dendrites or ion channels or individual neurons. They can do all this amazing stuff. But actually, there are a lot of limitations in those methods too. And you just put your finger on one of them. So they're like, OK, they found those neurons-- there are four neurons. We can't go back and find those neurons again. That's that, right? And they're probably subtly different in different brains, right? So it's cool and powerful, but it still has many limitations. OK, does this tell us that these neurons are involved in discriminating one face from another or just detecting faces? Can we tell from these data? Are they just saying, here's a face, or are they saying, that's Joe? AUDIENCE: Did they have different conditions for different people? NANCY KANWISHER: These are different faces here. What do you think? What are these neurons doing? Yeah? AUDIENCE: They're just recognizing faces [INAUDIBLE].. NANCY KANWISHER: You mean just detecting? No, just say more. What do you think they're doing? AUDIENCE: They're just selecting for faces. There's no evidence to show that they distinguished different faces. NANCY KANWISHER: Well, how about this? These are different faces here. These are different faces here. AUDIENCE: But one could ask, if it does involve them sort of acknowledging which faces, did they have to put a name to the face? NANCY KANWISHER: Nope, they're just sitting there looking at stuff. So bottom line is, we don't know from this. It could be just responding and saying, essentially, there's a face. But the fact that there are different responses to different faces suggests that maybe there's some information in there. If you ran some machine learning code on this, you could tell, a little bit, which face was being presented. Because those neurons are responding differently to different faces. Yeah?
AUDIENCE: Is it really like if they just showed the same face repeatedly, wouldn't it just be like [INAUDIBLE]? NANCY KANWISHER: OK, very good question. Very good question. That's why I said suggest, right? You're absolutely right. That could be just noise. It could be that if you presented the same face every time you'd get that same distribution. You're exactly right. And so we will talk-- not next time, I think Wednesday next week. But anyways, very soon we'll talk about methods that enable us to deal with exactly that question and ask, is there actually information in this pattern of response across neurons or voxels or whatever it is? Or is that just the noise of variation? Yeah? OK. AUDIENCE: But how many neurons are in [INAUDIBLE]? NANCY KANWISHER: Oh, good question. Let's see. I would say, I think a few million. So let's think about it. Each voxel is about half a million neurons, and these regions are typically maybe 30 voxels, something like that. So somewhere on the order of 20 million, something like that. I mean, with huge error bars. OK, so this is cool and tantalizing, but it doesn't even tell us what these neurons-- what exactly they're participating in. It doesn't tell us if those neurons are telling that person which face is there, or maybe what facial expression the person has, or how old they are, or whether they're male or female, or God knows what else, right? And it certainly doesn't tell us how those neurons get that information. Still, it's cool. OK, so intracranial recording, both with the grids that I showed you and the single unit version. Advantages are, this is the only method in humans that has both pretty good spatial resolution and temporal resolution at the same time. Disadvantage-- well, you need to have a craniotomy, which is no picnic, to put it mildly. You need to have a huge piece of your skull removed, and neurosurgery.
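The question that just came up, whether the different responses to different faces carry real information or are just trial-to-trial noise, is exactly what cross-validated decoding addresses. Here is a minimal sketch on simulated data (the spike counts, tuning, and all numbers are made up for illustration; nothing here is from the actual recordings). Cross-validation is the guard against mistaking noise for information: a classifier fit to noise will not generalize to held-out trials.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_faces, n_trials, n_neurons = 4, 25, 10

# Simulated tuning: each face evokes a different mean firing rate in each neuron.
mean_rates = rng.uniform(5, 15, size=(n_faces, n_neurons))

# Spike counts on individual trials are noisy (Poisson) draws around those means.
X = np.vstack([rng.poisson(mean_rates[f], size=(n_trials, n_neurons))
               for f in range(n_faces)])
y = np.repeat(np.arange(n_faces), n_trials)

# Cross-validated decoding accuracy: train on some trials, test on held-out trials.
# If trial-to-trial variation were pure noise, accuracy would hover at chance.
acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
print(f"decoding accuracy: {acc:.2f} (chance = {1 / n_faces:.2f})")
```

If you shuffled the labels y, the same pipeline would fall back to roughly chance, which is how you would tell real face information apart from "just noise."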
And that means that the only times we get to do this are when it's required clinically and everything is under control of the doctors, as it should be. So the doctors make all the choices about where the electrodes go, and we just get to sit in the background and say, please, please, please, look at these stimuli, but try not to hassle the patients too much. Right now there's a patient in Albany, New York, who has electrodes right over a part of the brain that's really exciting to us, which I'll talk about in a few months. This patient has electrodes over a region that responds specifically to music. We will talk about that later. It's pretty amazing. And for the last couple of days, Dana and I have been-- mostly Dana has been collecting stimuli, because we really want to ask questions about the response of those electrodes. And this patient is not too thrilled listening to our stimuli. So we finally said, oh, OK, tell the patient they can just do Instagram on their phone and we'll play the stimuli in the background. So hopefully, we'll have cool data from that soon. OK, so to say that these data are limited and hard to control is an understatement. We basically can't control it at all. All we can control, occasionally, is the stimuli. And, as with functional MRI, just because we see those beautiful responses doesn't mean we know how those responses are connected to behavior. So that's a real challenge. So that won't do. We need to get beyond this problem. I keep saying this method is great, but it doesn't tell us the causal role of that neural phenomenon in cognition and behavior. Science is all about discovering causal mechanisms. We're not just interested in what is correlated with what, we want to know what's causing what. That's really of the essence, and so we need to do better here. So what are we going to do?
Somebody mentioned a while ago, maybe it was Isabelle, that one of the ways to do that and ask whether the face area is causally involved in face perception is to look at a case where the face area is altered. So there's a bunch of ways to do that. And one of them-- OK, that's just a review. We said faces are recognized fast, but we haven't learned much more. How do we test causality? OK, patients with focal brain damage. Here is a patient. These are vertical slices through the back of this patient's head. OK, let me get oriented. The slice is maybe here. And as you go rightward, you're marching back in the brain like that. Everybody oriented? What's this thing right there? AUDIENCE: Cerebellum. NANCY KANWISHER: Yeah, cerebellum, right. That thing right there is this patient's lesion, which spans several slices going back like that. And this patient's lesion looks a whole lot like my FFA. There's my FFA, greater response to faces than objects, on similar slices. We don't have functional MRI from this patient, so we don't know exactly where this guy's FFA was. But there's a good bet that it was blitzed by that lesion, because it's right in the zone where it usually lands. And this patient can't recognize faces at all. And importantly, the patient is absolutely normal at recognizing objects. No problem whatsoever at recognizing objects. How does this take us beyond functional MRI? Yeah? AUDIENCE: It implies causation. NANCY KANWISHER: Speak up. AUDIENCE: It implies causation. NANCY KANWISHER: Yeah, say more. What does it tell us? AUDIENCE: So because of the fact that area's damaged, and it makes them not able to recognize faces, I can see that there's causality-- that, oh, that area's [INAUDIBLE]. NANCY KANWISHER: Exactly. Exactly. It says you need that bit to recognize faces. But it also says something else. What else does it tell us? AUDIENCE: That you don't need it for recognizing objects. NANCY KANWISHER: You don't need it for recognizing objects.
So this is actually really strong evidence that that bit of brain is very specialized for face recognition. Specialized and necessary for face recognition. OK, so-- AUDIENCE: Can that person still detect faces? NANCY KANWISHER: Oh, yes. Good question, absolutely. OK, so let me just distinguish-- this person here has prosopagnosia-- that means a selective deficit in face recognition-- like Jacob Hodes, who I described yesterday, who has no brain damage whatsoever but has just never been able to recognize faces at any point in his life. So this syndrome can arise just from some weird developmental thing where you're atypical and you're just really bad at it, or it can result from damage to that part of the brain. So now we're talking about the case of damage, but in both cases, people with prosopagnosia have no problem knowing that a face is a face. They just don't know who it is. Yeah? AUDIENCE: Has there ever been a case of people who can't recognize faces who [INAUDIBLE]? NANCY KANWISHER: Indeed. Indeed. Jacob Hodes, who I talked about last time, who is just absolutely awful at face recognition-- including family members, close friends-- can't do it, like, not at all. He has a very normal looking fusiform face area. So after I told you that I had that conversation with him a dozen years ago or something like that, I scanned him. And he had a beautiful fusiform face area, like textbook. It looked-- well, looked like mine, which is a damn fine one if I do say so myself. And I looked at that and I went, oh shit. I better publish this before someone else does. And I didn't get my act together, and then a whole bunch of papers came out saying, oh, people with developmental prosopagnosia have normal looking face areas. Take that, Kanwisher. What do you say about that? And it was a little shocking. But upon further reflection, it's not really devastating, right? I mean, it's bracing, it's informative.
But it tells you that having a face area-- that is, a region that responds more to faces than objects-- isn't sufficient for normal face recognition, right? You need other stuff. What might that other stuff be? Well, the circuits in there need to work right. It's not enough to just respond more to faces than objects. To recognize faces, they need to be able to distinguish one face from another. We don't know if that's working right. What else do you need? AUDIENCE: Memory. NANCY KANWISHER: Memory, absolutely. Yes, you need to remember faces. What else? AUDIENCE: [INAUDIBLE] NANCY KANWISHER: Could be, but in Jacob's case, it was close friends he couldn't recognize. So what's another possible account of how could he have a normal face area and-- yeah? David? AUDIENCE: It might be a gap between recognizing a face and connecting that to recognizing a person. NANCY KANWISHER: Yeah. Yeah. Or to put that neuroanatomically, you've got to get the information out of there. Maybe, for all we know, that little face area is working perfectly. Maybe that face area knows who that person is, in a sense. But if the connections out of that brain region to the rest of the brain are messed up, it doesn't do you any good. You need to be able to read that information out and act on the basis of it. Anyway, that's a big sidebar. Point is, you can have prosopagnosia either as just a developmental disorder or as a result of brain damage. Oh, God, I knew this was going to happen. All right, so OK, so very briefly, it messes up your ability to discriminate and recognize faces, not your ability to detect a face, right? So as [INAUDIBLE] had asked, it's not that they can't tell the thing is a face-- they're fine with that. Importantly, they are normal at voice recognition. So it's not that they're confused about distinguishing one person from another. They can do it fine from audition, just not from vision. In the rare cases where the lesion is small, it can be very specific, leaving object recognition intact.
More often, there's kind of a blurry mess. You have a big lesion and a bunch of things are affected. OK, so we've talked about that. OK now, it's very important in neuropsychological reasoning-- like, we want to say, OK, that's really powerful, the case of prosopagnosia. You lose that bit, you can't recognize faces. And that establishes a kind of causality that we didn't have before with just functional MRI. But is that sufficient to say that that region is specialized for face recognition only? It's not. Whenever I ask this, the answer's no. Your task is to say, why? How could you have great difficulty at recognizing faces and be OK at object recognition, and yet not have machinery that's specific to faces? How might that arise? You guys have suggested this hypothesis in different contexts before. Yes? You look like you know. No? AUDIENCE: But I just think you could do other things. It doesn't have to only be for facial recognition. Because it responds to animals [INAUDIBLE], right? NANCY KANWISHER: Sort of. But the question here is-- OK, let's just start bare bones. You have a lesion, you get around fine in the world, you can do everything else, but you have a real problem recognizing faces. Does that mean that the region lesioned is specialized for face recognition per se? AUDIENCE: There might be other [INAUDIBLE].. NANCY KANWISHER: That's true. There could be other things going on, absolutely. But let's suppose they're not. Let's suppose you had good reason to think there weren't. Yeah? AUDIENCE: It could be-- it'd be a path-- it could be one point in a pathway. NANCY KANWISHER: That's true. It could be totally a point in a pathway. Absolutely, that's another account. What else? AUDIENCE: Well, I couldn't hear the last comment, so. NANCY KANWISHER: He said maybe you damage a pathway. Yeah? AUDIENCE: Maybe there's some other function we haven't tested in that person. NANCY KANWISHER: All these are very good alternative hypotheses. You guys are very good at this.
The one I'm fishing for is, maybe face recognition is just harder than object recognition. Maybe the part that's damaged is just generically involved in object recognition, but you damage part of the object recognition system and face recognition takes a bigger hit because it's harder. Right? Does that make sense? Do you see how the case of prosopagnosia is consistent with that? So that means we cannot infer from these data alone that that region's specialized for face recognition. Now, we can do various things like test them on really hard versions of object recognition. And people have done that. But there's another kind of data that are really powerful here. And that's when we have the opposite syndrome. So there's only a couple of cases of this. The best one is called CK, published in a paper in 1997. You don't need to remember that. The point about this is that this guy has the opposite syndrome. He's severely impaired at object recognition. He can't tell a chair from a table from a car from a toaster, but he's 100% normal at face recognition. Totally normal at face recognition. In fact, better than average. Do you see how that's in some ways even more powerful evidence that face recognition goes on in specialized brain machinery than the case of prosopagnosia? Face recognition isn't even a special thing that sits on top of normal object recognition. It's a totally different pathway. You can have no ability to recognize objects and you're OK at face recognition. Does everybody see how that's really powerful? And how those two kinds of evidence together are vastly more powerful than either one alone? Well, that's called a double dissociation. We'll skip all of that for now.
Double dissociations are particularly powerful forms of evidence in cognitive neuroscience, where we have opposite syndromes that collectively make it really hard to wiggle out and come up with alternative accounts other than that there's a bit of brain that's really specialized for face recognition. It's not just that face recognition is harder, or else you'd never get this syndrome. All right, I just wanted to finish that point. OK now, how much time do I have until the quiz? 15 minutes, OK good. AUDIENCE: 13. NANCY KANWISHER: OK, good. We're going to skip over TMS. I'm sorry about that, guys. Someday I'll learn to time things in a lecture. Actually, I knew this was going to happen, I just-- we'll get back to TMS later. And we will skip to the most amazing method in all of cognitive neuroscience, for which-- we're going to come back to this dude who you met before, who has the face-selective responses in that part of his brain. Remember how I said that even though these data are gorgeous and spectacular, and the only way we can get high spatial and temporal resolution together, they don't tell us causality? Right? That's true here too. Resolution doesn't get you causality. To test the causal role of something, you need to mess with it. So it turns out that sometimes the neurosurgeons electrically stimulate through those same electrodes. And they do that to test the function of those regions causally. They also do it to test their hypotheses about the location of the seizure foci. So in those rare cases, where you have a patient like this with selective electrodes like that, where the clinicians decide that they are going to electrically stimulate through some of those electrodes, then we're in a position to kind of have it all scientifically, right? I don't mean to be so crude. This is a horrible situation for that lovely guy to be in, but scientifically, it's extremely powerful. So I'm going to show you-- we did in fact have an opportunity.
The same guys in Japan emailed me and said, OK, we're going to be stimulating that electrode. What do we do? And I said, OK, have him look at faces and have him look at other objects and ask him if anything changes. And I'm going to show you a video of what happens when that goes on. OK, here we go. Oh, and I need to turn on the audio. OK, he's getting stimulated right there and he says-- [VIDEO PLAYBACK] - [NON-ENGLISH SPEECH] NANCY KANWISHER: He's such a good subject, this guy. - One more time. - [NON-ENGLISH SPEECH] - His eyes. - [NON-ENGLISH SPEECH] NANCY KANWISHER: OK, that tells us that that region is causally involved in face perception. Is it causally involved in perception of things that aren't faces? He's getting stimulated in the same electrode. He doesn't know that there's a face area. - [NON-ENGLISH SPEECH] NANCY KANWISHER: He doesn't know which electrode is being stimulated. - [NON-ENGLISH SPEECH] NANCY KANWISHER: This is a Kanji character on a card here. - [NON-ENGLISH SPEECH] - One more time. - [NON-ENGLISH SPEECH] [END PLAYBACK] NANCY KANWISHER: Awesome, huh? What did we just learn? AUDIENCE: You can trigger it. NANCY KANWISHER: You can trigger it, yeah. Yeah. So what does that tell us about the function of that region? Why is this-- I mean, it's amazing to see, no question, but what does it tell us scientifically? AUDIENCE: It's specific. NANCY KANWISHER: Yeah. How does it tell us that it's specific? AUDIENCE: Because when you stimulate it, he specifically sees a face. NANCY KANWISHER: Yeah. And what happens when he's looking at things that aren't faces? AUDIENCE: [INAUDIBLE] NANCY KANWISHER: Yeah. So if that region was causally involved in perception of things that aren't faces, you might think that it would distort-- the box would look different or the ball would look different or the Kanji would look different. It doesn't, there's just a face on top.
So I think that's very strong evidence that that region is not only causally involved in face perception, but very specifically causally involved in face perception only. Everybody get that? Do I have to stop? OK. OK, I have another video. Consider-- and we'll get back to this later-- consider other alternative hypotheses to this. This is pretty powerful. This is more powerful than most of the other things I showed you, but there's always ways to come up with alternative hypotheses, and that's the business we're in here. So be percolating on what other control conditions you'd want from this guy to really believe these data.
MIT_913_The_Human_Brain_Spring_2019 | 7_Category_Selectivity_Controversies_and_MVPA.txt | All right, so I'm going to finish up some of the things that I talked about with experimental design last time, and then we're going to get on and talk about category-selective regions in the cortex, which of course we've been talking about in various ways all along. But I'll raise some general controversies about that, some alternative views from the kind of one that I've been foisting on you, and what I consider to be some of the strongest, most important evidence against the view that I've been putting forth here. And then we'll talk about decoding signals from brains. OK, that's the agenda. Here we go. OK, so last time I had you guys work in groups to think about experimental design, because really, most decisions about experimental design, once you know the bare basics of the measurement methods, are just applying common sense: thinking about what it's like for the subject, how you're going to get the data you need. So in terms of what exact conditions to run in any experiment, I talked about the idea of a minimal pair, this kind of Platonic ideal of the perfect contrast, which never exists in reality but which you aspire toward. So ideally, you want two conditions that are identical except for the one little thing that you're interested in, OK? And you don't want to have other things that co-vary with the thing you're manipulating other than the thing you're interested in. And that's the crux of the matter in experimental design, OK? You guys talked about what kind of tasks to have subjects do in the scanner. There's a trade-off between doing the most natural thing, which is they're just lying there and stimuli come-- visual, auditory, whatever-- versus the fact that subjects might fall asleep if they have nothing to do. And if they fall asleep, you won't know, and that's not good. So it's sometimes better to have a task, to keep them awake and to tell you that they're awake. A key
important point: don't have one task for one stimulus condition and a different task for a different stimulus condition. If you did that, what would that be? A confound-- exactly, that would be a confound. OK, don't do that. We talked about baseline conditions. So for example, in a vision experiment, staring at a dot or a cross is about as far as you can go in turning off your visual system. Why would you want to bother with that? Well, it's sometimes useful to have that kind of baseline, because we sometimes want to look not just at a difference between two conditions-- remember, one condition alone in MRI tells you not a damn thing; all we can see is differences-- but even just two conditions showing you a difference, that can be ambiguous. So for example, if you had a situation like this, where there was a response in some brain region to the red condition here and the green condition there, they're just two numbers. That's all you have. That is different-- that's kind of meh, you know, there's a difference, but meh, right? But if you have a good baseline, and you really know that zero is zero, or as close to zero as you can get-- now imagine if zero was here. That'd be like, wow, that's a really strong effect. And especially in neuroscience, where we care a lot, as you may have noticed, about selectivity-- about how much more of a response we get in one condition than another-- selectivities are usually more interesting as a ratio than as a difference, as I'm illustrating here. And so you can't get a ratio unless you have a third condition, usually a baseline. All right, a few other things. We talked about how you allocate subjects to conditions. You could have one half of the subjects do the face condition for an hour in the scanner and another half of your subjects do the object condition for an hour in the scanner. That's no good; we don't want to do that. We want a within-subjects design-- we want all the conditions within a subject
whenever we can do that. Why? Well, my best analogy to this is: suppose we decided to grade your assignments as follows. A third of the class is going to be graded only by Heather, this third of the class is going to be graded only by Dana, across the whole semester. You guys are Heather people, you guys are Dana people, you guys are Anya people. Is that fair? No, that's dumb. What if Heather's a hard ass? And she is kind of a hard ass, right? [Laughter] Not that you guys aren't-- they're all a pretty tough crew there. I stand here just waiting for the gong to go off, and you guys should do that-- I'm sure I've already said wrong things, and you know it. So next time, sound the gong and correct me. Anyway, that wouldn't be fair in grading exams, and neither is it good in experimental design. So for all the same reasons-- and you guys can hopefully get an intuition here-- you want to have all the conditions within a person, because maybe one person's brain just activates more than another person's brain. Maybe this person had more coffee; coffee increases your BOLD response. We give away free espresso beans-- chocolate espresso beans-- before scans in my lab to increase the MRI response. OK, all of that: do designs within subjects whenever possible. OK, how do you allocate conditions to runs, these kind of subsets of a whole hour-long experiment, where you scan people for maybe five minutes at a time and give them a break, and another five minutes? Well, the same logic applies. Imagine you're in a scanner for an hour. You're getting sleepy, you're getting bored, you're thinking about other things, you're kind of not on the ball. Those things change over slow periods of time, and so you want to get all those conditions together within a run, just as you want to get conditions together within a subject, whenever possible. OK. And so then-- we didn't really get into this, and I think you did in your groups-- how do we stick all these conditions together within a run? Do we clump them together in a batch, or
do we interleave them and i think most of you guys realize that there's a there's this deep set of trade-offs there um and so you know here's a block what's sometimes called the block design where you clump a condition a whole bunch of trials with one condition then a whole bunch of trials of another with in this case some kind of baseline in between right versus a mixed interleaved condition which is called event related for uninteresting historical reasons um and if it's event related you can have it slow or fast okay so why what what are the why wouldn't you um what are the reasons okay what are the reasons to do this rather than that many of you guys came up with this last time so nothing earth sorry minion lights on biases yeah yeah what kind of biases in a blocked experiment they might be biased by what they've been looking for yeah all kinds of biases like consider this trial here in a yellow condition well you just did a bunch of yellow trials so maybe your yellow system is adapted out or something or biased somehow but you also know that the next one's going to be yellow and there's all that previous stuff and anticipatory stuff all on top of the actual effect of a single yellow trial yeah was that what you were going to say is yeah all of those things the effects of recent history doing the same thing and anticipation of the future all on top of what actually happens in this trial okay so those are not deal killers but they're you know things to be aware of so those are reasons why you might want to go with this condition or this condition why wouldn't you always do this alternate the order and not alternate randomize the order of conditions over time um and bunch them in together why is that not always a great idea people do that sometimes it's not a terrible idea but there are things to keep in mind here what's the challenge with that yeah one possible challenge is that the bold response has like a 10 second window so it doesn't describe delete exactly 
so these the bold responses here are going to be massively on top of each other that's why people sometimes do this it's like okay we'll have a random order and we'll put a big chunk of time in between but if you have to stick 10 seconds in between trials your subject is going to fall asleep and you're wasting you're spending all that expensive scan time you know not collecting enough trials right so there's you know none of these is right or wrong they're right or wrong in different conditions okay so as okay am i saying it right okay um as i mentioned the challenge here let me just give you my crude depiction of this so let's suppose you have a this is time a series of trials with a house a dot a face a dot a dot i don't know where that i want to face the house etc and each of those trials is one second long okay well let's imagine the response in the fusiform face area to that first house you get some kind of middling low response that's going to take many seconds to peak okay let's look at the response to this face well it's going to be higher and it's going to peak out there okay and so then you can look at the response of each of these things right and so you get this whole series of bold responses from each of those different trials okay but now here's the problem what we observe when we measure the response of a little voxel a little three-dimensional pixel in the brain is the sum of all of that something like this it'll be higher than that but some big blurry sum of all that so now we want to go backwards from observing this to seeing the difference between that and that and that's a problem okay so that's not great but here's the crazy thing it's not impossible it's not impossible because by weird mysterious to me still kind of unfathomable um physiological mechanisms these things add up approximately linearly it's really counterintuitive who would think a big sloppy biological system with many different causal steps could produce something that is 
approximately linear? But it does. And because they add up linearly, if you have enough trials, you can take this thing and recover that and that. We're not going to go through the math of it — it's basically addition; it's like solving multiple simultaneous equations, because you have all these different time points. Did everybody get the gist of the idea? That even if what you observe is something really slowly varying and weakly varying, because it's massively blurred, you could in principle, with enough trials, go backwards and solve for that and that. Everybody get that idea? OK. So what that means is that a fast event-related design is a bit of an uphill battle: you can't just look at the response, you have to actually do a lot of math, and you may or may not have enough trials to pull it out. But under circumstances where you really need things to be interleaved, you can pull that off. All right — that's what I just said.

OK, a few other design things that I didn't really talk about in detail. One I've mentioned glancingly, but I want to be more explicit about it: this whole idea, which we've talked about a few times, of defining a region of the brain that we're going to look at with a localizer scan, with functional MRI. We talked about that in the case of characterizing face areas: go run a face-versus-object scan, find the face area in that subject, and then do new experiments and test it. Or, when you guys proposed your snake experiments, you said: first localize a candidate snake-specific region with snakes versus non-snakes, and then do repeated tests in that region that you found in each subject. Why do we have to do all that within each subject? You don't, technically — lots of people don't. But the reason I think it's important, and the reason we do it in my lab — and all of my intellectual descendants do it, and lots of other people too — is that that

region is not in exactly the same place in each subject. OK, so I have a dopey analogy. Brains are physically different from one person to the next. If we scanned you guys just anatomically and looked at the structure of your brains, your brains are as different from each other as your faces are. That is, you all have the same basic structure — the same major lobes and sulci — just as you all have eyes and a nose and a mouth, but they're in slightly different positions. And that's just the anatomy; the function on top of that is even more variable. So it's like trying to align faces. If you have a bunch of photographs of faces and you try to align them, superimpose them on top of each other — even if you allow a few degrees of stretch — you can't do it perfectly; you'll get some kind of mess like this. They're just different, so they don't perfectly superimpose. Well, it's the same deal with brains: you try to align them perfectly from one person to the next, but they're physically different — they do not perfectly superimpose.

OK, so now imagine — this is a totally crazy analogy, but it's the best I could come up with — suppose you're a dermatologist and you're interested in skin cancers that arise in the upper lip. Could happen — there's more sunlight hitting the upper lip, whatever. And you're studying photographs to try to see how many people have it, or something like that. So you could take a whole bunch of photographs and just say, OK, I'm going to look right there. It's usually going to be the upper lip — but it's not always going to be the upper lip, and so you're really throwing away a lot of information by choosing the wrong location. For this person down here, you missed it — you're looking at the wrong thing. So in the same way, if you want to study that region, you've got to find it on each individual photograph. And similarly, if you want to study the fusiform face area — or the snake area,
which doesn't exist, but whatever — you've got to go find that thing in that person individually. Otherwise you're really blurring your data, just as those data are blurred there. Make sense? OK, good.

OK, a different topic about design — these are just different topics; I couldn't find good segues. So far we have been talking about the most rudimentary, simplest possible experimental design. That means two conditions — faces versus objects, snakes versus non-snakes, moving versus stationary, whatever — where you contrast them and ask: is there a higher response in the brain to A than to B? Nothing wrong with that; you can get pretty far with it. But first of all, of course, we can have more than two conditions. You can have one factor — in this case, stimulus category — with many different conditions: faces, bodies, objects, scenes, whatever. OK, that's not rocket science; we've just added more conditions of the same factor. Your factor is the dimension you're varying — in this case, stimulus type.

But we could get fancy, and have four conditions: two factors varied orthogonally, like this. This is sometimes called a two-by-two design: we vary one thing on this axis and another thing on that axis. Why would we want to do that? Well, let's look at an example. Suppose you were going to compare faces to objects — in this case, chairs. Beyond just those two conditions — comparing the response in the brain when people are looking at faces versus objects — we could now ask: does the response in the brain to faces and objects depend on whether you're paying attention to the faces and objects? What if you're paying attention to something else? What if we have little colored letters right in the middle of the display, changing rapidly over time, and your task is to monitor for a repetition of a letter — a one-back task — and it's going really fast, so it's very demanding? You're just looking at those letters as they flash up — two Bs in a row, boom, you hit a button — it's very demanding. But the information hitting your retina is still coming in from the face, because the little letter is tiny; it's not hiding much of the face. What do you think? If you're doing the letter task, will you still get a response in the fusiform face area when the face comes up, and will it be higher than when the chairs come up? Any intuitions? Yes — talk to me about that. "It should be, because I think what we've learned is that the signal, once it hits the retina, is still coming in." Yeah. Will it be just as high? What do you think? "No, I don't think it'd be as high, since attention gives a response that's, like, higher." Everybody see how this is kind of an interesting question? The machinery is the same; all the feed-forward stuff is the same. When I tell you "now you're doing the letter task, now you're doing the face task," the wiring in your brain doesn't change. All the same wiring is there, the stimulus is still hitting your retina, it's still going up the system. So it becomes interesting to ask: how could it be different? Would it be different? All right — I just want you all in the grip of this question that we might ask.

So how could we ask this question? Well, as I just said, we can have subjects in one case do their standard object task — look for consecutive repetitions of a face or of a chair; we have all different kinds of chairs, but every once in a while two in a row are the same. Or we could have this other task, where they monitor for letter repetitions. So does everybody get this two-by-two design? On one factor we're varying the stimulus — faces or objects, those are the two conditions; it's just terminology. And on the other factor we're varying the task: are you doing the face-object task or the
letter task? Yeah, Ben? "OK, so what conclusions does this design allow you to draw, beyond the simpler one?" Good question, good question. Anybody have an intuition here? You mean, other than just doing that and never mind the letters? Yes — exactly the right question. What do you think? Is there any reason to do this? Does anybody care? What would it tell us? Yeah — I forget your name — Lauren? "The effect of attention on perception." Yeah, yeah. So if we want to know not just "is there some bit that responds more to faces than objects" — we've been doing that for weeks, enough already; we know there is — now we want to know: does it matter what you're paying attention to? Is that thing just a little machine that's going to do its thing no matter what, or do you, the perceiver, have any control over it?

Here's another version of that question. You guys can all sit there looking bright-eyed and bushy-tailed, and look at me and smile and nod, and think about whatever you want to think about — and I won't know. You could be bored out of your mind, thinking about what you did last night, whatever, and I won't know. And that's great. Isn't it nice that we human beings are not trapped by the stimulus in front of us at any moment? Instead, we can control our mental processes to some degree. If you choose to think about something else, go for it — you have good judgment; that is fine; it happens to me all the time. You have that ability, I have that ability — not really when I'm lecturing; I kind of have to stay on task, which is why it's exhausting. But we are not trapped; we are not completely controlled by the sensory world impinging on us, and that's a good thing. And so if you wanted to find out how that works, and study how well we can control our own mental processes, you would do something like this. Make sense? OK.

So this design enables us to ask a whole bunch of things. One: does the response in some region, or voxel, or wherever we're looking, depend on stimulus category? This is what we've been talking about for a couple of weeks now. To do that, you could just ask: is there an overall higher response to these two conditions than to those two conditions? You wouldn't worry about task; you'd ask, overall, is there a bit that likes faces more than objects? Everybody got that? That's one thing — it's sort of what we've been doing so far, just comparing two levels of one factor. That's called a main effect — in this case, a main effect of the factor stimulus type.

Or we could ask a different question: does the response of a region of the brain depend on attention? So, overall — never mind whether it's faces or objects; there are photographs flashing up there — does it matter if you're paying attention to those photographs or to something else? For that, we compare the average of these two versus the average of those two. That would be a main effect of task. Make sense? It's just terminology, but it's important to see that we can ask these different questions of a two-by-two design. Everybody with me? Anybody want to ask me something? This particular main effect isn't a very interesting one — it's kind of a weird one — but you could do it. OK, so that's the main effect of attention, or task.

Now we could ask — as someone said a moment ago; was that you, Lauren? — what if we want to know: does the effect of stimulus category depend on attention? That's the kind of question a two-by-two design enables you to ask. To ask it, essentially we look at this row and then that row, and compare them. So we might ask: how much higher a response do you get for faces than for objects when you're paying attention to them — we get some number in that cell — and how much do you get when you're not paying attention to them, when you're paying attention to the letters? Then we can ask: how selective is the face response when it's attended versus unattended? In other words, how does the response to the stimuli depend on the task? It's not rocket science, but it's important to see how this humble little two-by-two lets you ask these very different questions. This question — how the effect of one factor depends on the level of the other factor — is called an interaction. And it's often the most interesting kind of question to ask of any kind of data, whether it's MRI or anything else. You can think of it as a difference of differences — or, more directly, as how the effect of one factor depends on the level of the other. In this case, the terminology is that we're looking at an interaction of stimulus category by task. Make sense? Everybody with the program on how this question is different from the two main-effect questions?

OK, to get some practice with this, I'm going to have you guys come up here and draw some data. Just to get experience with main effects and interactions, we're going to consider a main effect of factor X — an overall effect of X, i.e., the difference between condition one and condition two within X — and interactions of factor X and factor Y, that is, how the effect of X depends on Y, and vice versa. So I'm going to have you draw data. I need my first volunteer — this is not hard. How do I put this thing up? I forgot to check if I have red and black pens; hopefully I do. If you don't volunteer, I'm going to pick randomly, and that could be worse. It's not too awful, is it, Carrie? Unfortunately, I remember your name, so — come on up here. OK, you got an easy one. This pen doesn't write very well, but it will do. That's your red pen, that's your black pen. OK, so we have here the response in
red — or orange — which will be the attended case. We're looking at a response in the fusiform face area — a possible response, in this case a pretty unlikely one, but never mind. So there's the attended case and there's the unattended case, and there's the response to objects and the response to faces. What I want you to draw is a pattern of data in which there's no main effect of stimulus type, no main effect of attention, and no interaction of stimulus type by attention. You're just going to draw four dots — Xs and Os or whatever. "So no effect of the stimulus — that means it doesn't matter whether it's faces or objects?" Uh-huh, exactly. Do the attended task first — do that in red; you have to really lean on it. Oh, it worked for me, sorry. So we'll have the extremely counterintuitive situation where this is attended — there we go. Perfect: no main effect of stimulus type. Good. Now, no main effect of attention — take the blue pen. "And no interaction — no effect of attention relative to attention?" That's right: no main effect of attention means no difference between attended and unattended. "But stimulus type is important?" No, no — it's all the same; we're drawing the everything-the-same situation. Yeah, exactly. There you go — beautiful, nicely done, Carrie. OK, so that's kind of a dopey case — well done, you can sit down. We're just starting basic here. That's what it looks like if you have no main effects and no interactions: everything's the same. That's not going to happen if you're in the fusiform face area — if you get that, there's something wrong with your scan; something went way wrong. But we're just fleshing out the logical possibilities.

OK, I need the next volunteer, who's going to do a main effect of stimulus type, no main effect of attention, and no interaction of stimulus type by attention. Yes — come on up here. What's your name, sorry? Aquali? Right — great. So go ahead and draw that for me. "Just to clarify — this one is unattended?" Wait a minute — yeah, unattended here. OK, here you go. There's a main effect — it's probably easiest if you start with the attended. There's a main effect of stimulus type: great, you're in the FFA, so the faces are going to be higher than the objects, and a main effect of stimulus type says you're going to get a difference. Good — beautiful, well done. Make sense, everyone? Thank you, Aquali. So what would it mean if you got this? Aquali, you're not quite done — if you get that, what's it telling you? "It tells you that it responds to the stimulus, but the attention doesn't make any difference." Yeah — the selectivity you get doesn't depend on attention, in this case. Again, we're just making up data here — just considering the different ways the data could come out and what they would tell us. Everybody got that?

OK, now the plot is going to thicken a little bit. Now we're going to have a main effect of stimulus, a main effect of attention, and no interaction of stimulus by attention. Come on up here, Talia. Here you go. Beautiful — thank you. Everybody see how this is a main effect of stimulus type — faces higher than objects — and a main effect of attention — attended higher than unattended — but no interaction: the effect of stimulus type is the same at each level of attention? "Yeah, something that was unclear: does attention usually affect the selectivity, or the average response?" These are great questions. Right now we're just considering the logical possibilities; we will talk about that later. It's a good question — you should be wondering. OK, so, Talia, tell us: if you found that, what would it mean? "So, because the difference between objects and faces is the same for attended and unattended, that shows that attention has an effect, and the stimulus has an effect, but there's no interaction between them, because the difference is the same." Right — it's like there are two different things: there's face selectivity, and then there's just a big overall effect — if you're looking at the pictures you get higher responses than if you're looking at the letters. Yeah, exactly.

All right, one more — I need a volunteer. David! That's not a volunteer — I realize it's different from a volunteer — but OK. So draw me a case where you have a main effect of stimulus, a main effect of attention, and an interaction of stimulus by attention. Yeah, beautiful — so here we have — oh wait, actually, hang on, hang on. Wait a second: you've got a main effect of stimulus — actually, you don't have the main effect of stimulus here. Did I get rid of that? Yeah, you got rid of that. Now you have the main effect of attention — wait, maybe I said it wrong. Oh yes, you're right; I think I screwed you up here. OK, we want a main effect of stimulus — yeah — and then, if we move it a little bit, like that — exactly. OK, don't go away. Does everybody see how this is a main effect of stimulus — those guys are higher than those guys — a main effect of attention — the green guys are higher than the blue guys — but an interaction: that difference is bigger than that difference? Now, don't go away, David. If you got that, what would you conclude about the fusiform face area? "Well, the FFA — if it was like this, not only does it depend on attention, but the face response depends on attention more than the object response does." That's right. What your data show is that the response to faces is more strongly affected by attention than the response to objects. But another way of saying the same thing is that the selectivity is greater when you're attending than when you're not

attending — make sense? — or, the differential response is greater. OK, great. Thank you. Everybody got these basic ideas? They're pretty rudimentary, and I don't want to insult your intelligence, but I've found that people often don't get main effects and interactions, and an interaction is often the crux of an interesting design, and keeping it straight from the main effects sometimes takes a little doing. So consider: what is the key signature of an interaction? People often draw an interaction where the lines cross, but they don't need to cross — David just showed you a nice interaction where the lines don't cross. OK, moving on — that was all leftovers; that's bad planning. Oh, sorry — what? Oh yes, put the thing up. Good point — or down. Thank you, Chris.

OK, let's talk about category-selective regions of the visual cortex. We've been talking about these all along, but it's time to get a little more critical. First: I've been telling you how there's a patch in there that responds pretty selectively to faces; there's a patch out there on the lateral surface that responds pretty selectively to bodies; and — we haven't mentioned it much, but next week you'll hear more than you want to hear about it — there's a patch smack in the middle there that responds selectively to images of scenes. OK, so you just look at that, and it's damn near impossible not to wonder: what else is lurking in there? What else is in there? And of course we wondered that many years ago — me and Paul Downing, who did the body-area paper; he was my postdoc at the time. We said, well, let's just scan people looking at twenty different categories of objects. And we put all kinds of silly stuff in there. I'm phobic about snakes, so I wanted snakes; he's phobic about spiders; we compromised — in our creepiest condition we threw them both in. It was kind of sloppy, but we had food and plants, because we figured those are biologically important; we
had weapons, and tools, because those are important in other ways. We had flowers, because Steve Pinker has this line in one of his books saying that a flower is a veritable microfiche of biologically relevant information, and he hypothesized on that basis that people might have special-purpose neural machinery for flowers. Sounded like a crock to me, but it's an empirical question, so we threw flowers in there for Steve Pinker. OK. And so then we scanned people looking at all of these things, and we replicated, in every subject, the existence of selective regions for places, faces, and bodies — and we didn't find anything else. None of these other categories produced clear, whopping selectivities in systematic regions of the kind that you see in every subject for faces, places, and bodies.

Now, I hasten to say that there are lots of ways, with any method, to not see something that's actually there. You might not have enough statistical power to see it. It might be that there's a whole bunch of neurons that do that, but they're scattered all over the brain, spatially interleaved with neurons that do other things — in which case MRI will never see it. There are big black holes in MRI images where there are artifacts, and you can't see anything. If the soul was right there, we wouldn't have discovered it yet, because we can't see it in our MRI images. Not that I know what the contrast is for the soul — you could work on that. Do you have a question, Ty? "Yeah, I'm just curious — did you try it on, like, text?" Yes, and we will get to that later — there is absolutely a specialized region for text, and we'll talk about it in a few weeks. We didn't include it in this experiment, but we — and lots of others — have in other cases.

OK, so don't take this too seriously. My main point is just that you don't find a little patch of brain for any damn thing you test — mostly you don't find it. There is some disagreement in the field about the case of tools and hands: there are many reports that if you look at pictures of tools, or pictures of hands, you can get a nice little selective blob. I have looked for both of those many times; I don't see it; I don't know what everyone else is on about — I'm confused about that, and I just leave it as in play; I don't know. But with that exception, there's good agreement: faces, places, and bodies everyone replicates, and most of these others no one replicates. In particular, nobody reports patches of brain that respond selectively to cars, chairs, food, or lots of other things. We have tested snakes, by the way, and not found anything — at least in the cortex. So what does that mean? It implies that some categories are special in the brain — at least at this crude grain that we can see with functional MRI — and that seems pretty interesting and important. Yes? "I have a question about places: did you distinguish between man-made places and natural ones?" We'll get into all of that in excruciating detail next week. It doesn't really make much of a difference — it likes all of those things.

OK, so I've been going around for twenty years saying: see, these categories are really special in the brain and the mind, and that's deep and fundamental; it's telling us something about who we are as human beings, or whatever — sometimes I go off the deep end with huge claims. But not everybody buys this, and so what I want to do is allude briefly to the general ways you could argue against it, and then talk in some detail about one main one.

OK — ongoing controversies. This view here is highly caricatured, and it's actually not right: the brain doesn't have completely discrete little regions. It's a mucky biological system, and if you actually look at the face-selective regions, they have ratty edges and little archipelagos of sub-blobs and stuff — it's kind of a mess. There's a general cluster in that vicinity in most subjects, but it isn't always a discrete blob — unless you blur your data. Take any data and blur it enough, and it looks nice and clean; but if you want to know the actual native form in the brain, unblurred, it's kind of mucky. One could react to that in different ways. My reaction is: what do you expect? It's a biological system. Does it really need to be perfectly oval-shaped with a perfectly sharp edge? I don't really care if it's interleaved with other stuff around the edges. But people react in different ways, and one important alternative view is: look, how do we know these are really things in the brain? I'm talking about them as things — pieces, parts of brain and mind. Maybe they're just peaks in a broader landscape of responses across the cortex. And empirically that's true: there isn't just one butte and then nothing else in the cortex around it; there's some kind of profile. So it's a bit of a judgment call how excited you want to be about a big peak in a fluctuating background, and there's much discussion about that. Is it really just a peak in a broader spatial organization? And if so, what is that broader spatial organization all about? That just pushes the question back — it says we're wrong to think about discrete things, but it still leaves many mysteries about what that continuous gradient is. So that's one line of response, which I think is completely legitimate.

That kind of blurs into the next view, which we've talked about a little bit: to what extent can these things — if I'm right to call them things — be accounted for just by their perceptual features? We've grappled with that in a number of ways so far. One of the first things we asked about the face area is: is it just responding to curvy stuff,
right or round things or whatever right and so there are many lines of work where me and many other people have asked that question and for the most part the answer seems to be there's somewhat you know there's some featural selectivities in these regions but probably not enough to account for their category selectivity but that that one too was still in play and there's this dude in english in england who publishes like several papers a year saying like you know no this thing isn't category selective it's just that um i was going to assign i'm going to try to assign one of his papers to you because i want you to expose you to alternate views but i haven't yet um i haven't yet taught you the key methods you need for that paper anyway so there's room for debate in that question as well then there's just a continuum of okay exactly how selective are these regions like okay i'm excited if a face area responds like this to faces and like that to objects but hey it responds like that to objects is that selective enough you know so there's a lot of debate about what that means okay so there's a lot of room to push back on the simple-minded story i've been serving up to you guys but what i want to do next is talk about what i take to be the most smart and serious challenge which is somewhat different from all of these okay and this comes from a guy up at dartmouth named jim hacksby who published the paper that was assigned for today and i intended for you to like struggle with it a little bit and try to understand it but if you didn't understand it fully i'm going to talk about it here and hopefully that'll make it more intelligible okay so here's the big idea that hacks be there's many ideas in that paper but the part of it that's most relevant to us for now is the following even if the fusiform face area responds weakly to chairs and cars for in contrast with strong response to faces that doesn't mean that it doesn't hold information about chairs and cars okay so all 
along, I've been just talking about one dimension-- does it respond like this or like that? And that's gotten us pretty far. But the essence of Haxby's idea is that we should care not just about the overall mean response; we should ask if there's information present in the pattern of response across voxels, OK? And his point is that even if there's a low mean response, you could still have information in the pattern across voxels, even if it averages to some low number, OK? And that pattern of information could enable you to distinguish different categories. All right, so let's get very particular. How exactly would you tell? So here's what Haxby did, essentially-- or here's the subset of the assigned paper that's relevant to the current question. We want to know, does the fusiform face area hold information about cars and chairs, thereby arguing against its selectivity for faces? Right? I mean, we should care about information in the brain, not just magnitude of response, right? If the brain is an information processing system, we care what information the parts contain, not just how much the neurons are firing, OK? All right, so if we want to know this, here's what you can do-- here's a version of what Haxby did. You scan subjects while they're looking at chairs and cars. You've localized the fusiform face area, so you know where it is, OK? So now you get the response. This is highly schematic-- this is an idealized version of the cortical surface. Remember, the cortex is a surface, so we can mathematically unfold it and look at the magnitude of response of each voxel in the FFA. The FFA isn't square, but we're idealizing it here, OK? Everybody get how that could be a pattern of response across voxels in the FFA when the subject looks at chairs? OK, and maybe you have some other pattern when the subject is looking at cars. Now, certainly, if the subject were looking at faces, all of these bars would be much higher. But our point is that even if these are low, they're
different across voxels, OK? So that's step one. So then what Haxby says is, you do the same thing in the same subject-- you do it again, hopefully in the same scanning session, and you get another pattern like this and this, OK? Now here's the key question: if those patterns are systematic for chairs and systematically different for cars, then there is information in that region about the difference between chairs and cars, OK? Chairs and cars aren't faces, so that's an important challenge to my story about how that region only does faces, OK? So how do you measure that? Well, there are lots of ways. Haxby's is the lowest tech and most intuitive. He just says, let's look at the similarity of this pattern to that pattern-- repeated measures on chairs, same subject: chairs on the even runs and chairs on the odd runs. By the way, why do you split your data like this rather than like this? OK, he does eight runs. We could take the first half of the runs and put those data here, and the second half and put them over there. Or we could take even runs and odd runs. Why is even and odd better than first half, second half? Yeah? AUDIENCE: I guess it doesn't allow the subjects to get used to one particular thing, one after another. NANCY KANWISHER: Well, they're doing the same thing. It's all the same data-- it's just how you analyze it. AUDIENCE: I'm not sure. I think it's probably easier to compare between one time and one face and the other. NANCY KANWISHER: It is-- you can actually do it either way. You scan these eight runs; here they are. You can do that-- I don't know if you can see what I'm doing here-- or you can do this. Why is this better than this? Yeah? AUDIENCE: Maybe they fell asleep halfway through the scan. NANCY KANWISHER: OK, then if you do it like this, the odd and even runs are going to be better compared to each other than first half, second half, right? Makes sense? OK, it's another version of why you do things within subjects. It's the same kind of argument.
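That even/odd split-half logic is simple enough to sketch in code. Here's a minimal, made-up illustration in Python with NumPy-- the function name and toy data are mine, not anything from Haxby's paper. You correlate each half of the data with the other half, both within a category and between categories:

```python
import numpy as np

def split_half_correlations(chairs_even, chairs_odd, cars_even, cars_odd):
    """Haxby-style pattern analysis on one ROI.

    Each argument is a 1-D array: one voxel pattern, averaged over the
    even or odd runs for that category.  Returns the mean within-category
    and between-category correlations across the two halves of the data.
    If within > between, the pattern carries category information.
    """
    r = lambda a, b: np.corrcoef(a, b)[0, 1]
    within = (r(chairs_even, chairs_odd) + r(cars_even, cars_odd)) / 2
    between = (r(chairs_even, cars_odd) + r(cars_even, chairs_odd)) / 2
    return within, between
```

With real data, each array would hold the response of every voxel in the ROI; if the within-category correlations beat the between-category ones, there is information in that patch about the category difference.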
Yeah, OK, so he splits into even and odd, and you ask, how similar are they within a category-- within chairs and within cars? You get two different correlation values-- just how similar are those patterns; you get an r value-- and we compare that with how similar the patterns are between categories: chairs even to cars odd, and cars even to chairs odd, OK? And so here's the key question you ask: if there's information about chairs and cars in this pattern of responses, then the correlations will be higher within category than between category. In other words, two different times you scan looking at chairs, those patterns are more similar than chairs are to cars. Make sense? I mean, it's pretty basic, but it's one of these things that's simple and yet subtle at the same time. Does everybody get this? OK, so you just do these repeated measures, and you look at this pattern of correlations, and if the patterns are more similar-- more correlated-- within a category than between categories, then you have information in that pattern that enables you to distinguish those categories. Yeah? AUDIENCE: [INAUDIBLE] NANCY KANWISHER: Well, wait a second-- oh, that's the same. OK, that's essentially like this. It's just that since we're going from even to odd in the within case, we're going to go even to odd in the between case. You could have done it this way, but-- yeah, OK. OK, so that's the method. What does Haxby find? Well, you guys should all look at the paper some more so you get a sense of it. It's actually really nicely written, even though it's dense-- those Science papers are very dense. But basically, here's what happened. In that paper, he says yes, he can distinguish between cars and chairs in the FFA, and therefore, to quote from his paper, regions such as "the FFA"-- notice the scare quotes he's put in there to diss me; I hear you, Jim-- OK, regions such as "the FFA" are not
dedicated to representing only human faces; rather, they're part of a more extended representation for all objects. Them's fighting words, right? Everybody see how this is a serious challenge, with a very elegant method? OK, so when I first read that paper, it's like, huh, OK, I'm paying attention. But he didn't do everything right-- I didn't like the way he defined the FFA. I found a million reasons to diss it, and I ran my own version, and in the paper that we published, we could not discriminate those. So it's, you can; we can't; you did it wrong; we did it right. Then a few years later, Jim publishes a paper with a collaborator in which they reanalyze their old data and say, actually, you really can't discriminate it very well-- it was significantly above chance, but really lousy. And so they concluded that preferred regions for faces and houses-- that is, regions that respond preferentially to faces or houses-- are not well suited to object classifications that do not involve faces and houses, respectively. But I didn't get to gloat, because right about the same time, we were redoing our experiments at higher resolution, and actually, we could distinguish two different non-faces in the fusiform face area. So that was the little drama that unfolded. And so the current status is, yes, you really can discriminate two different non-face categories within the fusiform face area, even if you do it right-- even if I do it right, and I don't want that result, and I do it right, I can get that result, OK? So that's true empirically. The ability to discriminate is feeble-- it's not very strong-- but it's significantly greater than chance. So does that mean I'm toast, and I wasted the last few weeks telling you guys a bunch of BS that has been disproven? Yeah, David? AUDIENCE: Isn't it kind of like saying that you could use a vending machine as a clock, and then asking, what is this thing for, and saying, well, it's obviously the office clock? NANCY KANWISHER: That's a great--
a great analogy. I love that. Absolutely. Absolutely. So now, to me, the central question-- and here's another example that I think is exactly like that, but even more on point-- is that there are deep nets that people have trained on faces. VGG-Face-- it's really good at face recognition. It has only ever seen faces; it has only been trained on faces; that is all it's about. And if you feed it chairs or whatever-- I have chairs and cars-- it can discriminate between chairs and cars. So even if you have this perfect representation that's only been trained on faces-- that has only evolved, if it evolved; we'll get to that later-- to deal with faces, it can still give you a somewhat different response to chairs and cars, and that doesn't mean that that's what it's doing, right? So I think this is a really important challenge. But I think, centrally, crucially, what we really need to be thinking about-- maybe Quiley has a contribution. Yeah? AUDIENCE: So if it's only been trained on faces and you feed it a chair, what's its output for that? NANCY KANWISHER: OK, so it's just a bunch of feed-forward layers that are connected, with boatloads of units at each stage, connected in a systematic pattern. And once you train it up, you can feed it any stimulus and collect the response at the top, OK? So even though it is designed for, and has only been trained on, faces, you can feed it non-faces and get the response out at the top. But that is not the category layer-- not the top layer, where it says, that's Joe, or that's Bob. Just before that layer, there's a whole bunch of units that have some representation distributed across them. You can take that, try to read it out, and ask if there's information there, OK? I'm not giving you all the details of how you do that, but hopefully you get at least the gist, and later in the semester, Katharina Dobs is going to tell you more about how you do all this kind of stuff. OK, I spent a lot of time in the last
few weeks talking about a key difference between two different kinds of methods-- one set of methods that allows this kind of inference, and another set of methods that allows that kind of inference. I'm trying to give you guys a clue here. Actually, what I'm going to do is let you percolate on this. I don't think this is obvious-- I worried about this for years. I think there are many answers to it; it's not cut and dried. I will say, I have already presented to you at least two different lines of work that provide an important counterargument to this. One of the people who gave me crappy teaching evaluations last year said, she told us about counterarguments and then made us tell her how they could, in fact, after all, be consistent with her data. I thought that was weird-- I was just trying to teach people to think about data. But anyway, I won't make you do that, because somebody didn't like that before. But you can think about it, and we'll talk later. It's actually good to think about, and we will come back to it, but I want to get on with the rest, OK? I mention all this because it is an important challenge. Yeah? AUDIENCE: I'm wondering, if objects are not processed in the FFA, they must be processed somewhere else? NANCY KANWISHER: Totally. Somewhere else, totally. I had a whole piece of this lecture on that, and then I thought, for once, I'm not going to go over my time, so I'm not going to talk about that. But remember, there are all those other bits of cortex-- I've just identified a few particular ones, and there's lots of cortex in between. And the simple statement is, there's a lot of nearby cortex, near the FFA and the PPA, that seems to respond generically to object shape. And the first-pass guess is that there's a general-purpose visual machine in there, in addition to some more specialized ones. But I'm not going to say more at the moment. I'll just say-- you may read this in papers-- it's sometimes called
LO, or LOC. That's a kind of shape-selective region, which is arguably the generic, let's-process-everything-else system, OK? Only if it's a clarification question-- OK, ask it. AUDIENCE: No, I was just wondering which came first-- this work, or the transcranial stuff? NANCY KANWISHER: Ah, good question. The transcranial stuff has actually been going on for a long time, but the relevant kind that I talked to you about is more recent. And you're right-- it is one of the very strong answers to this kind of critique. There are several, actually-- I've told you about three answers to this so far. But think about it, OK? So what we're going to do now is talk about not just this particular use of this method to ask a serious question about the selectivity of regions in the ventral visual pathway. Now, what I'm going to argue-- actually, I think I just said all of this-- is that what Haxby has given us is also a method to ask what information is present in this little patch of the brain, and that's an awesome thing. So let's go on and talk about that. Let's talk about neural decoding with functional MRI. So that was an instance of it, but I'm going to cash it out in another way, more generally. So let's take the case where there's a person with a patch of their brain, and a pattern of response across voxels in that patch of their brain when they look at some stimulus. Let's suppose you're given this, and you want to know, what was that person looking at to produce that pattern? What was the stimulus out in the world that produced that pattern? Can you do that? So more generally, can you read the mind with functional MRI? Or, maybe a little more honestly, can you at least tell what the person saw from their pattern of brain response, OK? Everybody get the question here? OK, how can we try this? Well, they're all variations of that Haxby method that I just told you about, OK? But let's walk through this. So the first thing you need-- you have this pattern, and you're trying to figure out what
stimulus produced that pattern in that part of this person's brain. Well, you need a decoder. You need to know what those voxels respond like when the person looks at different things where you know the answer, OK? So what you do is you scan the subject on a bunch of different conditions to get your decoder, and then you can take your unknown data and compare it to those decoder data, OK? So in particular, you have to train your decoder. You scan the person looking at, say, shoes, and you get a pattern. You scan them looking at cats, and you get a pattern. Maybe you scan them looking at five, ten, a hundred other things-- probably not a hundred; you don't have enough scan time-- but some number of things. And so now you know, this is how those voxels respond when the person looks at shoes, and this is how those voxels respond when they look at cats, OK? Now you test your decoder with your mystery pattern. You have your mystery, unknown pattern, and you want to know, was that shoes or cats? OK, well, you can just look-- what is it more similar to? All the methods are versions of that; they're just fancy mathematical versions of that. So what do you think produced that pattern? AUDIENCE: Shoes. NANCY KANWISHER: It's more similar to the shoe pattern, exactly. You guys just did neural decoding, OK? So that's exactly how you do this. There are all kinds of ways of doing this, from just saying, is this more correlated with that than with that-- that's Haxby's version-- or you can put a whole big fancy machine learning rigmarole in there to do pattern classification, because that is, after all, what machine learning is so awesome at-- pattern classification. And this is just a straightforward pattern classification task: train on these, test on that. Is that sort of intuitive, what we're doing here? OK, so that's the agenda-- that's the logic of how we do this. And so does that work? Well, a little bit. But you don't have to worry, at least at the moment, because there are a million ways to defeat it.
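That "what is it more similar to?" step is the entire decoder. Here's a minimal correlation-based sketch in Python with NumPy-- an illustration of the logic only, not any lab's actual pipeline, and the names are mine:

```python
import numpy as np

def decode(mystery_pattern, training_patterns):
    """Classify a mystery voxel pattern by correlating it against each
    labeled training pattern and returning the best-matching label.

    training_patterns: dict mapping a label (e.g. 'shoes', 'cats') to
    the 1-D voxel pattern measured while the subject viewed that category.
    """
    score = lambda label: np.corrcoef(mystery_pattern,
                                      training_patterns[label])[0, 1]
    return max(training_patterns, key=score)
```

So `decode(mystery, {"shoes": shoe_pattern, "cats": cat_pattern})` returns whichever label's training pattern correlates best with the mystery pattern. Fancier methods-- linear SVMs and the rest-- swap in a different classifier but keep exactly this train-on-these, test-on-that logic.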
About 10 years ago, I was getting called up by legal types all the time, because, are people going to use functional MRI to detect lies? And I thought this was a total crock, and I was going around giving talks on all the reasons why nobody has to worry that they're going to be compelled to testify by being shoved in a scanner and having their brains read. I mean, it's not a totally stupid thing to worry about. But-- I don't think this will happen, but-- lest anybody try to read your mind against your will while you're in an MRI scanner, you can totally foil it in any number of ways. One, move your head. Two, if they've got your head bolted down, move your tongue-- you totally mess up your whole signal if you move your tongue, OK? Three, do mental arithmetic. You can totally shut down whatever they're trying to do if you think about something else. Anyway, so we don't need to worry about it. It's not good for insidious kinds of legal efforts, but it is pretty good for science, sometimes, OK? So there are lots of versions of neural decoding with functional MRI. We've been talking so far about decoding functional MRI patterns of response across voxels. That's called MVPA-- multi-voxel pattern analysis. You don't need to memorize that, but when you see MVPA in a paper, this is what it's talking about, OK? Within MVPA, you can ask it of a particular ROI in the brain-- region of interest-- like V1, or the face area, or the body area, or something else. But you can also apply it to the whole damn pile of data from the whole brain and say, can I tell what this person is thinking by looking at their whole brain, OK? Beyond functional MRI, you can apply it to lots of other kinds of neural data. So you can do monkey neurophysiology, as we discussed briefly last time, where you have actual firing rates from individual neurons, and you can look at the response of each
stimulus class in each neuron in a region of the brain. And you can do the same deal, running a pattern classifier or a simple correlation method on the pattern of response across neurons rather than voxels. Everybody see how that's sort of the same deal, just better? OK. Or you can do magnetoencephalography, as we talked about-- stick your head in the big, expensive hair dryer, collect magnetic signals from all around the head, 300 channels. And now those magnetic signals are changing over time. So the cool thing about neural decoding with MEG is you can say, OK, let's take the data from exactly 80 milliseconds after the stimulus flashed on, and ask, what can you decode then? What can you decode at 100 milliseconds? 120 milliseconds? You can see the growth of information over time, as neural information processing proceeds, by running the decoder separately at each time point, OK? I'm going to try to squeeze more talk about that into a future lecture, because I think it's cool, and we're doing a lot of it in my lab right now. Does everybody get the gist of this, at least? OK, so that gives you the time course of information extraction, OK? Similarly, there are lots of different decoding methods you can use-- as I mentioned, the simple, low-tech, Haxby-style correlations, or you can use something called linear support vector machines, or various other kinds of fancy machine learning math, OK, to do those classifiers. OK, let's see, do I have time to do this? I'm going to skip this-- yeah, we're going to skip that. It's cool, but we're going to cut to the-- oh, I don't know, we'll do it all. We've got time, all right? OK, so-- now that I've wasted all that time deciding whether we had time-- OK, we're going to compare how well this works when you do it on MRI versus how well it works when you do it on neurons in monkey brains, OK? So there was a beautiful paper a few years ago that looked at this. So the question is, here are these face patches in monkeys that I
told you about, and that David Leopold will be talking about at 4 o'clock today. And so the question is-- this particular one, AM, one of the nice face patches up there-- these guys wanted to know what information is represented up there in face patch AM. Is there information about different individual face identities? Can you use it to decode which face the monkey saw, OK? And so they did this experiment two ways. One, they did monkey neurophysiology. They recorded from 167 different individual neurons in that region, and for each neuron, they measured its response to five different faces, OK? In another condition, they popped the very same monkeys in the scanner, and they scanned them with functional MRI, and they did the same experiment. They measured the magnitude of response of each of 100 voxels in that same patch of brain, in that same monkey-- the MRI response of each of those hundred voxels, for each voxel, to each of those five faces. Everybody get that this is asking the same question-- how well can you decode face identity from individual neurons, or from functional MRI, in the same animal? And the answer is damn depressing. The answer is, you can decode identity really well from neurophysiology, and you can't do it worth a damn with functional MRI. Big bummer. So that's a drag, but it's just what it is. Presumably-- remember, each MRI voxel has hundreds of thousands of neurons in it, so the real miracle is that we ever see anything at all. And when we can't see the neural code with the resolution that we need to tell whether it's got information about face identity, that's just because we're averaging over so many neurons, OK? That was my lament at the end of the lecture on Monday-- there are so many limitations in human methods, and here's one of the key ones, OK? What are the implications? It sucks. Anyway, OK, I want to get one more idea out. Yeah? AUDIENCE: Is that specific to fMRI, or does it also translate to EEG? NANCY KANWISHER: EEG is much worse. Much worse, oh my god. Yeah.
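The averaging point can be made concrete with a toy simulation-- all the numbers here are invented, just to show the logic, not the monkey paper's actual analysis. Each simulated neuron gets its own identity tuning; we then decode identity either from the neurons directly or from "voxels" that each pool 25 neurons and add scanner noise:

```python
import numpy as np

rng = np.random.default_rng(0)
n_ident, n_neurons, neurons_per_voxel = 5, 100, 25

# Invented tuning: each simulated neuron has its own response level
# to each of five face identities.
tuning = rng.normal(size=(n_ident, n_neurons))

def trial(identity, noise=0.3):
    """One trial: each neuron's tuned response plus independent noise."""
    return tuning[identity] + noise * rng.normal(size=n_neurons)

def to_voxels(neuron_pattern, scanner_noise=0.5):
    """Pool blocks of 25 neurons into coarse 'voxels', add scanner noise."""
    pooled = neuron_pattern.reshape(-1, neurons_per_voxel).mean(axis=1)
    return pooled + scanner_noise * rng.normal(size=pooled.size)

def accuracy(transform=lambda p: p, n_train=20, n_test=40):
    """Correlation-decode identity from (possibly voxelized) patterns."""
    templates = {i: np.mean([transform(trial(i)) for _ in range(n_train)],
                            axis=0)
                 for i in range(n_ident)}
    hits = 0
    for _ in range(n_test):
        i = int(rng.integers(n_ident))
        p = transform(trial(i))
        guess = max(templates,
                    key=lambda j: np.corrcoef(p, templates[j])[0, 1])
        hits += guess == i
    return hits / n_test

acc_neurons = accuracy()           # decode from 100 individual neurons
acc_voxels = accuracy(to_voxels)   # decode from 4 pooled, noisy voxels
```

In this toy version, decoding from the individual neurons comes out near perfect, while the pooled, noisy voxels do much worse-- the same qualitative gap, for the same reason: the identity code lives at a finer grain than the voxels.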
The only thing that might be better someday is intracranial recording, but even there, you usually don't get enough electrodes. So you need these very rare cases where you have very high-density grids of intracranial electrodes that some surgeon has decided, by chance, to put on a part of the brain where you happen to have hypotheses, and you happen to be incredibly lucky enough to test your hypothesis-- and that's very rare. Did you have a question, Quiley? No? OK. OK, so I've been talking about neural decoding, and that's a way of asking what information is present in this batch of neurons or this bunch of voxels. And that's a really deep question to ask for cognitive science, because we're interested in information processing, and we want to know what's represented in each region-- it's really the crux of the matter in cognitive neuroscience. But we can also use it to ask, in a richer way, about the nature of that information in each region, OK? So suppose we want to know what exactly is represented there. We want to know not just that it can distinguish shoes from cats-- OK, that's OK-- but suppose we want to know, how is it doing shoes versus cats? Does it just know, for example, that shoes are elongated this way and cats are roundish, and that's all it's using to do its classification? In other words, it's not really shoes and cats; it's this versus that, or something, right? If we want to know how abstract those representations are, or how invariant they are to variations in viewing conditions, then we can do the following cool thing: we can train the decoder on one set of stimuli and test on a different kind of stimuli, OK? So for example, we can ask, are those representations of shoes-- or are they representations of shoes that are invariant to, for example, color and viewpoint-- chosen just because that was the nicest shoe I could find when I was searching an hour ago? OK, so if we train on these and test on that, is that going to work? Is it going to know that this is the same kind
of thing as that? If it does, what have we learned about that shoe representation? Yeah? AUDIENCE: It's kind of generalizable. NANCY KANWISHER: It's very generalizable, yeah. AUDIENCE: Different perspective. NANCY KANWISHER: Totally. Totally. It's not just this; it's something closer to shoeness. We don't know exactly how far it goes until we test more conditions, but exactly-- we've shown that it's really abstract and generalizable. That makes it more useful; that makes it more cognitively interesting. We could even go off the deep end and say, OK, is it the concept of a shoe? We could scan people reading the word "shoe" and ask, is that going to work, OK? Anya's doing experiments like that; there are various people looking at this kind of thing. And so you can ask, at any level, how general or invariant is that representation, OK? So neural decoders are not just gimmicks to try to say, oh, I can use fMRI and read out what this person saw. They are powerful methods in science to characterize mental representations and to characterize how abstract they are. You |
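The train-on-one-set, test-on-the-other move just described is the same decoder with a twist in what you feed it. Here's a toy sketch in Python with NumPy-- invented patterns, purely illustrative-- of testing whether a shoe/cat code generalizes across viewpoint:

```python
import numpy as np

rng = np.random.default_rng(2)
n_vox = 80

# Invented voxel patterns: each category's response is an invariant
# component shared across viewpoints plus a viewpoint-specific component.
invariant = {"shoe": rng.normal(size=n_vox), "cat": rng.normal(size=n_vox)}

def pattern(category, viewpoint):
    """One measured pattern: invariant part + viewpoint part + noise."""
    return (invariant[category] + 0.5 * viewpoint
            + 0.3 * rng.normal(size=n_vox))

view_a, view_b = rng.normal(size=n_vox), rng.normal(size=n_vox)

# Train the decoder on viewpoint A only...
templates = {c: pattern(c, view_a) for c in invariant}

def decode(p):
    return max(templates, key=lambda c: np.corrcoef(p, templates[c])[0, 1])

# ...then test it on viewpoint B.  Above-chance generalization means the
# code carries something closer to "shoeness" than to any single view.
correct = np.mean([decode(pattern(c, view_b)) == c
                   for c in ("shoe", "cat") for _ in range(10)])
```

If `correct` stays near chance instead, the region's code is view-specific; testing more and more held-out variations (color, size, even the written word) maps out how abstract the representation is.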
MIT_913_The_Human_Brain_Spring_2019 | 10_Development_Nature_Nurture_I.txt | [DIGITAL EFFECTS] NANCY KANWISHER: So let's start with one of the deepest questions humans have ever asked themselves. We're not messing around in this class; we're going for it. And one of the deepest questions is, where does knowledge come from? And as you'll know, if you've taken even a teeny bit of philosophy or read even a teeny bit, you know that some of the classic views in Western philosophy-- especially the empiricists, Locke and Hume-- argue that all knowledge comes from experience, right? On the other hand, there are a number of other schools of thought in Western philosophy, of which a dominant figure is Immanuel Kant, who argued that experience alone is not enough. You can't just have experience and figure out all the stuff we have figured out. And so he argued that there has to be what he called "a priori conditions" of cognition, which can't be derived from experience themselves, but have to be given prior to it, OK? So you have to build some structure into a mind or brain to get it off the ground. You can't just start with absolutely nothing and get anywhere. OK, and he also argued that one of the key elements of this a priori structure that you have to build in was space and time-- organizing principles of cognition and thinking. And so in his version of it, space is nothing but the form of all appearances of outer sense, and it can be given prior to all actual perceptions and so exist in the mind a priori, and can contain, prior to all experience, principles which determine the relations of these objects. OK, well, is that just empty philosophical hot air? It's kind of hard to understand exactly what he means. You actually have to go spend a good deal of time reading him to make any sense of it-- or cheat and get your friends to tell you, as I do.
But no, I'll argue it's not just empty philosophical hot air-- that these are, in some important sense, empirical questions. And they are empirical questions that our field addresses very directly. And so on Wednesday, we'll talk about whether your representations of space in your head are innate or not. It's pretty much directly what Kant is talking about-- or the modern version of what he was talking about. And today, we'll talk about which aspects of the brain are innate and which are learned, OK? That's the agenda. OK, so this little kind of Easter egg brain here very schematically shows you some of the regions that we've been talking about in this class so far, with regions that are, to varying degrees, specialized for processing things like shape, and color, and motion, and faces, and places, and bodies-- visually processing all of these things in approximately those locations. And as I've mentioned, these regions are present in approximately the same location-- with some individual variability-- in pretty much every normal person. One of my lab members says, you keep saying that, and it's just not true. There's some percent of subjects who just don't show these things. He's kind of right, OK. So maybe, I don't know, 5%, 10% of subjects, you wouldn't see some of these things. And we've never actually done the serious work of bringing those subjects back, scanning the hell out of them, and finding out whether they were just asleep in the scanner or it was a bad scanner day, or whatever it was. I bet they all have them, and it's just sometimes you don't see it, but I'm trying to be a little more honest. OK, but you just look at this. Given this very schematic version of it, you say, how would you build this system? How would you start with an embryo and build into a genome, or build into whatever experience is going to happen to this developing organism?
How would it end up with this very particular structure, with those things in approximately the same place-- or at least the same relative positions-- in all subjects? The face bits are always lateral to the color bits. The place bits are medial to the color bits. The shape bits are out on the lateral surface. It's like always like that. How do you build a system like that? I find it hard not to immediately think, well, some aspect of this must be innate, or how would it be so damn similar in each individual, right? But it's not the only hypothesis. Some big part of it-- even if some aspect of this is innate, some big part of it may also be learned or derived from experience, OK? So what do you guys think? Do you think the fact that these structures are in systematically the same place across subjects means you have to build in all that stuff, somehow figure out how to get a bunch of As and Ts and Gs and Cs in your DNA to give you a blueprint for how to build that structure? What do you think? Yeah? AUDIENCE: I mean, it's a combination, but it's hard to, then, think about how that's involved in [INAUDIBLE] generation and then kind of become more innate? NANCY KANWISHER: Yeah, so to some extent, experience-- what I mean here is learned from experience within each individual. You could argue that "innate" really means "learned through the experience of our ancestors, and hence wired into the DNA," yeah. Anyway, I find this not an obvious question, and so we'll talk about what the data say here. So first of all, we're going to do some very basic facts about brain development, just to get the picture of what we're talking about physically with the development of brains. So we can ask, what is present at birth? And so it turns out that most of the neurons in the adult brain are generated before birth, OK? So most of the actual neurons are generated early. You're not making a whole lot more after birth-- a few, but not a lot.
Further, the current view is that most of the long-range connections-- that means like a connection between this part and that part of the brain-- are also present at birth, OK? Nonetheless, even though a lot of stuff is present at birth, a lot of stuff changes in the first couple of years of life. Most obviously, the brain doubles in volume in the first year, from a two-week-old, to a one-year-old, to a two-year-old. The cortical thickness-- you can see here the dark stuff, which is the gray matter out there-- increases sharply between years one and two. But also, the complexity of each individual neuron increases dramatically in the first few years of life. So here's a schematic picture of a piece of gray matter here. We have some number of neurons here with a few little processes and a few connections. And over the first couple of years of life, those connections get much more dense, and the neurons get much more complex. OK, and the final thing that really matters early on in development is that myelination happens rapidly in the first few years. And remember, myelin-- this is a little reminder-- neuron with that yellow stuff, which is a bunch of cells that wrap around the axons, the long processes of a neuron. And that myelin sheath builds up a lot over the first couple of years. And that's important, because the myelin sheath enables those neurons to send their signals faster down their axons, OK? OK, and this is just a picture of different-- of a vertical slice like this through the anatomy of infants of different ages, from 107 days up to about a year. And the colored stuff in the middle is degree of myelination, which you can see with various kinds of anatomical scans. You can see it starts at 107 days with a tiny little bit in the middle, and it gets more and more myelinated and moves from center to periphery over the first year of life. So all those fiber pathways are getting accelerated as they get wrapped with myelin and hence sped up. 
OK, all right, so bottom line is most neurons and long-range connections are in place at birth, but development continues rapidly in the first two years, especially increasing complexity of neurons and synapses and myelination of long-range connections and white matter, OK? So it's just basic anatomy, nothing functional yet. OK, now we're going to consider in some detail the case of face perception, not really because that's what I work on-- or used to work on, mostly-- but just because there's a very rich set of data where people have grappled with this question in the case of face perception. Next time, we'll talk about the navigation network and reorientation-- what parts of that system might be innate and learned. So I'll just say right out of the beginning that this is an extremely active area, where every time I turn around, another paper comes out that contradicts a previously-published finding. And so that makes it fun, but it means there isn't going to be some really tight, perfect story here. And I'd rather take you guys straight to the cutting edge, even though it's kind of a mess, than give you a nicely packaged but surely wrong picture, OK? Because again, I think what matters most in this area is how do you go about answering these questions, rather than what is the current state of the thoughts about the answers. OK, so how are we going to think about, how does face perception develop? Well just to get started, I'm going to show you a very brief movie of a 72-hour-old monkey, and see what you think. He's sleepy. He's pretty interested in that face. And watch now. Hmm. [LAUGHS] Pretty cute, huh? So what do you think? What does this tell us about face perception? Yeah? AUDIENCE: Did they try just moving anything in front of him? NANCY KANWISHER: Good question. Good for you. Quily, is that right? AUDIENCE: "Quile-y" NANCY KANWISHER: "Quile-y", all right. Yes, so Quiley asked, did they try moving just anything in front of him? 
Absolutely the right question. So that monkey seems pretty interested in that face, but a face is a moving thing. Motion is very salient to young primates-- humans, and monkeys, and many others, absolutely. What else did you see in here? Yeah. AUDIENCE: It started imitating [INAUDIBLE].. NANCY KANWISHER: Yeah, kind of. I mean, the person-- the adult human there-- was moving their mouth open like this, and the monkey was doing something with their mouth. So what would that require? Sorry? AUDIENCE: I like, I have another. Also, was the monkey allowed to touch its face before this? NANCY KANWISHER: Yeah, good question. Good question. 72 hours is damned early, but it's not zero experience, right? So who knows what they've managed to pick up that early. There are actually studies in humans, which I'm hoping Heather knows better than me. Those Andy Melzoff things. How young are those humans? Those are like first hour. AUDIENCE: Yeah, [INAUDIBLE]. NANCY KANWISHER: I think it's a-- AUDIENCE: [INAUDIBLE] NANCY KANWISHER: So there are studies in humans where you can show versions of that, with newborn infants copying-- the experimenter comes up and sticks their tongue out at the infant, and the infant does that back, kinda sorta. Certainly within the first two days, maybe even earlier, OK? OK, so it's very suggestive. It's tantalizing, but we need controlled conditions. It doesn't tell us everything we need to know. OK, so if we think about it, there are ends of the hypothesis space about how all of this could go. As Alana mentioned, everything is both genes and experience. That's true, but there are very, very importantly different ways in which genes and experience can act together-- some in which a big part of the heft of what the adult form has might be built in, and other stories where most of the structure comes from experience. So just because everything is both doesn't mean we shouldn't flesh out exactly what comes from what. 
So on one end of the spectrum, you might imagine that there's some very, very rudimentary precursor that has to be built in, plus a learning mechanism, OK? Or a bunch of rudimentary precursors, which are just there to get the system to learn in the right way, OK? And so we'll talk shortly about the idea that there might be some kind of innate template for faces that gets monkeys and humans to look at faces. And then, the idea is once you get them to look at a face, then experience can take over from there and do the rest. But you've got to get them to collect the right input. And there's lots of interesting computational work going on now where people are using various computational models to say, what do we have to build into, say, a convolutional neural network or some other kind of computational model to get it to do some complicated thing? I just came from a job talk the last hour-- really amazing talk-- where the guy is showing that if you build in, basically, curiosity early on in a network, you get much more general learners than if you build in a bunch of goals for a developing network to seek. Anyway, it's a very active area, and the paper that I just decided to assign to you guys, just kind of skim it and get the gist. The basic idea-- this is from Shimon Ullman, who is a very deep thinker in this field. And he argues that hands are very important in infants. Faces are important, but so are hands, because hands do stuff. And we're social primates, and we want to learn from other social primates like our parents. And watching their hands is extremely informative. Whatever they're doing with their hands is probably stuff we need to learn about. And further, we need to know where they're looking, right? So gaze perception. I think I did this demo before. If I'm talking to you guys, and I start doing that, it's really hard, even though you know I'm just faking you out, not to have your attention pulled over there, and infants need to learn that as well.
So Shimon Ullman's basic idea is that you can start with an extremely rudimentary system, and all you have to build in is this idea that he calls "mover," right? So the idea is that if you look in a whole set of, say, YouTube videos, and you just look for patches of the image that are moving, that's no good. It won't be a hand. It might be a whole animal, or a face, or something else. But if you look in YouTube videos, a proxy for natural experience-- it's OK; it's not perfect, but it's something-- you look for a patch of the image that moves over and then causes another previously-stationary image patch to move. That's what happens when we pick stuff up, OK? And so his idea is you can build in this extremely simple thing-- Mover, which is a very simple visual algorithm, can find image patches that move over and cause another image patch to move-- and then the two image patches move together. And Mover will enable you to identify hands in images pretty well. He looks in YouTube videos and shows that it's really good at picking out hands. And then, further, once you've picked out hands, that's a really important teaching signal in teaching you to read gaze. Because often, people look at their hands before they do things with them, yeah? So the idea is there's a very active ferment now in computational modeling saying, how can we start with just the most rudimentary, minimalist stuff that has to be built in, and then build on experience to get the rest from there? Is that idea clear? It's worth reading that paper, though. It's beautifully written. He's brilliant. OK, so that's one end of the spectrum. Nobody thinks that you learn absolutely everything from experience. You've got to build in something. Plus, we know all those neurons are there at birth. And so the idea is some version-- the minimalist nativist view says you build in a few very rudimentary things, and they're enough to bootstrap learning.
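The "mover" idea described above can be sketched in a few lines of code. This is a hypothetical one-dimensional toy, not Ullman's actual algorithm: frames are lists of integers (0 = background), and the detector flags the first moment a patch that has been sitting still starts to change right next to a patch that was already in motion-- the signature of one thing picking up another.

```python
# Toy 1-D sketch of the "mover" heuristic -- NOT Ullman's real algorithm.
# frames: list of equal-length lists of ints; 0 = background.
# Report the first time step at which a long-stationary occupied pixel
# starts changing adjacent to other motion in the same frame.

def detect_mover_contact(frames, min_still=2):
    n = len(frames[0])
    still_for = [0] * n                      # consecutive unchanged frames per pixel
    occupied = [v != 0 for v in frames[0]]   # pixel held an object last frame
    for t in range(1, len(frames)):
        now = [a != b for a, b in zip(frames[t - 1], frames[t])]
        # a long-stationary, occupied pixel that just changed
        disturbed = [i for i in range(n)
                     if occupied[i] and still_for[i] >= min_still and now[i]]
        # other motion in the same frame (the candidate "mover")
        movers = [i for i in range(n) if now[i] and i not in disturbed]
        if disturbed and movers and min(
                abs(d - m) for d in disturbed for m in movers) <= 1:
            return t                         # contact event: mover reached the object
        still_for = [0 if now[i] else still_for[i] + 1 for i in range(n)]
        occupied = [v != 0 for v in frames[t]]
    return None
```

For example, a "hand" (value 1) sweeping rightward toward a stationary "cup" (value 2) triggers the detector only on the frame where the cup is displaced; a hand moving through empty space never does, because its own leading edge was never a stationary object.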
OK, on the other end of the spectrum, you might think-- and many have proposed-- that we're born with a nearly adult-like system that only needs fine-tuning from experience, right? Nobody thinks that zero experience is necessary. That would be kind of crazy, or implausible. But on the other extreme, this view is that most of the stuff is built-in. OK, everybody get the theoretical space here that we're considering? OK, so what kind of data can constrain these questions? Well, one obvious question is, what is present at birth? What is the initial state-- or as close as we can get to it? Then we can ask, how does the system change over time from birth onward? And then we can ask, what are the causal roles of experience and biological maturation in that change after birth? So that's the whole set of questions we'd need to answer to understand how development works. And a very central-- if not the central-- challenge of development is that experience and maturation are deeply confounded as you look from birth onward, right? So five-year-olds are both more mature-- they've had more time for their biological systems to wire themselves up, including their bodies, and their brains, and the whole bit-- and maybe some of that is just on a maturation kind of autopilot. But they've also had a lot more experience. So one of the central challenges of development is trying to figure out how those later stages-- like two months old, one year old, 10 years old-- how those changes that happen between birth and those stages can-- how can we tease apart which of that came from just maturation and which came from experience? All right, OK. Importantly, things that happen well after birth need not be learned, right? So think about puberty. Puberty is going to happen around 10, 11, 12. And OK, you've got to eat and have some basic inputs to your system, but it's pretty much going to happen. 
It's not a product of what you were taught or the particular information that landed on your sensory receptors. I'm sure there's some obscure influences that I don't know about, but mostly, it's on a developmental autopilot. It's just going to happen. OK, so keep in mind-- this is really important-- that things that happen well after birth aren't necessarily learned. It might be just maturation that's continuing, right? OK, just as being 5 feet tall versus a foot and 1/2 tall isn't really learned. It's just a maturation program that unfolds. OK, so we can ask these three questions both behaviorally and neurally. And ultimately, we want them to tell the same story. When I said there's some chaos in this field right now, I mean that basically, they're not converging very well yet, but that's fun-- sort of. [LAUGHS] Sometimes it's aggravating, but mostly, it's fun. OK, so let's start with some behavioral data. So let's consider the initial state of face perception in newborns. OK, so we can ask, what kind of perceptual, face perceptual abilities are present in newborns? And we can ask whether they can detect a face-- that is, discriminate a face from a non-face, whether it's a body, or an object, or something else. We can ask about preferred attention to faces. Do they, do newborns want to look at faces more than non-faces? We can ask about the ability to recognize faces, to discriminate one face from another, OK? And we can ask about the ability to recognize faces across image changes. So we spent a lot of time in the first few lectures talking about the central problem of invariance in vision-- about, how do you know that this image that you're looking at here is the same person as that image, even though those are very different images? And actually, this image on your retina right now is more different than this image on your retina than if we got one of you and came up-- had you come up here and had you look forward.
So the image changes that result from a change in orientation are greater than the image changes that result from a change in identity. So it's a big computational challenge. When is that solved? And then, there are these so-called signatures of face perception that we've talked about a little bit-- for example, the inversion effect. Recall the inversion effect is larger in magnitude for faces than non-faces. So we can ask when those things develop. OK, so let's start with face detection and preferred attention to faces. Well, so classic studies from the early '90s, and actually, some of them going back to the '70s, did the following very low-tech thing-- a low-tech drawing of a low-tech experiment. You take a newborn infant. In this case, they're less than an hour old, right? You've got to set up in maternity wards. You want the data, that's what you do. Of course, you have to ask the parents and all of that. But then, you take this infant and you sit them on a person's lap with a video camera overhead, and you move different objects over the infant's head, OK? And the different objects that were moved, in this case, were patterns that were drawn on this paddle that's moved over the infant's head. And the pattern could be a schematic face like that, a scrambled schematic face like that, and a blank with nothing in it. And what you measure is, how far does the infant turn their head or their eyes following that paddle as you move it over them. OK, nice low-tech measure. And what you find is they turn their heads and their eyes farther when it's an actual schematic face than when it's a scrambled schematic face or a blank, within an hour of birth. Then you can still say, well, their parents probably smiled at them quickly before they were snatched away to do the experiment, so they had some face experience, but boy, not a whole lot. And this is a very abstract face here. 
So this has long been taken as one of the key bits of evidence that something seems likely to be innate about faces, OK? But now, what needs to be innate for that? And it's a bizarre thing, where this happens in the first two months of life and goes away. And there's a lot of consideration of what that means. Maybe the first two months is enough to bootstrap learning in the way I was just talking about-- bootstrapping, getting attention to the right places. But there's also a huge literature on this phenomenon where there's a big debate about exactly how simple those cues need to be. So people have done many variations of this, and one dominant story is that all you need is a pattern that has more stuff on the top than on the bottom, OK? And that's enough that infants will follow this more than that. And the idea is that in the visual environment of an infant, that's sufficient to pick out faces. So there's been pushback against this view as well. It's probably a little more complicated than that. We won't go down the rabbit hole of all those details, but whatever it is, it's pretty simple. So this is another example of what I was mentioning before with the Ullman case. This is a case where it may be possible to build in something pretty basic-- a pretty basic template-- and then let learning take it from there. Make sense? If the infants are looking at faces, then they can use some kind of synaptic plasticity, whatever, and learn from their experience to discriminate one face from another. OK, so these things are present within a day or two. What about discrimination of individual identity? First problem, how are we going to be able to tell what a newborn can see? And so I didn't want you guys to be too thrown by this method in the last assignment, so I told you where there's a version of the explanation I'm just going to give. So if you already watched that, my apologies. You can read your email for a minute.
So the classic experiment-- a classic experiment-- that enabled us to really ask what a newborn, non-verbal infant sees in the world, was done by Kellman and Spelke. Liz Spelke up at Harvard was at the forefront of getting this method to really tell us a great deal about what infants see and understand about the world. And this method that I'm about to show to you has been the basis of what's sometimes called "The Infancy Revolution," which is basically the insight that, actually, infants know a lot. Their perceptual systems are really sophisticated. They know about physics. They know all kinds of social stuff. Within a few months of life, they know a lot. And that's been a radical change in our understanding of development based on just behavioral work. So here's the method. OK, so what Spelke did-- I always forget to bring the demo. Hang on one moment. We don't need much. OK, so she showed infants stuff like this, OK? The two hands are not there. You just arrange to see this, OK? So even if you hadn't seen me, imagine if you hadn't seen me pick up the phone and the pen, and you didn't already know what they were, and you're seeing this, OK? That's what they see, OK? So now, the question is, when infants see that, do they think that that's this-- thing behind a rectangle-- or do they think it's two separate bits moving behind the rectangle? It could be two separate bits moving together, right? Everybody get the question? OK, so how would we know what the infants thought was back there? OK, well, we use what's known as habituation of looking time. Again, you sit the infant on a parent's lap, and you show them stuff, and you just measure how long they look. It's magnificently low-tech but really profound. OK, so what we're going to show here is how long the infant looks on each trial as a function of how many times you do it. So you show the infant this the first time, and they look for 40 seconds. That's a long time.
You show them again, they look for 35 seconds, and so forth. And by the fifth or sixth time, the infant is bored. Like been there, done that, bored, right? OK, now they're bored. Now we have a moment to say, OK, what did you think it was? And so now, what you can ask is, what do they think-- you then show them either this or this, and you ask them which of those they're bored to, right? So the idea is if, when looking at this, they thought there was a continuous line behind the occluder, then they should be more bored by this. But if they thought that was two separate pieces, then they should be more bored by that. Does that make sense? Because it's the same thing they're already bored with. I mean, it's not exactly the same. The occluder isn't there, right? But it's more similar. OK, so here's the data. Here's what they find. So what does that mean? What do the infants see when you show them this? It's right there in the data. Look at the first test trial here. This is the first test trial, when you show the complete line or the broken line. What do they see here? Yeah, they saw the complete one. That's why, when you present the complete one again, they're still bored-- already saw that. Make sense? So isn't that awesome? It's so low-tech and so simple, but this is how you can ask an infant, what do you see? Yeah? AUDIENCE: Why does it switch positions in the second trial? NANCY KANWISHER: You know, frankly, I never understand why infant and development people do a second and third trial. Seems to me by this point, the jig is up. I think it's just because it's hard to get enough infants, and you need more data, and so they do a second and third trial. But to me, that's the diagnostic one. And that's probably not a significant switch, but whatever's going on out there is obviously much less important than this. Heather, do you have a better answer than that? Why do they do those other trials? They always do, and it just seems like, what?
[LAUGHS] AUDIENCE: I don't know. NANCY KANWISHER: Yeah, I don't either. AUDIENCE: [INAUDIBLE]? NANCY KANWISHER: Oh, you do it every which way, but you do it pretty fast. They get bored, and you don't want to wait half an hour and come back, right? I mean, you could do that. Then that would be a memory question, right? Yeah, Jimmy. AUDIENCE: Just curious, is this conserved between [INAUDIBLE] do they all see complete lines, where [INAUDIBLE]? NANCY KANWISHER: It's pretty robust. Well, OK, so first of all, these methods are awesome, that you can learn these deep things about perception in infants. But these data are noisy as hell. There's no error bars on this plot, but I bet if there were, you'd have to run a lot of infants to get to the point where you reach significance. Because a lot of times, the infants will just throw up, or they'll just do what-- they do all kinds of random things. So the data are extremely noisy, and it's very hard to get enough data with an infant to say anything about the difference between one infant and another. By the way, there's a very exciting development going on in this department right now, where Kim Scott, who's a former grad student of this department, has figured out how to do looking time experiments like this online, OK? And that's hugely important, because the number one bottleneck in this kind of developmental research has been finding enough infants, or getting enough data per infant. And so I think that she's going to just crack it wide open. Talia? AUDIENCE: I guess I'm a little bit confused how we know what the infant really saw based on how long it looked at something. Could it be that maybe they look at like-- maybe they look at the broken sticks longer, because it's like what they thought was behind it, so they're now excited that they get to see what's-- NANCY KANWISHER: Maybe, but then, why would you get this? So we know from this that the more familiar it looks, the less time they look. 
So you would have to come up with-- yeah, there's wiggle room in these data, but you'd have to come-- your account would have to say, why would they look less, and less, and less long when we repeat the exact same thing, right? And you could tell a story like, OK, it's a little bit different, because the occluder isn't there. But it's a little bit the same, and that's kind of edgy and fun. Or you could tell another story, but I think the bulk of the developmental literature shows that when you do this kind of stuff, it's a change that makes infants look more. I'm going to go on unless there are questions of clarification, just because there's so much other cool stuff. OK, so how can we use this to study face recognition? That was just a sidebar on the method. OK, so there's a lab in Italy where they have an infant psychology lab next to a maternity ward, and they've been doing all these awesome studies. OK, and they test 1-3-day-old infants. And so one of the things they did is show infants, just like the paradigm I just showed you. They show the infant the same face again, and again, and again. That's the habituation phase. And then, this is a slightly different one. You give them a choice of whether they-- actually, you don't give them a choice. I take it back. Yeah, you show this condition or that condition, and you see how long they look at each across different infants. And so this is the same person from a different viewpoint. Actually, pretty subtle, as we discussed with the Jenkins study way back. And that's a different person from that viewpoint. And what they found is that-- it's hard to see, but a very low P level means that there's a significant difference in how much the infants looked at those two. So that's pretty amazing. 1-3-day-old infants can apparently recognize the identity of a face, a novel individual they don't already know, with similar-looking faces, without hair, and across view changes. Wow, right? So that's pretty impressive. 
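The looking-time logic used in these habituation studies is often formalized with an explicit habituation criterion. Here is a rough sketch; the window size and ratio below are illustrative conventions, not the numbers from the studies discussed in the lecture, and the exact criterion varies by lab.

```python
def habituation_trial(looking_times, window=3, ratio=0.5):
    """Return the index of the trial at which the infant counts as
    habituated: the first trial where mean looking time over the last
    `window` trials falls below `ratio` times the mean of the first
    `window` trials. Returns None if the criterion is never met."""
    baseline = sum(looking_times[:window]) / window
    for end in range(window, len(looking_times) + 1):
        recent = looking_times[end - window:end]
        if sum(recent) / window < ratio * baseline:
            return end - 1          # last trial of the qualifying window
    return None
```

With looking times like the 40 and 35 seconds mentioned in the lecture, say [40, 35, 30, 20, 12, 8], this criterion is met on the sixth trial (index 5). Test displays (e.g., complete versus broken line, or familiar versus novel face) are then compared against that bored baseline: longer looking at one display is read as "that one looks new."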
OK, and so then, they've done all kinds of other variants. If you have them rotate all the way from front profile, there's no longer a significant difference. Infants can't do that. And then they do all kinds of other variants. If you show them the same individual and then habituate to that, they can tell the difference between viewpoint. That's the same, and that's different, even though it's the same identity. So you can use this to test what they think is same or different, which is a deep question to ask. If you're interested in representations and cognition, the question of what an infant, or an animal, or a bunch of neurons thinks is the same or different is the essence of characterizing what it represents. Yeah, Quiley? AUDIENCE: [INAUDIBLE] the rotated face [INAUDIBLE]?? NANCY KANWISHER: Down here? Yeah. Yeah, they do. So here, basically, it's either identical, or it's different in some respect. So given a choice, when it's rotated anyway, the familiar one is more similar. But down here, this one is more similar in viewpoint. Yeah? AUDIENCE: And these are not like the [INAUDIBLE] in such [INAUDIBLE] the student, the [INAUDIBLE] NANCY KANWISHER: Sorry, say it again? They're not like-- AUDIENCE: The children have seen faces before this. NANCY KANWISHER: Well, as little as possible. As I say, I mean, they've seen some, but not very many, and they haven't seen these faces. So when you're trying to get those innateness questions, you go as close to birth as you can, but you can't usually go into the very moment of birth itself, right? And so there's usually some experience, and it's a challenge, but this is pretty early. Yeah? AUDIENCE: So couldn't that just meant that the face perception network is just like-- it develops really quickly, right after [INAUDIBLE].. NANCY KANWISHER: It could, it could. Based on these data alone, it could. That's considered kind of unlikely, but I agree that that's consistent with these data. 
In the first two days of life, the whole thing wires itself up. That'd be pretty unusual. It's not really consistent with those samples of neurons that people have looked at elsewhere in the brain, but maybe there's a special little circuit that just wires itself up really fast. So not likely, but possible, OK? All right, now, you might say, well, maybe there's some kind of simple visual features that are short of an actual face representation here. This doesn't show us that this is something about faces per se, even though it can generalize across viewpoints. So it's not just pixel intensity, right? So what is the classic way we asked this question in face perception, where we ask, is this really something about faces, or is it something about the low-level perceptual properties of the face? AUDIENCE: Turn it upside down? NANCY KANWISHER: Yeah, turn it upside down. God's gift to the face researcher, right? So-- oh, I guess that was not on this slide. OK, right? OK, so now, in the next experiment, they present whole faces, or just the internal features without hair, or just the external features without hair. So the infants can do that at the top. They know those two are different. They can do this here, and they can do that there. OK, not too shocking yet. Just tells you any of those cues can support performance. But now, we can ask, is that just pattern-matching? No, it's not. Because when you turn them upside down, you find that only-- let's see, it's only performance in this case that suffers when you turn them upside-down, not this case or that case. OK, so that shows that there are a variety of cues here that infants could be using, but when you show them just the internal features-- the actual face proper-- that part, the ability to do this discrimination, goes away when you turn it upside down. So that part, at least, seems to be at least somewhat face-specific, or has the signature of face-specific processing. Make sense?
OK, I mean, as a pattern, it'd be just as easy to recognize this upside-down and distinguish it from that upside-down, if it was just the pixels you were registering. But if you were doing face processing that's something like adult face processing, you'd expect that inversion effect. OK, all right, so where are we? And I should just say, even this is actively debated. In fact, the author of this study considers this not to be evidence that that processing is face-specific. I think she's got some of the strongest evidence ever, but she's got some counterargument about how in the inverted faces, they don't look as long in the habituation phase. And so it's like I'm telling you these cool methods, but boy, every one of them can be fought over. OK, so where are we? We've just shown that discrimination of individual identity, recognition across viewpoints, and inversion effects are all present within the first few days of life. OK, so newborns have very impressive face perception abilities, and that's particularly surprising given that their acuity is terrible, right? The vision is really blurry for young infants, so it's amazing that they can do these things. But now, there's room for quibbling about whether this is really a face-specific system. So the inversion effect is suggestive, but they haven't totally nailed the case about what's being tapped into here. Is it really face perception per se-- something specific to face perception-- or is it some more generic kind of object perception? OK, and further, we want to know what happens after that. OK, so you don't need to memorize this table. I'm just going to make a few simple points with it. There are lots and lots of studies where people have tested behaviorally all kinds of different aspects of face perception, and the basic story is that by age four-- you see, the little smiley face means that this adult-like property of the face perception system is present by age four.
So all of those signatures of face perception that are present in adults are present by age four, OK? And in fact, much of the action is much before that. You can see that all of these things are present at the earliest age they've ever been tested. The little square means nobody's tested it at that age. So all this stuff is developing very fast, right? OK, one particularly important thing here that you read about a little bit, but that I want to take a moment to make sure you understand because it's so interesting and cool, is the phenomenon of perceptual narrowing, OK? And this happens in face perception, and it happens in phoneme perception in speech. And I'm going to do a demo here. So I'm going to show you a monkey face briefly. OK, it's going to come on in a second, and you just look at it. Here we go. Boom, there it is, OK? OK, in a moment, I'm going to show you another monkey face, and you're going to shout out same if you think it's the same, and different if you think it's different, and, huh, if you don't know. How many people don't know? Yeah, it's different, right? OK, well, OK, maybe that was too hard. Let's try it with a human, OK? Remember how hard that was? Now let's try it with a human face. I'm going to show you a human face. Everybody ready? Here we go. OK? OK, and I'm going to show you another human, and you're going to say, is it same or different? Here we go. Duh! Easy, right? OK, so here's the amazing thing. You were better at that monkey face task when you were six months old. You could do that monkey face task when you were six months old. One of the things that you have learned from experience is that you don't need that information, and you threw away your ability to do that, but you had it when you were six months old. Isn't that awesome and interesting? That's called perceptual narrowing. So the experiments, in particular, do the following.
You use that preferential looking paradigm-- the preferential looking to the novel face in infants-- as your measure of discrimination ability. What can they discriminate? And so you show two human faces-- two different individuals, like this. And so now, what you see is that at six months, nine months, and adulthood, people preferentially look to the novel face more than the familiar face, OK? That's just what we've just done. People like to look at the new thing, not the old thing, OK? However, if we do six months, nine months-- oh, yeah, that's what we just said. OK, they can do that. So now, if you try this on monkey faces, you find that adults are like us. We're barely able to tell the familiar from the novel. We're not so good at monkey face discrimination. Nine-month-olds are the same. But at six months, infants can discriminate the monkey faces, and you could, too, if somebody had asked you. So there's a very similar phenomenon with phonemes. Those of you who are not native speakers of English may be aware of some phonemes in English, if you learned it relatively late, that are hard for you to discriminate. There are sounds in Hindi-- I forget, it's like a "da" and a "ta," that sound identical to me, but that are just like completely obviously different to native Hindi speakers. And all languages have this. So of the kinds of phonemes that are discriminated in any language in the world, you could discriminate all of those when you were six months old. And one of the things you do when you learn a language is just throw together in the same bag things that are actually different that other people can discriminate if your language doesn't discriminate it, OK? And so you get that with phonemes, and you get it with faces. OK, everybody get what perceptual narrowing is? OK. OK, you also get this-- I mentioned this way back-- with perceiving faces of other races, right?
Not just faces of other species, but if you grow up in an environment where you're only exposed to races A, B, and C, and you later have to discriminate faces of races D, E, and F, you're not so good at it, right? All the same deal. OK, all right. So how would we know whether this change between six months and older is just maturation-- it's just some kind of developmental program that's going on autopilot independent of what you see, or whether it's learned from experience? Josh? AUDIENCE: You control for experience. NANCY KANWISHER: You control for experience, absolutely, like the Sugita paper. OK, so we'll get to that in a second. So we started with these key questions-- what is the initial state at birth, and we showed impressive perceptual abilities within a few days, although people dispute whether those abilities reflect a face-specific system. And we don't know much about what that system is, other than it works surprisingly well given the low acuity. And we showed how it changes after that: there's perceptual narrowing between six and 12 months, but a great deal is not known about what happens then. And so now, we're onto this question of how are we going to un-confound what changes after birth, whether it's maturation or experience. And I'm not going to have time to get to these other awesome methods. We're going to focus on controlled rearing, which is what you read about in the Sugita paper. OK, so just to remind you of the basics, most of you seemed to get the paper just fine. The big idea was again, using this preferential looking method, what Sugita et al. showed is that when they reared monkeys for six, 12, or 24 months without ever letting them see a face, and then tested them on the very first session that they ever saw faces with preferential looking, they found that on the very first exposure to faces, the monkeys looked more at faces compared to novel objects, right?
They showed that face preference, sort of akin to infants looking at the paddle, and they discriminated between faces-- very similar faces-- with adult-like accuracy. And this part, I don't know if you found it surprising, but when this paper came out, I was like, whoa, that is crazy, right? Because as I said, the whole space of sensible hypotheses is, OK, maybe a lot of stuff is innate, but you're still going to need experience to tone it up, for God's sake, right? Who would think the entire adult ability could exist without any experience at all? So I don't know if you had that reaction, but I think that's a sensible reaction. It's a pretty astonishing finding in that paper. Unfortunately, there's one author on that paper. It was done once, and it's such a labor-intensive study that probably nobody will ever try to replicate it. So in the back of many people's minds is like, really? Can that really be true, or is there something funny here? So I hope somebody replicates it someday, but it hasn't been done yet. OK, the other thing that you guys presumably noticed is there was perceptual narrowing in that study. There were many interesting things in there. It's actually quite a rich paper. But after the initial testing session, no matter how long the deprivation, the monkeys were then housed in either an environment with just humans or just monkeys. And so whether that was 6, 12, or 24 months after birth of face deprivation, they then lost their ability, at that point, to discriminate the unexperienced faces, OK? So they went through perceptual narrowing. Does that all make sense to you guys? You got that? Good. OK, all right. So anyway, that suggests that an awful lot of the face perception system is present without any exposure to faces, and that's pretty astonishing. What experience seems to do there is not create abilities, but eliminate them for the species that you don't see. OK, so first reaction is, really?
Second reaction, is there any way to account for this in terms of some non-face-specific system? I think you can, but it takes some work, and the counter-explanations are really difficult. You can say, well, maybe this is all being carried by some more generic object system. They didn't test inverted faces, unfortunately, but if it was carried by a generic object system, why would you find the perceptual narrowing? Why would they have lost their ability for the unexperienced species? So I think that story is hard to tell. And, of course, the other question I'm sure you guys are wondering is, what is going on in those monkeys' brains? Yeah, OK, so let's get to that. Let's talk about what we know about development of this system by looking at brains. And first of all, there's been lots of work on this in older kids, age 5 and up, going back over a decade. And it's now clear that all of that basic machinery I showed you is present by age five, in most kids age five. It's continuing to change after that, but you can detect most of that stuff by age five, or six, or seven-- something like that. OK, trouble is, that's cool, but age five is late with respect to experience and with respect to all those behavioral abilities that I showed you. So we need to go earlier. And so a couple of years ago, Rebecca Saxe-- who's straight up there, two floors up-- started scanning infants, OK? And this is-- as Heather can tell you-- almost impossible. It is right on the edge. It took Rebecca and her lab many years of work-- over five years-- just to get the system going. There were all kinds of technical advances, like making scanning coils that were optimized for infants and comfortable for infants. Rebecca herself went to great lengths, including producing some of her own subjects. That's her son Arthur there and her two-- her grad student and postdoc who were working with her.
But all of this massive effort was worth it, because what they found was, first, for comparison, this is adults with a contrast of faces versus scenes, OK? So this is basically the PPA in blue responding more to scenes, and the FFA in here and some other face-selective bits responding more to faces in adults. What do you see in six-month-old infants? It's astonishingly similar, right? You can really see a very similar layout of the functional organization of the brain already by six months. So that's a huge advance. That pushes way back the timeline by which these things had developed. Previously, everybody was talking about, oh, what changes after age five? Age five, come on? OK, it's mostly there by six months. OK, now, importantly, these systems are not adult-like. Their selectivities are very different. Those regions are less selective in infants than they are in adults. But the spatial layout is there already by six months, and that importantly constrains whatever our model of development is-- it pushes it way back. OK, so now, the next questions are, what is it about that region-- or those particular regions-- that makes them become face-specific already by six months? How does the face system know to take up residence in that systematic location in the brain, and what is the role of experience in their construction? And how could we ever answer this? One way to answer that is to use an animal model, OK? So there's been-- yes. AUDIENCE: OK, yeah, similar question about-- NANCY KANWISHER: I'm sorry, I didn't hear. About what? AUDIENCE: General physical layout-- like why does your stomach always come in the same place, and would it maybe be the same mechanism that guides development of any organs and the layout of the body, [INAUDIBLE]? NANCY KANWISHER: Yes. Now, I don't know much about how hearts, and kidneys, and livers develop, but my understanding is that's pretty much wired in.
There's some chunks of DNA that tell you how to build a kidney and where to put it in your body, right? And so that is one of the hypotheses here. It's a tempting hypothesis, right? There's all that structure. It's a very tempting hypothesis, but that doesn't mean it's necessarily right. Yeah, it absolutely is. It's a hypothesis we should consider and take seriously, yeah. OK, so but we want data. We want to find out. OK, so animal models. So starting a few years ago, Marge Livingstone over at Harvard Med School over there-- a couple of miles over there-- started doing these also really amazingly heroic studies where she was scanning infant monkeys. OK, now, this is really hard to read, so let me tell you what we got here. We have the cortex. This is all the same animal at different time points, and each of these things is the cortex unfolded mathematically and flattened so you can see the whole thing. I don't expect you to know what's where. I can barely tell myself. But if you look at it, what you see is at 81 days of age, there's just blue stuff. There's no orange stuff. The orange stuff is the face-selective response. In fact, if you look down, you start to see, oh, that looks-- yeah, yeah, OK, that looks pretty systematic. It starts replicating after that. And so the claim is you don't see face selectivity until about 170 days after birth in monkeys. OK, that's about here. Here's another monkey for comparison. If you stare at it, you'll see, OK, there's these systematic bits-- boom, boom, boom, boom-- and maybe a little hint at 170, but-- there's some garbage up there, but nothing systematic before that. Yeah? AUDIENCE: So there's no control of the environment? This is like monkeys-- NANCY KANWISHER: Normal monkeys who have exposure to human faces and monkey faces hanging out in the lab, yeah. We haven't gotten to control rearing yet. It's coming. OK, first thing is just, when does it develop in monkeys? OK, all right. So are you surprised by this? 
It's not there here, and it is there there. You should be surprised. Why are you surprised? This is what you guys predicted. Quiley? AUDIENCE: I guess I'm surprised because they were able to discriminate. NANCY KANWISHER: Yeah, what is up with that? Absolutely! The Sugita paper really made it look like that system was innate, right? No experience-- boom! They're fine. It was just behavior, but it was a good behavioral study. So why the hell isn't it here? Everybody with the program on how surprising that is? OK, so a bunch of things. First of all-- and it gets stable after that, and replicable. Well, the first thing is one's a behavioral measure, and one's a neural measure. Maybe those fabulous behavioral measures weren't actually being driven by some face-specific system. Wouldn't that be sad, right? I mean, they did lots of controls. It was a nice idea. I thought they did as well as they could, but who knows? Maybe those monkeys could do that task with some other system and they didn't need their face system for it. That's one possibility, right? Then, you could have the face system not develop till later, but the monkeys could do it before. But the other thing is, notice that Sugita didn't test their monkeys until, with the youngest ones, six months of age. So maybe it just got wired up just before-- right there-- they were tested, OK? So it seemed contradictory at first, but it's not completely, literally contradictory, yeah? OK, all right, so now, the fact that this stuff doesn't show up until here, does that mean that this face system requires experience to develop? You know the answer, because whenever I ask that question, the answer is always no. Why does that not imply that you need experience with faces to wire up? It's tempting. You look at it, and it's like, OK, you had to look at faces all this time before you wired it up. Boom, there it is-- very tempting. But-- is it Jessica, no? Sorry, what's your name? Yeah. AUDIENCE: Bele. NANCY KANWISHER: Bele.
Oh, sorry, you told me that like six times. AUDIENCE: It could be merely due to maturation-- physical maturation. NANCY KANWISHER: Yeah, it could be just maturation. I keep making the same point, because it's important, right? Just because it shows up later doesn't mean it's learned, right? Maybe it's like puberty, or height, or something like that that's on some developmental program that's just going to unfold independent of what you see, OK? So how would we find out? We would do controlled rearing. And that's exactly what these guys did, OK? So in another paper that just came out a couple of years ago, they raised baby monkeys without ever letting them see a face. Much like Sugita did, they used welder's masks every time they were in the lab, so the monkeys never got to see faces. And like Sugita, they went to lengths to treat the monkeys nicely. They heard the calls of their conspecifics, they got lots of attention, they had rich visual experience. They just didn't see faces. So it sounds kind of tragic and horrible at first, but it's actually not that bad. They had social contact and visual experience. They just never saw faces-- both this study and the Sugita study. All right, OK, so they could hear and smell other monkeys. So the face-deprived monkeys saw no faces at all until 90 days old. And at that point, they went straight into the scanner, OK? And the first time they saw faces was inside an MRI machine getting scanned, OK? So what do you think? Are the face-deprived monkeys going to show face patches? So there's no way to tell, because we have all these contradictory bits of evidence here, right? From Sugita, you might think yes. Hard to tell. So let's just look at the data. OK. So here first is a normally reared monkey, 260 days old, just for comparison. And those are the face patches in yellow in two different monkeys here, B4 and B5, left and right hemispheres. OK, so those yellow bits are the face patches. OK, normal 260-day-old monkey.
Now we're going to see a face-deprived monkey, 260 days old. This monkey was face-deprived that entire time up until scanning. No face patches. The plot thickens-- no face patches at all. So these guys published this paper in a very high-profile journal and said-- this is the title of the paper-- "Seeing faces is necessary for face-domain formation," OK? Face domain just means face-selective patch. OK, everybody see? You deprive them of face experience, you don't see it. OK, that's pretty interesting, and it strongly suggests that the face system is not innate but depends on face experience, doesn't it? Rare case where the answer is, yes, it does. And it feels like it contradicts the Sugita finding, right? But not exactly. You could still wiggle out of it, right? You could say, OK, the thing that Sugita was studying doesn't use those patches, so it's not flat out contradictory. Sugita was measuring behavior; these guys are looking at brains. So it's kind of unsatisfying, but it's, in principle, possible. Everybody, me included, has been nudging these guys: run the Sugita behavioral experiment on your monkeys, please! And I gather that's getting going, but I haven't seen any of the data yet. So we don't know how that's going to resolve. OK, so let's take stock. What is the initial state? We showed with behavior that there is attention to faces present in newborn humans, and face specificity seems likely, but it's not totally nailed, whereas functional MRI says there's no evidence for face specificity at birth-- at least in monkeys, right? Those are at odds. Yeah, OK, so how are we going to reconcile this with all the behavioral results I showed you, that there seems to be a lot of face abilities present in newborns? Well, one possibility is that face specificity exists behaviorally, but MRI fails-- oh, sorry, face specificity exists in the brain, but MRI fails to detect it. There's a whole rigmarole about whether functional MRI works well in infants.
It's barely possible, as I mentioned. It's also hard with infant monkeys. Their blood flow regulation is different. They're squirming and wiggling. There are a million issues with scanning babies, whether human or monkey. And so you could always say, well, it was there, and just the MRI data are just kind of crappy, or blood flow regulation to the brain develops later-- an argument many people have made. However, a paper was published last week that argues against that hypothesis. The same group just showed that the somatosensory touch system is totally in place by 11 days in baby monkeys. So that suggests that you can get really nice functional MRI data at 11 days of age in baby monkeys, and it makes it less likely that this is some kind of spurious failure to detect something that was actually there. I'm not going to test you on every little detail here. I want you to think about the logic of how you can ask these questions. OK, the other possibility is that the face abilities that we showed behaviorally are using some more generic object recognition system, not using this face-selective system in the brain. OK, so how does it change over time? Well, we showed that behaviorally-- in humans, at least-- all the hallmarks of face-specific processing are present by age four, and we get this perceptual narrowing between six and 12 months. But then we showed that with functional MRI-- at least in monkeys-- there's no evidence for face specificity before 200 days, right? AUDIENCE: [INAUDIBLE]? NANCY KANWISHER: I gather they're working on it, but I haven't seen any of the data yet, yeah. OK, so that lack of face specificity is consistent with the idea that all that human early face recognition behavior is driven by a different system-- because they don't have their face system yet, presumably. But it's also consistent with this idea that it's just failing to be detected. Even though I said that's probably not true, given you can detect other stuff, it might be true here.
The ability to see things with MRI depends on where in the brain you're looking. OK, so what about these causal roles of structured experience and biological maturation? OK, so we argued that early face experience isn't crucial for the face recognition system. That was the Sugita paper you read. But now, functional MRI is showing that face experience is necessary for the development of face patches, at least in monkeys. And so a very sensible reaction is, what, what, what? How are we going to make sense of this? This is a big conundrum. It's going to get worse on Monday, where there's yet more contradictory data. And further, if that face system isn't innate, then what, if anything, is innate about face perception, right? So maybe what all these data are telling us is, not that much. Maybe just a biased look at faces, or some very simple image template that's sufficient in the environment of infants to get them to look at faces. So there's a lot of studies I didn't have time to work into this lecture, where people stick cameras on the foreheads of newborns, and they collect, what is the typical visual experience of a newborn? And then, you can take that experience and ask-- you can write machine learning code to say, what would you have to build in to reliably pick out the faces in typical infant input? And it's probably not that complicated, because infants don't see that many different kinds of things, right? OK. We showed early visual discrimination abilities of faces in newborn infants. But again, it's not clear that's part of the face-specific system. And we showed that the face patches-- at least in monkeys-- seem to require experience, OK? I'm just recapping here. But now, there's this big question of, how do those face patches know where to develop in the brain? Like here they are in humans, these little purple blobs. The occipital face area-- and I've got two different fusiform face areas, because various people think there are two.
I'm not sure. I don't really care; doesn't matter. Anyway, how do they know to land right there? OK, we keep bringing up this question and dancing around it, but so far, I've given no basis for thinking about this. One possibility is that infants-- monkeys and humans-- are born with some earlier kind of selectivity of that patch of brain. It's not a whole face template. It's not a whole face system. Maybe it's a bias for curvy things, right? And then, somehow, that makes the faces land there, and the system wires itself up. It's not exactly clear how that would go. But that's one kind of story. Another story is based on this fact I told you at the beginning of the lecture, which is most of the long-range connectivity of the brain is present at birth. And so maybe the particular connections of that patch of brain are already there at birth, and maybe that set of connections is sufficient to somehow gate the input to that system and arrange for it to end up being face-specific, OK? So this is a very active area of investigation, and there's other very active, ongoing kinds of investigation where people are trying to understand how this development might work. One way people are looking at this-- I mentioned this briefly, but I think it's super exciting-- is people are asking with deep nets and other kinds of modeling, what do you have to build into a system to get it to produce face recognition abilities? If you're trying to make a deep net, you're trying to make it really good at face recognition, do you need to give it a template of faces? Do you need to give it only experience with faces? What do you need to build into it to get it to be really good, right? And so that's a very active area of investigation. And you can actually-- with some ongoing work with Jim DiCarlo's lab, we're asking, OK, deep nets don't have topography. Being next door in a deep net doesn't mean anything-- what's next door versus far apart.
Location doesn't mean anything in a deep net, but you can make it mean something. And then you can ask when, and whether, and how, and why you get face patches in a deep net and what computational role they serve. Well, totally weirdly, I'm finishing early, but I'm not going to finish. I'll take questions, and then I'll maybe add a little bit more. I think that was all I had here, right. Any questions about all this? If it feels a little bit chaotic-- I've sort of said x and not x, and x, although they're not exactly x and not x. They're just-- yeah, Sirdul. AUDIENCE: So the fMRI tends to [INAUDIBLE] activity in boxes, right? [INAUDIBLE] you said contain millions of neurons. So is it possible that the neurons that are specific to faces are distributed at an early age throughout the brain, and somehow the function for them-- NANCY KANWISHER: They get spatially clustered. AUDIENCE: Yeah, but the neurons themselves already exist at birth? NANCY KANWISHER: Absolutely. That's a great hypothesis. It's absolutely possible. Everybody get the idea? You have all those face neurons at birth, and maybe they're face-specific at birth, but they're spatially spread out. And then they have to find each other and hang out together next to each other before you ever get an MRI signal. It's totally possible logically. It seems to be quite unlikely actually, because it would be very hard for all those neurons, with their necessary connections-- which is, after all, how they become face-specific, is what their inputs are and what they're connected to-- it'd be very hard for them to migrate spatially across the brain maintaining their connections. Yes, you're going to push back? Go for it. AUDIENCE: Well, I think [INAUDIBLE] But since you said [INAUDIBLE], they care about what their neighbors are doing. So maybe it's just like a neighboring neuron's properties, but the [INAUDIBLE] in this chain moves it back until that brief [INAUDIBLE].
But that progression is the most efficient way to pop up. NANCY KANWISHER: It's totally possible, totally possible, absolutely. Yep, other questions? And this is wide open. Nobody knows, right? Let me just see what else I have time for briefly. So funny, I took out all these slides because I just thought I'm not going to run out of time, and go over, and drive everyone crazy. I moved all this stuff to the other lecture. Maybe I will just-- All right, hang on, let me just glance at the lineup for Wednesday. Yeah? AUDIENCE: Is there-- the perceptual narrowing is really surprising and fascinating. Does anybody have a model for how that processing might work or what it might be for? I mean, it feels like a lot of it-- assumptions, or the common sense assumptions when we look at fMRI, and when we look at neural signals is that they all mean positive things. But maybe a lot of those signals, a lot of that activity, might be inhibitory-- might be the opposite. NANCY KANWISHER: Totally, yeah. But how would that explain perceptual narrowing? AUDIENCE: Well, if what you're learning is what to ignore, then maybe it takes a lot of effort to ignore things. And not really sure. I'm not sure exactly, yeah. NANCY KANWISHER: No, it's a good point. Like I mentioned at the beginning, one of the limitations of functional MRI is we don't know what the actual neurophysiological basis of the BOLD signal is. It could be anything that increases your metabolic costs, and hence changes blood flow. But one of the things that increases metabolic costs is inhibiting other neurons. And so way back in the early days of, actually, PET imaging, before functional MRI came along, there was an early proto version of a face-specific paper. It didn't nail everything, but it was not bad for 1981, when I think it was published.
And the person who did that paper, Justine Sergent, argued that it's very, very ambiguous what it means to find a hotspot in the brain where the activity-- the metabolic activity-- is higher, say, when you look at faces than objects. And her point was, that could be the part of the brain that really sucks at face recognition. That's the part that's going, ah, I can't deal with this thing! What is this thing, right! That's really bad at it, and the neurons are firing a lot. It's sort of facetious, but sort of not. And it's probably not the right account, but it is an important reminder that we actually don't know what actual kind of neural activity is driving those things and whether it's excitatory or inhibitory, absolutely. Hang on one second. I feel like there was another part of what you said that I was going to engage on. AUDIENCE: No, it feels like somehow, possibly, connected to the perception [INAUDIBLE]. NANCY KANWISHER: Yeah. Yeah, possibly. We'd have to work it out. AUDIENCE: In one of the lectures [INAUDIBLE], NANCY KANWISHER: Yeah. AUDIENCE: And then, [INAUDIBLE] NANCY KANWISHER: Yes. AUDIENCE: [INAUDIBLE] NANCY KANWISHER: Yes. AUDIENCE: Then, I'm a bit confused, because, like, you said before, almost like all the wiring is [INAUDIBLE]. NANCY KANWISHER: OK, long-range wiring. AUDIENCE: Oh. NANCY KANWISHER: OK? Which is very different than all the circuits that live in each little patch of cortex. Remember, I showed you this big change in the complexity of neurons and the number of connections. Oops, looks like we've lost it now. So they're changing a lot within each patch of cortex, right? So those local circuits that are doing computations are surely changing a lot over the first couple of years. It's just the long-range connections between that patch and some remote region-- where it gets its inputs and where it sends its outputs to. But hang on a second. You asked something-- there's also very interesting stuff about the other race effect.
I did mention that a month ago or so, didn't I? Which is another version of this perceptual narrowing. And in fact, a friend of mine who's a great face researcher has not yet published this paper, but she found the following. Totally, that's right-- you mentioned the adoption studies. So what she has done is ask-- did I tell you guys about this already? I feel like I did, but maybe not. Anyway, so what you find is that people are-- they all look alike. Whoever they are, if you've seen fewer of them than whoever we are, you are less good at discriminating them. That's just what it is. But so Elinor McKone asked if there's a developmental timeline for getting your way out of the other race effect. And so what she did was-- she's in Australia, and she got various communities of people who moved from dominant racial composition x to dominant racial composition y and who made that move at different ages. And so what she finds is that, actually, much like learning the phonemes of a language-- which, even if you-- hey, let me back up a second. I said that with phonemes, you can discriminate all those phonemes of all the world's languages at birth, and by six months, you've thrown away the abilities for all the phonemes your language doesn't discriminate. However, if you then go learn a foreign language sometime between six months and, say, 12, you can become a native speaker. So you can learn them back, right? So there's another window-- it gets narrowed-- but you still have a window to learn them back, OK? After you're like 12, 15, whatever, forget it. You won't be a native speaker, right? Same deal with the other race effect. This is exactly what McKone found with the other race effect. People who moved to a different dominant racial community learned the ability to natively discriminate people in that other race if they moved before age 12. So it really seems like there's some general ability. Oh, I remember David's other question. Why does this make sense?
I don't know exactly why it makes sense, but certainly, neural activity is expensive metabolically, and we don't want to make discriminations we don't have to. And so it can be just that the nervous system is learning what kinds of discriminations it needs to make in its environment and what kind it doesn't, right? And with the case of phonemes, it's actually part of what you're doing in speech perception, is you want to know, every time I say "ba," it sounds different in all different contexts. And so part of the essence of the difficulty in speech recognition is understanding that all those different "ba"s are the same sound, right? And so part of what perceptual narrowing might be doing is saying all those things-- "da," "ta," whatever it is in Hindi-- those are all going to count as the same thing. And that's going to help you process speech in your native language but hinder when you try to learn a foreign language. Yeah? AUDIENCE: So something I'm wondering with perceptual narrowing is how general like the starting point is. So I'm basically wondering-- because in the studies, they compared human and monkey faces. NANCY KANWISHER: Yeah. AUDIENCE: And I'm wondering if there's any correlation with how similar the DNA, like how they're able to discriminate between the faces. So whether that's different types of monkeys, or different animals-- NANCY KANWISHER: I'm not getting it, right? Early on, you can discriminate both, right? So what's the question? AUDIENCE: So I'm wondering what other animals can they discriminate, and what-- NANCY KANWISHER: I see, I see. How far does it go? Yeah, good question. I don't know that anybody has asked little kids if they can discriminate other kinds of faces other than monkey faces. I'm sure there's some limit to it-- like fish faces? Probably, I don't know, yeah. But there's also, actually, in terms of that extended-- I don't know the answer to that, yeah. There's going to be some limit. 
But in terms of the question of how long can you relearn those abilities or maintain them, it's not like perceptual narrowing is going to happen at six months automatically. So if you manipulate it-- so the studies on humans, if you send-- I feel like I said this in here before, but it must have been somewhere else-- if you send parents home with books with monkey pictures in them, parents of six-month-olds, and you say, look, every night, go through the book with your kids and say, there's Monkey Joe, and there's Monkey Bob, and there's Monkey Whoever with your kids, and you have them do that from age six months to 12 months, they don't perceptually narrow, because they continue to get that experience, right? Interestingly, if the parents go home and just say, look, look, that doesn't do it. You have to give them some social cue that is essentially saying, this thing is different from that thing. And if you do that with an infant, even when they don't really understand language much, they get that cue, and they learn to discriminate-- or they maintain their ability to discriminate monkey faces. Yeah? AUDIENCE: Does that hold up even when they're past the 12 months old? NANCY KANWISHER: Well, I'm guessing it will be just like the case that McKone showed with other race effects, right? I'm guessing the other species effect will be like the other race effect in that if you, say, start working in a monkey lab when you're eight years old-- that would be weird, but you could-- or you-- I don't know, whatever. Anyway, that you would be able to relearn it on the same time scale that you would relearn-- relearn, or learn for the first time, previously unfamiliar races of faces. But maybe those are slightly different timelines. Yeah? AUDIENCE: Could you do something similar with the monkey faces, but with phonemes in different languages? NANCY KANWISHER: I'm sure you can, and I'm sure that has been done, but I don't know that literature. Yeah. 
Yeah, you mean like keep-- well, OK. I mean, it essentially does get done, right? So kids who stay in environments-- let me think about this. Well, certainly, an infant who's being raised in a bilingual environment will maintain their ability to discriminate those phonemes from any of the languages they hear, right? AUDIENCE: So you're saying, with the monkeys thing, some kind of social cue to know that-- NANCY KANWISHER: I suspect that's true. I don't know this literature well enough. I do know-- yeah, actually, it's coming back dimly. Heather, do you know this? Janet Werker? AUDIENCE: [INAUDIBLE]. NANCY KANWISHER: OK, so Janet Werker is this amazing infant phoneme perception researcher. And I'm pretty sure that if you present infants with just, like, a TV in the background with a foreign language, even if the infant doesn't have much else to do, that's not enough. You need to look at them, and engage with them, and speak motherese-- like, hey, infant, blah, blah, right? I think you need to do all of that for them to maintain it, but I'm-- AUDIENCE: Yeah, that's correct. I think there also has to be interaction. They can't also just be watching the [INAUDIBLE]. It has to be slightly [INAUDIBLE] reciprocal [INAUDIBLE]. AUDIENCE: And the fact that [INAUDIBLE]. NANCY KANWISHER: Correct, yeah. AUDIENCE: So even if it's not just [INAUDIBLE], it has to be [INAUDIBLE]. NANCY KANWISHER: It has to be what? AUDIENCE: It has to be like [INAUDIBLE]. It can't be [INAUDIBLE]. AUDIENCE: Yeah, which makes me think of [INAUDIBLE] or something-- like if you interact in different ways, [INAUDIBLE]. NANCY KANWISHER: Cool, yeah? AUDIENCE: Yeah, I have a question about how long that [INAUDIBLE] lasts. If someone spoke a foreign language when they were younger, then moved somewhere else or were adopted and then stopped speaking the language, [INAUDIBLE], could they sort of be [INAUDIBLE]? NANCY KANWISHER: I don't know. I'm sure there's a literature on that.
You don't know that, Dana, do you? Sorry-- so you're raised bilingual, and then you stop having the experience early on with your second language, and then you're re-exposed later, at age eight? AUDIENCE: [INAUDIBLE]. AUDIENCE: Yeah, you still have the-- yeah, you maintain the [INAUDIBLE]. AUDIENCE: Yeah, like after-- NANCY KANWISHER: Well, but wait-- AUDIENCE: But you're not able to speak the language, right? AUDIENCE: Yeah. AUDIENCE: But you still [INAUDIBLE]. AUDIENCE: But I guess you-- NANCY KANWISHER: But then, that's not consistent with perceptual narrowing. AUDIENCE: If you're exposed to it before two years? AUDIENCE: Yeah. NANCY KANWISHER: Yeah. AUDIENCE: And then you move away? NANCY KANWISHER: Well, if it goes beyond that six-month thing, yeah, OK. AUDIENCE: I think that's the case, yeah. You might not have the higher structure, but if you have, like, the syntax and some vocabulary, you'll have a better accent than someone who did not have that early experience, who might not be able to differentiate [INAUDIBLE]. But-- AUDIENCE: You just [INAUDIBLE]. AUDIENCE: [INAUDIBLE], I think that's correct. NANCY KANWISHER: OK, good. One more question. Josh? AUDIENCE: So do we know of cases where there's [INAUDIBLE] a mismatch between [INAUDIBLE] sort of information? Like-- NANCY KANWISHER: Like this? AUDIENCE: Yeah, like-- with this property in some of the domain of some of the [INAUDIBLE]. Basically be [INAUDIBLE]. NANCY KANWISHER: Oh, god, I don't have my dictionary of knowledge filed that way so I can pull up an instance of that, but I'm sure there are loads of those. AUDIENCE: [INAUDIBLE]. NANCY KANWISHER: Yeah, well, because we're making all these assumptions about which behavioral ability is subserved by some particular activation in the brain. And mostly, we don't know, right? We know when we have the rare opportunities to do causal tests. Then we have a better idea that that system is at least causally involved in that behavioral ability.
But yeah, often, those links are much looser than we'd like. All right, see you guys Wednesday.
MIT 9.13 The Human Brain, Spring 2019. Lecture 4: Cognitive Neuroscience Methods I.

NANCY KANWISHER: All right, let's get started. So today we're going to talk at some length about what I mean by this idea of Marr's computational theory level of analysis. It's a way of asking questions about mind and brain, and we're going to talk about that in the case of color vision, and that's going to take a while. We'll go down and do the demo, we'll come back and talk about color vision, and how we think about it at the level of computational theory, and why that matters for mind and brain. And then in the second half we're going to start a whole session, which is going to roll into next class, on the methods we can use in cognitive neuroscience to understand the human brain. And we'll illustrate those with the case of face perception. We'll talk about the computational theory level, very briefly, of face perception, what you can learn from behavioral studies, and what you can learn from functional MRI, and then we'll go on and do other methods next time. Everybody with the program? All right. So to back up a little, the biggest theme addressed in this course, the big question we're trying to understand in this field, is how does the brain give rise to the mind? That's really what we're in it for. That's why there's lots of cognitive science. We're trying to understand how the mind emerges from this physical object. And so for the last few lectures, you've been learning some stuff about the physical basis of the brain, what it actually looks like. Some of you guys got to touch it. I hope you thought that was half as awesome as I did. And we got a sense of the basic physicality of the brain and some of its major parts. But now the agenda is, how are we going to explain how this physical object gives rise to something like the mind? And the first problem you encounter is, what is a mind, anyway? I drew it as a weird, big, amorphous cloud, because it's just not obvious how you think about minds, right? It feels like one of those things like,
could you even have a science of the mind? What is mind? It's all kind of nervous-making, right? And so our field of cognitive science, over the last few decades, has come up with this framework for how we can think about minds. And this isn't even a theory, it's more meta than that. It's a framework for thinking about what a mind is, and the framework is the idea that the mind is a set of computations that extract representations. Now, that's pretty abstract. You can think of a representation in your mind as anything from a percept, like, I see motion right now, or I see color. And as you learned before, you might see motion even if there isn't actually motion in the stimulus, but that representation of motion in your head, that percept, that's a kind of mental representation. Or if you're thinking, you know, why is Nancy going through this really basic stuff, she's insulting our intelligence-- if something like that is going on in the background as I'm lecturing, that's a thought, that's a mental representation of a sort. Or if you're thinking, oh my god, it's after 11:00 and I'm not going to get to eat until 12:30, I'm going to starve-- whatever thoughts are going through your head, those are mental representations too, right? And so the question is, how do we think about those? This idea that mental processes are computations and mental contents are representations implies that, ideally, in the long run, if we really understood minds, we'd be able to write the code to do everything that minds do, right? And that code would work, in some sense, in the same way. Now, that's a tall order. Mostly we can't do that yet-- like, not even close. A few little cases in perception, kind of, sort of, maybe, but mostly we can't do that yet. But that's the goal, that's the aspiration. And so the question is, how do we even get off the ground trying to launch this enterprise of coming up with an actual, precise computational theory of what minds do? And the first step to that is by
thinking about what is computed and why. And so that is the crux of David Marr's big idea, right? The brief reading assignment that I gave you guys, from Marr-- he's talking about how do we think about minds and brains. Step number one: what is computed and why? So we're going to focus on that for a bit here. And let's take vision, for example. You start with a world out there that sends light into your eyes. That's my icon of a retina, that blue thing in the back of your eyes. The world sends an image onto your eye, and then some magic happens, and then you know what you're looking at. So that's what we're trying to understand: what goes on in there? In a sense, what is the code that goes on in here that takes this as an input and delivers that as an output? More specifically, we can ask, as we did in the last couple of lectures-- let's take the case of visual motion. So suppose you're seeing a display like this, something in front of you, somebody jumps on a beach like that, and there's visual motion information. So that's your input. What are the kinds of outputs you might get from that? Well, to understand that, we need to know what is computed and why. So what is computed? Well, lots of things. You might see the presence of motion. You might see the presence of a person. Actually, you can detect people just from their pattern of motion. We should have done this at that demo-- write me a note to think about that next time. If we stuck little tiny LEDs on each of my joints, and we were in a totally black room, and I jumped around, and all you could see was those dots moving, you would see that it was a person. It would be trivially obvious. So motion can give you lots of information, aside from something's moving and what direction it's moving. You can see someone's jumping-- that also comes from the information about motion. You can infer something about the health of this person, or even their mood. So there's a huge range of kinds of information we glean from
even a pretty simple stimulus attribute like motion. And so if we're going to understand how we perceive motion, we first need to get organized about what's the input and which of those outputs we're talking about. And probably the code that goes on in between-- in your head, or in a computer program, if you ever figured out how to do that-- will be quite different for each of those things. But that's the way you need to be thinking about minds. What are the inputs? What are the outputs? And then, as soon as you pose that challenge-- OK, let's say it's just moving dots and you're trying to tell if that's a person-- think about what is the code you'd write. Just these moving dots. How the hell are you going to go from that to detecting whether those dots are on the joints of a person who's moving around, versus on something else? That's how you think about what the computational challenges are. And I'm not going to ever ask you guys to write that code. We're just going to consider it as a thought enterprise, to kind of see what the problem is that the brain is facing, that it's solving. And so Marr's big idea is that this whole business of thinking about what is computed and why, what the inputs and outputs are, and what the computational challenges are in getting from those inputs to those outputs-- all of that is a prerequisite for thinking about minds or brains. We can't understand what brains are doing until we first think about this. That's why I'm carrying on about this at some length. And Marr writes so beautifully that I'm just going to read some of my favorite paragraphs, because paraphrasing beautiful prose is a sin. So Marr says: trying to understand perception by studying only neurons is like trying to understand bird flight by studying only feathers. It just cannot be done. To understand bird flight, you need to understand aerodynamics; only then can one make sense of the structure of feathers and the shape of wings. Similarly, you can't reach an understanding of why neurons
in the visual system behave the way they do just by studying their anatomy and physiology. You have to understand the problem that's being solved. Further, he says: the nature of the computations that underlie perception depends more on the computational problems that have to be solved than on the particular hardware in which their solutions are implemented. So he's basically saying we could have a theory of any aspect of perception that would be essentially the same theory whether you write it in code and put it in a computer, or whether it's being implemented in a brain. Marr was many things. He was a visionary-- a visionary who studied vision-- a truly brilliant guy with a very strong engineering background. And this now pervades the whole field of cognitive science, that people take an engineering approach to understanding minds and brains, to try to really understand how they work. OK, so to better understand this, we're going to now consider the case of color vision. And so in this case, we start with color in the world. That sends images onto the back of your retina, some magic happens, and we get a bunch of information out. So the question we're going to consider is, what do we use color for? And we're going to use the same strategy we used in the Edgerton Center, of trying to understand some of the things that we use color for by experiencing perception without color. What are the outputs? So to do that, we're going to head over right now to the imaging center, and we're going to have a cool demo by Rosa. So it's going to be faster to leave your stuff here-- I don't know, maybe we should. Yeah, we'll lock the room. How long are we going to be today? 10 minutes, something like that. And I need everyone to boogie, because there's a lot of stuff I want to get through today. So let's go. All right, so what do we use color for when we have it? It's not a trick question-- it's supposed to be really obvious now. Yeah, what's your name?
AUDIENCE: Chardon. NANCY KANWISHER: Hi. AUDIENCE: [INAUDIBLE] NANCY KANWISHER: Yeah, yeah, choosing which one. What else? Related to that, but different. Yeah, yeah, yeah-- like, what did you notice that you could identify better? But besides identifying and choosing, what else? AUDIENCE: Much more generally, bringing things into our awareness, like with the reds in particular, the strawberries. NANCY KANWISHER: Yeah-- like, do you find them easier to find? AUDIENCE: No, much harder. NANCY KANWISHER: Oh yeah, right-- harder without the light, exactly. What else? AUDIENCE: Like driving-- like, you need to have color to know the traffic lights. NANCY KANWISHER: Totally, totally. That's a modern invention, but a really important one. What else? Are we very general, or, like, whatever-- what do we use color for? AUDIENCE: I mean, we use it to figure out what to eat, because one of the strawberries isn't actually a strawberry. NANCY KANWISHER: Yeah, I use color too. Uh-huh. And the bananas-- did anybody notice? Sometimes it's hard to tell. AUDIENCE: Yeah, the ones in the bag. NANCY KANWISHER: Say more. Totally. Did you feel like people's faces looked a little sickly? Absolutely, absolutely. OK, so this is just to show you that a lot of computational theory starts with sort of common sense, just reasoning about what we use this stuff for. It helps to not have it, just to reveal what we use it for. But you guys have just reinvented the key insights of the early field of color vision. So the standard story is: to find fruit. If you ask yourself, how many berries are here? Take a moment, get a mental tally. How many berries? OK, ready? Now, how many berries? OK, you see more. And in fact, there's a long literature showing that primates who have three cone types-- we're not going to go through all the physiological basis of cones and stuff like that-- have richer color vision, because of the number of different color receptors in their retina, and they're better at finding berries. And in fact, a paper came out a couple of years ago where they studied wild macaques on an island off of Puerto Rico called Cayo Santiago, and the macaques there have a natural
variation, genetically, where some of them have two color photoreceptor types instead of three. And in fact, they followed them around, and the monkeys that have three photoreceptor types are better at finding fruit than the ones that have only two. So that story, which had just been a story for a long time, turns out to be true. And also, as you guys have already said, color is used not just to find things, but to identify properties. You can probably tell whether you'd want to eat those bananas on the bottom-- maybe not. It's hard to tell, on the top, which ones you'd like, and yet that's all you need to know. So these are just a few of the ways that we use color, and why it's important. But there is a very big problem when we now try to figure out, OK, what is the code that goes between the wavelengths of light hitting your retina and figuring out what color that thing is? So here's the problem. We want to determine a property of the object, of its surface-- its color, right? That's a material property of that thing. So here's a term: we'll call that reflectance. It's a function of wavelength, but you can think of it for now as just a single number. It's a property of that surface. But all we have is the light coming from that object to our eyes. That's called luminance. I'm not going to test you on these particular words, but you should get the idea. So that's what we have-- that's our input. But here's the problem: the light that's coming off the object is a function not just of the object, but of the nature of the light that's shining on the object. That's called the illuminant. So the problem is, we have this equation: the light coming from the object to our eyes is the product of the properties of the surface and the incident light. And our problem is, we have to solve for R, the property of the object, given L. What is R? That's a problem. That's kind of like if I said, A times B is 48, please solve for A and B.
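The underdetermination being described here can be made concrete in a few lines of code. This is a hypothetical numerical sketch, not anything shown in the course: it treats each color channel as a single number with L = R × I, shows that many (R, I) pairs explain the same observed L, and then applies one classic extra assumption from the computational literature, the "gray-world" heuristic (assume the scene's average reflectance is a neutral mid-gray), to estimate the illuminant and recover approximate reflectances.

```python
import numpy as np

# Ill-posed inverse problem, per channel: luminance L = reflectance R x illuminant I.
# Given only L, many (R, I) pairs are consistent with the same observation.
L = 0.24
consistent_pairs = [(L / I, I) for I in (0.3, 0.6, 1.0)]
# e.g. a bright surface (R = 0.8) under dim light, or a dark one (R = 0.24) under full light.

# One extra assumption that makes the problem solvable: gray-world
# (the average reflectance over the whole scene is a neutral mid-gray, ~0.5).
rng = np.random.default_rng(0)
true_R = rng.uniform(0.0, 1.0, size=(100, 3))   # unknown surface reflectances (RGB)
true_I = np.array([1.0, 0.7, 0.4])              # unknown yellowish illuminant
L_scene = true_R * true_I                       # all the eye actually receives

I_hat = L_scene.mean(axis=0) / 0.5              # estimate illuminant from the scene average
R_hat = L_scene / I_hat                         # "discount the illuminant"
# R_hat now approximates true_R, even though no single patch determined it.
```

With 100 random surfaces, the per-channel scene mean sits close to 0.5, so the recovered reflectances land within a few percent of the true ones. The same discounting logic is what the car demo later in the lecture exploits: hold the patch's light constant, change the estimated illuminant, and the perceived R changes.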
That's known in the field as an ill-posed, or underdetermined, problem. We don't have enough information to uniquely solve this. That's a very, very deep problem in perception, and a lot of cognition. We are often-- in fact, most of the time-- in this boat. So the implication is, when we want to infer reflectance, the property of the object, from L, we must bring in other information, right? We must have some way to make guesses about I, about the light shining on that object. So the big point is, many, many inferences in perception and cognition are ill-posed in exactly this way. All right, and so here are two other examples of ill-posed problems in perception. In shape perception, you have a similar situation. You have stuff in the world that's making an image on the back of your eyes. That's optics. What we're trying to do as perceivers is reason backwards from that image: what object in the world caused that image on my retina? That's sometimes called inverse optics, because you're trying to reason the opposite way. That's basically what we're doing in vision. So here's a problem-- it's a crappy diagram, but if you can see here, there are three very different surface shapes here that are all casting the same image, for example, on a retina. You could do this with cardboard and cast it with a shadow. Does everybody get what this shows? What that means is, if you start with this and you have to reason backwards to the shape that caused it, that's an ill-posed problem, big time. It could be any of those things. This information doesn't constrain it. Does everybody see that problem? So that's another ill-posed problem. Here's a totally different example of an ill-posed problem that's big in cognition: when you learn the meaning of a word, especially as an infant trying to learn language. The classic example the philosophers like-- god knows why philosophers like weird stuff, but never mind-- somebody points to that and says, gavagai. And
your job is to figure out, what does gavagai mean? So gavagai could mean all kinds of different things. It could just mean rabbit, if you already have a concept of a rabbit. It could mean fur. It could mean ears. It could mean motion, if the rabbit is jumping around. Or, in the example the philosophers love, it could mean undetached rabbit parts. Weird, but anyway, philosophers like that kind of thing. The point is, it's ill-posed. We don't know from this what the correct meaning of the word is. Does everybody see how this underdetermines the correct meaning of the word? We don't have enough information to solve it. And so there's a whole literature on the extra assumptions that infants bring to bear to constrain that problem, so they can make a damn good guess about what the actual meaning of the word is. A whole big literature, quite fascinating. But for now, I just want you to understand what an ill-posed problem is, and why it's central to understanding perception and cognition. So, back to the case of color. As I said, the big point is that lots of inferences, including determining the reflectance of an object, are ill-posed, and so we have to bring in assumptions and knowledge from other places: from our knowledge of the statistics and the physics of the world, our knowledge of particular objects-- all kinds of other things must be brought to bear. So all of that, again, is considering the problem of color vision at the level of Marr's computational theory. Notice we haven't made any measurements yet. We've just thought about light and optics, and what the problem is, and what we use it for. All this stuff: what is extracted and why-- the reflectance of an object, useful for characterizing objects and finding them. What cues are available? Only L, and that's a problem, because it's ill-posed. Next question: obviously we get around, and we can figure out which colors are which, so what are the other sources of information that
we might use in principle, and that humans do use in practice? And so all of that kind of stuff has been done without making any measurements. We're just thinking about the problem itself. All right. So next, Marr's other levels of analysis-- algorithm and representation, and hardware-- are more standard ones you will have encountered, which is why I'm making a big deal of computational theory. It's really his major novel contribution, but it's better understood by contrast with these. So at the level of algorithm and representation, this is like, what is the code that you would write to solve that problem, right? And so we could ask, how does the system do what it does? Can we write the code to do it? And what assumptions and computations and representations would be entailed? So how would we find out how humans do this? Well, one of the ways is a slightly more organized version of what you guys just did, and that's called psychophysics. Psychophysics just means showing people stuff and asking them what they see, or playing them sounds and asking them what they hear. You can do it in very sophisticated, formalized ways, or you can do it like we just did: talk to us about what the world looks like. Usually psychophysics means a slightly more organized version. So here's an example. In fact, it's a cool demo, also from Rosa. What I'm going to do is show you a bunch of pictures of cars, and your task is going to be to shout out loud, as fast as you can, the color of the car. They're going to appear on the screen. Everyone ready? As fast as you can, shout it out loud. Here we go. What color? OK, interesting. Here's another one. Uh-huh, interesting. Ready? Here we go. Here's another one. OK, here's another one. Ah, you guys caught on to that pretty fast. So, good job. Nice consensus, although I noticed a little bit of transition there, which is very interesting. But here's the thing: all of those cars are the exact same color. The body of the car is the exact
same in all of them. And if you don't believe it, here-- I'm going to occlude everything except for a patch. Here we go. Boom. They're all gray. I know, it's awesome. It's Rosa that's awesome, not me-- I just had to bum this because it's so awesome. So Rosa spent months designing these stimuli to test particular ideas about vision, but the basic demo is simple and straightforward, and you can get the point here. So what's going on here? What's going on is that the algorithm running in your head that's trying to figure out the color of that car is trying to solve the ill-posed problem, and it's using other information than just the luminance of light coming from the object. It's using information from the rest of the image. It's making inferences about the illuminant, the light hitting the object. And in particular, when you look at that picture up there, what is the color of light shining on that car? Yeah, right-- officially known as teal in the field, but some of you shouted out green first, because that's what you saw first: the color of the light. What's the color of light hitting that car? Yeah, purple, magenta. Here? Yeah. And over there? Yeah, yellow, orange. Yeah. So basically, what your visual system did is look quickly, figure out the color of the incident light, and use that to solve the otherwise ill-posed problem of solving for R, the color of the car. And in this case, this demo shows that if you just change the color of the illuminant and hold constant the actual wavelengths coming from that patch, you can radically change the perceived color of the car. Everyone got that? Yeah? AUDIENCE: If I ran this through a computer and asked it to get the intensity of the pixel on the hood of the car there, it would not correspond to yellow? NANCY KANWISHER: Well, it depends what you're asking the computer, exactly. If you hold up a spectrophotometer that's just going to measure the wavelength of light, they're all gray. Right there,
on top of those cars, they're all the exact same neutral gray. That's just the raw physical light coming from that patch. But if you coded up the computer to do something smart-- coded it up to take other cues from the image, to try to figure out what the illuminant is and therefore solve for R-- you might be able to get it to do the right thing. AUDIENCE: I just mean, if you just look at the pixels, like in the matrix-- the color on the car, would it be what yellow is? NANCY KANWISHER: They're all gray. That's what I was trying to show you here: in fact, they are actually gray. Those are the cars underneath there, and you can see they're all exactly the same, and they're gray-- there's no color in it. Everyone got that? OK. So all of that is a little baby example of psychophysics, what we do at the level of trying to understand the algorithms and representations used by the mind, to try to figure out what strategies we use to solve problems about the visual world. And so behavior, or psychophysics, or seeing, as you just did, can reveal those assumptions, and reveal some of the tricks that we're using in the human visual system to solve those ill-posed problems. In this case, it was assumptions about the illuminant that enabled us to infer the reflectance from the luminance. The third level Marr talks about is the level of hardware implementation. In the case of brains, that's neurons and brains. We won't cover this in any detail here, but there's lots and lots of work on the brain basis of color vision-- we'll mention it briefly next time. This is some of Rosa's work, showing those little blue patches on the side of the monkey brain that are involved in color vision, and some work that Rosa did in my lab showing the bottom surface of the human brain, with a very similar organization-- those little blue patches in there that are particularly sensitive to color. So you can study brain regions that do that. If it's a monkey, you
can stick electrodes in there and record from individual neurons and see what they code for, and you can really tackle, at multiple levels, the hardware-- the neural basis-- of color vision in brains as well. So the big general point is, we need lots of levels of analysis to understand a problem like color vision. And accordingly, we need lots of methods to understand those things. All right, so what I want to do next is launch into this whole thing about the different methods that we can use in the field. We'll start it in this part of the lecture and go on to it next time, but let's get going. Everybody good with this so far? All right. So we're going to use the case of face perception to think about the different kinds of questions and different levels of analysis in face perception. So let me start by saying why face perception. Not just that I've worked on it for 20 years, although I'll admit that's relevant-- there are lots of other good reasons beyond that why we should care about face perception. So, I don't have a demo that enables me to put you in a situation where you can see everything but faces. That would be cool and informative if we could do that. But failing that, I can tell you about somebody who's in that situation, and this is a guy named Jacob, Jacob Hodes. So this is a picture of him recently. I met him around a decade ago, when he was a freshman at Swarthmore, and he sent me an email, and he said, I've just learned about face perception and the phenomenon of prosopagnosia, the fact that some people have a specific deficit in face recognition, and it explains everything in my life, and I want to meet you. And because he knew I worked on face perception, I said, that's awesome, I would love to meet you, but I've got to tell you, I'm not going to be able to help. So if you're interested in chatting, please, please come by, but I don't want you to feel like I'm going to be able to do anything useful. He said, no, I don't care, I just want to understand the
science. So he comes by. And by the way, one of the things that people have wondered for a while is, are people who have particular problems with face recognition just socially weird? Are they just bizarre-- maybe a little bit on the spectrum, they don't pay attention to faces, and so they don't get them very well, and so forth? Or can they be totally normal in every other respect, except for just face perception? And so I was very interested. I'd only emailed with this guy, and when he showed up in my office, within about 15 seconds it's like, this is the nicest, normalest kid you could ever meet. Such a nice guy-- so normal, socially adept, smart, thoughtful, lovely, lovely person. So I chatted with him for a long time, and he told me-- he was then halfway through; he grew up in Lynn, Massachusetts, and he went off to Swarthmore-- and in his freshman year he had been having a really rough time of it, because in his hometown, he was with the same group of kids all the way from first grade through high school. In fact, he just can't recognize faces at all-- never could. When he was a little kid, his mom used to drive him to the practice field, and they would sit there and come up with cues: this is how you tell that's Johnny, he's got this weird thing about him here; and this is how you tell that's Bobby. And they would practice and practice. And so he developed these clues to be able to figure out who was who in the small little cohort of kids that he knew, all the way through high school. Then he goes off to college, and it's all these new people, and he's screwed. And he said to me that he was just devastated, because he would go to a party, and he would meet someone and think, wow, this is a really nice person, I would really like to be this person's friend-- but he would realize he would have no way to find that person again. And the point is, you don't want-- it's kind of like oversharing to say,
when you've met somebody for 10 minutes like by the way i'm not going to be able to find you you have to find me it's like you just don't want to have to go there yet right so there's all kinds of things that would make it a real drag to not be able to recognize other faces and now having said all of that i'll say that a surprisingly large percent of the population is in jacob's situation about two percent of the population it will be unsurprising if there are one or two of you in here and if there is you can tell me later i'd love to scan you but about two percent of the population routinely fails to recognize family members people they know really well right and interestingly this is completely uncorrelated with iq or with any other perceptual ability your ability to read or recognize scenes or anything else yeah that's the kind of thing where you either have it or don't have it oh good question no it's a gradation so it's not like the bottom two percent are really screwed and everyone else is up here it's a hugely wide distribution and the point is that the bottom end of that distribution is really really bad like they just can't do it at all similarly the top end of that distribution is weirdly good they are so good at face recognition that they have to hide it socially because otherwise people feel creeped out for example some of those people they're called super recognizers have been hired by investigation services in london recently as part of their kind of crime solving unit those people are so good that one of them said to me we scanned a few of these people one of them recounted this event where she's standing in line waiting for movie tickets and she realizes that the person in front of her in line was sitting at the next table over at a cafe four years before she says if i share this information
with that person they'll be creeped out so i've just learned to keep it to myself but i know that was the same person right so there's a huge spread you had a question a while back about the patient is it so like for example could jacob looking at a person describe that absolutely he knows that it's a face he can tell if they're male or female he can tell if they're happy or sad it looks like a face to him it just doesn't look different than anyone else yeah is there any difference like okay like for example my father he can tell faces in person just fine but when he watches videos of people he just cannot recognize faces at all so is there any difference there are lots of cues i mean that's a very interesting exercise to think about what are the cues that you have in person right you have all kinds of other things first of all there's lots of constraining information about the person you're looking at there are all kinds of things you know about where you are and who that might be that help right so yeah there's many different cues to face recognition that might be engaged here so my point is just that face recognition matters like you can get by if you can't do it but it sucks it's really hard okay so more questions yes no they see the structure of a face they see a proper face if the eye was in the wrong place they would know they absolutely know the structure of the face they all just look kind of the same by the way we don't have time to talk about this in any detail but there's a well-known effect that probably many of you guys have experienced which is called the other race effect and that is the fact that they all look the same whoever they are if you have less experience looking at that group of people you're less well able to tell them apart okay i have this problem teaching all the time i grew up in a rural lilywhite community my face recognition is not so good to begin with and it's really not good for
non-caucasian faces it's embarrassing as hell it feels disrespectful i hate it you know i fault myself but actually it's just a fact of the perceptual system your perceptual system is tuned to the statistics of its input and it's not so plastic later in life and so a way to simulate a version that some of you may have experienced is whatever race of faces you have less experience with if you find those people hard to distinguish it's not that you can't tell it's a face it's not that you wouldn't be able to tell if the nose was in the wrong place it's just hard to tell one person from another so it's a lot like that i really need to get going so i'll take one more question and go wait could you like kind of use an analogy it's like being able to tell people apart by like their hands or something to the point that you can't really tell people apart by their hands usually so is that kind of how people feel like it's just looking at a body that's all you had yeah probably yeah and there is by the way an interesting literature you show people photographs of their own hand and a bunch of other hands people can't pick out their own hand so yeah you're right we're not so good at that okay i'm going to go ahead if you guys are interested i could post there's a whole fascinating literature here but actually i got dinged last year for talking about face recognition too much and prosopagnosia we all heard about in 9.00.
so i took most of that out and now you guys are asking me so i don't know what the right thing is but i'm going to go on and i will put some optional readings online especially if you send me an email and tell me to do that okay so the point is faces matter a lot they matter you know for the quality of life they're important because they convey a huge amount of information not just the identity of the person but also their age sex mood race direction of attention so if i'm lecturing like this right now and i start doing that you guys are going to wonder what the hell's going on over there yeah i saw a few heads turn i'm just doing a little demo here right we're very attuned to where other people are looking okay so that's just one of many different social cues we get from faces there's just an incredibly rich bunch of information in a face we read aspects of people's personality from the shape of their face even though it's been shown with some interesting recent studies that there's absolutely nothing you can infer about a person's personality from the shape of their face we all do it and we do it in systematic ways another reason this is important is that faces are some of the most common stimuli that we see in daily life starting from infancy where i think about 40 percent of waking time there's a face right in front of an infant's eyes and probably these abilities to extract all this information have been important throughout our primate ancestry so that's just to say there's a big space of face perception and now we're going to focus in on just face recognition telling who that person is all right so what questions do we want to answer about face recognition well a whole bunch of them and what methods do we want to use so let's start with some basic questions about face recognition well first as usual we want to know what is the structure of the problem in face recognition what are the inputs what are the outputs why is it hard right just as we've been doing for
motion and color that's marr's computational theory level we want to know how does face recognition actually work in humans what computations go on what representations are extracted and is that answer different are we running different code in our heads when we recognize faces from when we recognize toasters and apples and dogs okay another facet of that do we have a totally different system for face recognition from the recognition of all those other things if so then we might want different theories of how face recognition works from our theories of how object recognition works how quickly do we detect and recognize faces that'll help constrain what kinds of computations might be going on and of course how is face recognition actually implemented in neurons and brains so those are just some of the big wide open questions we want to answer so now let's consider what are our tools for considering these things and you guys should all know what tools are available for thinking at the level of marr's computational theory basically just thinking right you can collect some images too but basically to understand this we just think so for example as i keep saying at the level of marr's computational theory we want to know what is the problem to be solved what is the input what is the output how might you go from that input to that output okay so for example here's a stimulus that might hit a retina and then some magic happens and then you just say julia okay so we want to know what's going on in that magic okay and if a different image hits your retina you go oh that's brad now i live in a cave i barely get out of the lab but i understand that these are people most people recognize that's why i use them that's a question what goes on here in the middle and your first thought is well duh easy we could just make a template kind of store the pixels of that image and take the incoming image and see if it exactly matches and that's going to work
great right no why not louder yeah yeah absolutely that's not going to work at all and the problem is that we don't just have one picture of julia that we can match there are loads and loads of totally different kinds of pictures of julia all of which we look at and immediately go julia no problem okay and so that means what is it that we're doing in our heads if we're storing templates we have to store a lot of them to cover all those differences in the images so we could memorize lots of templates well that has long been taken as like the reductio ad absurdum like that's the ridiculous hypothesis how could that be how could there be room in here to store lots of templates of each person and furthermore how would that work for people we don't know the other idea which is very vague right now is that well maybe we extract something that's common across all of those maybe something like the distance between the eyes something about the shape of the mouth or other kinds of properties that might be invariant across those images that is that you could pull out that information from any of those images okay it's sounding very vague because it is vague nobody knows what those would be but the idea is maybe there's some image invariant properties of a face you can get from here that you can then store and use to recognize faces okay so now to think about this we can step back and say okay how is this done in machines so machine face recognition didn't work well at all until very recently okay and then all of a sudden a couple years ago as i said here's another paper a different one from the one that i showed you before this one is vgg face one of the major deep net systems for face recognition it's widely used there was another one the year before all of this since 2014 2015 hugely cited widely influential they're on all your smartphones boom it all just happened like nearly overnight okay with the availability of lots of images to
train deep nets so now these things are extremely effective and accurate and so in some sense those networks are possible models of what we're doing in our heads when we recognize faces it doesn't mean we do it in the same way but it's a possibility it's a hypothesis we could test okay yeah what is the current state of the literature surrounding getting other information from people's faces like moods lots there's like conferences and machine vision competitions on extracting you know personality properties mood properties every possible thing you can imagine this is a huge field in computer vision that a lot of people care about and there's also a huge field in cognitive science asking what humans pull from faces oh god others would know that better than me i bet it's pretty damn good a lot of it yeah i mean these things are suddenly extremely effective yeah okay and by the way later in the course my postdoc katharina dobs who knows that literature much better than i do will talk about deep nets and their application in human cognitive neuroscience and she knows a lot about the various networks that process face information okay so this is progress now we have some kind of computational model trouble is nobody really has an intuitive understanding of what vgg face is actually doing like you know how to train one up there it is but we don't really understand what it's doing and further we have no idea if what it's doing is anything like what humans are doing okay so it's progress that we have a model now that we didn't have like five years ago but we still have all these questions open okay so on this first question what we've discovered at the level of marr's computational theory is that a central if not the central challenge in face recognition is the huge variation across images right which you know just by thinking about it or trying to write the code okay so um oh i'm just
barely able i'm going to race along and anja's going to tell me in five minutes to switch okay so i want to talk just a little bit about behavioral data i'll run out of time and we'll roll this into next time because i want to include functional mri because you guys need it for the assignment okay so how are we going to figure out what humans represent about faces okay so here we are we consider this possibility that one way to solve this problem is by essentially memorizing lots of templates for each person another possibility is this kind of vague and inchoate idea that maybe there's some abstract representation that'll be the same across all of those how are we going to figure out which humans do well if we're really memorizing lots of templates for each person and that's how we recognize them in all their different guises that wouldn't work for people we didn't know that is you wouldn't be able to take two different photographs of the same person and know if it's the same person or not right because you could only do this by memorizing everybody get that idea whereas whatever this other idea is it should work somewhat for novel individuals you don't already know here are two photographs same person or different person so now let's ask can humans do this do we store lots of templates for individuals or can we do something more abstract well if we simply deal with this problem by storing lots of templates for each individual maybe not literally pixel templates but some kind of literal snapshot then the key test is we shouldn't be able to do this matching task if we don't know that person everybody get the logic here okay so let's try it so this paper a few years ago jenkins et al asked that question so here's what they did they collected a whole bunch of photographs of dutch politicians with multiple images of each politician okay then they gave them to people on cards and they said there are multiple images of each person and i'm
not going to tell you how many different politicians are in this deck just sort them in piles so there's a different pile for each person okay i'm going to show you a low-tech version of this i'm going to show you a whole bunch of pictures all in one array and you guys are going to try to figure out how many people are there okay everybody ready i'm just going to leave it up for a few seconds it's going to be lots of pictures your task is how many different individuals are depicted here here we go okay write down your best guess just kind of look around you know okay everybody got a guess okay write down your guess okay how many people think there are over 10 different individuals there one okay how many people think over five yeah probably half of you how many people think over three most of you there are two what does that mean that means you can't do it that means you can't match different images of the same person if you don't know that person pretty surprising isn't it we think we're so awesome at face recognition because most of the time what we're doing is recognizing people we know people we've seen in all different viewpoints and hair arrangements and stuff if you don't have lots of opportunity to store all those things and it's a novel face we're really bad at that okay yeah but there's a constraint of time yeah i was trying to make the demo work but okay so the way they do this task people have unlimited time and they're just kind of sorting them the mean number of piles that people made in this experiment was seven and a half the correct answer is two okay now you might say well maybe those are shitty photographs right okay so here's the control those are dutch politicians they then did the same experiment on dutch people who look at those photographs and in about two seconds say two duh okay so you know there's nothing wrong with those photographs it's just a matter of whether you know those people or not okay so
the point of all of this is this crazy story that in fact i'm sort of simplifying here but a lot of what we're doing in face recognition a lot of the way we deal with all this image variability is not that we have some very abstract fancy high-level representation of each individual face we just have lots of experience with faces and we use that so that if we have a novel face that we don't have all that experience with we're not so good at it i'm going to run out of time so i'll take one question and go on um how do they control for you know the issue you said about like if you don't have experience with like certain races yeah i'm sure whenever you do face recognition experiments you make sure that you know if your dominant subject pool is caucasian you have caucasian faces or whatever if there's something you don't understand i'm going to hang around after class you can ask me questions there or if you have to go you can email me because i really want to get through this next bit okay so there we are with that so what this suggests kind of sort of is that whatever we're doing it's something that benefits enormously from lots and lots of experience with that individual maybe it's not literal memorization of actual pixel-like snapshots but it's something more like that than anybody would have guessed before this experiment okay all right i'm gonna skip this awesome stuff here i'm actually gonna come back and do that slide next time too we're gonna cut straight to functional mri i'm sorry about this but i just really want you guys to have this background in case you don't you probably do but um so functional mri another cool method in cognitive neuroscience and how would it be useful here okay so first what is it functional mri is the same as regular mri that's in probably tens of thousands of hospitals around the world the big advances in
functional mri were when some physicists in the early 90s figured out how to take those images really fast and how to make images that reflect not just the density of tissue but the activity of neurons at each point in the brain okay that was big stuff okay early 1990s and the reason it's a big deal is that it is the best highest spatial resolution method for making pictures of human brain function non-invasively that means without opening up the head all right so that's an important thing that's why there's lots and lots of papers on it that's why we're going to spend a lot of time on it the bare basics are that the functional mri signal that's used is called the bold signal that stands for blood oxygenation level dependent signal okay what that means is the basic signal is blood flow and so the way it works is if a bunch of neurons some place in your brain start firing a lot that's metabolically expensive to make all those neurons fire and so you have to send more blood to that part of the brain so it's just like if you go for a run the muscles in your legs need more blood delivered to them to supply them metabolically for that increased activity and so the blood flow to your leg muscles will increase okay well similarly the blood flow increases to active parts of the brain now the weird part of it is that for reasons nobody completely understands the blood flow increase more than compensates for the oxygen use so the signal is actually backwards active parts of the brain have less not more deoxygenated hemoglobin compared to oxygenated hemoglobin and the relevance of that is that oxygenated hemoglobin and deoxygenated hemoglobin are magnetically different in the way that the mri signal can see so the basic signal you're looking at is how much oxygen is there in the blood in that part of the brain and hence how much blood flow went there and hence how much neural activity was there did that sort of make sense i'm not going to test you on which
is paramagnetic and which is diamagnetic i never remember i couldn't care less but you should know what the basic signal is right it's a magnetic difference that results from oxygenation differences that result from blood flow differences that result from neural activity and because it overcompensates for the metabolic use of the neurons the active parts that you see with an mri signal have more oxygenated hemoglobin right okay all right so that's the basic signal and because that's the basic signal there's a bunch of things we can tell already so first of all i'm going to skip over this it doesn't really matter because it's all based on blood flow one it's extremely indirect neural activity then blood flow change then overcompensation then a difference in magnetic response then the mri image right so you would think with all those different steps that you would get a really weird non-linear messy crappy signal out the other end and it is one of the major challenges to my personal atheism but actually you get a damn good signal out the other end and it's pretty linear with neural activity which seems like kind of a freaking miracle given how indirect it is okay but that has empowered this whole huge field to discover cool things about the organization of the brain okay nonetheless there are many caveats because it's blood flow the signal is limited in spatial resolution down to people fight about this but around a millimeter there are cowboys in the field who think that they can get less than a millimeter maybe i don't know it's debated and the temporal resolution is terrible blood flow changes take a long time think about it you start running how long does it take before the blood flow increases to your calves well if you're really fit it's probably fast but it's still going to take a few seconds it takes about six seconds for those blood flow changes in the brain after neural activity and it happens over a big sloppy chunk of
time and so you don't have much temporal resolution with functional mri does that make sense okay because it's this very indirect signal that also means that when we get a change in the mri signal we don't exactly know what's causing it is it synaptic activity is it actual neural firing is it one cell inhibiting another is it a cell making protein you know i mean it could be any of these things right so we don't know and that's a problem and another problem is the number you get out is just the intensity of the detection of deoxyhemoglobin it doesn't translate directly into an absolute amount of neural activity the consequence of that is all you can do is compare two conditions you can never say there was this exact amount of metabolic activity right there you can only say it was more in this condition than that condition okay all right so those are the major caveats nonetheless we can discover some cool stuff okay so to get back to face recognition let's suppose you wanted to know is face recognition a different problem in the brain from object recognition right if it was you might want to write different code to try to understand it from the code you're writing for object recognition it's something you'd kind of want to know okay so here's an experiment i did god 20 years ago anyway simplest possible thing it's the easiest way i can explain to you the bare bones of a simple mri experiment you pop the subject in the scanner you scan their head continuously for about five minutes while they look at a bunch of faces for 20 seconds they stare at a dot they look at a bunch of objects they stare at a dot okay five minute experiment you're scanning them that whole time and then you ask of each three-dimensional pixel or voxel in their brain whether the signal was higher in that voxel while the subject was looking at faces than while they were looking at objects okay and when you do that you get a blob i've outlined it in
green here but there's a little blob there this is a slice through the brain like this that blob is right in here on the bottom of the brain and the statistics are telling us that the mri signal is higher during the face epochs than the object epochs everybody with me here which implies very indirectly that the neural activity of that region was higher when this person was looking at faces than when they were looking at objects okay now whenever you see a blob like that really you want to see the data that went into it so here's mine this is now the raw average mri signal intensity in that bit of brain over the five minutes of the scan you can see the signal is higher in that region when the person is looking at faces these bars here than when they're looking at objects there everyone get that that's what the stats are telling us this is just the reality check of the data that produced those stats okay so now in fact you can see something like that in pretty much every normal person i could pop any of you in the scanner and in 10 minutes we'd find yours okay now here's the key question let's suppose you find this in anyone you do all the stats you like it's as robust as you could possibly want do these data alone tell us that that region is specifically responsive to faces no why not it might just be that certain arrangement of um features or it could be reacting to the light reflecting off good keep going what else yes you could do yeah then it might still be faces but it would be different if it's human faces versus any faces we kind of want to know right the code would be different yeah the face is a part of something absolutely where the object is the whole thing what else just objects are simpler or maybe just easier maybe it's just hard to distinguish one face from another and so you need more blood flow maybe what that thing is is a generic object recognition system but it has a
harder time distinguishing faces from each other because they're so similar so there's more activity everybody get that okay what else i'm going to go two minutes over so if people have to leave if that's okay i'll try not to go more than two minutes over what else yeah yeah as i just said there's all this stuff we get from a face not just who is it but you know are they healthy what mood are they in where are they looking and all that stuff okay so what you guys just did this is just basic common sense but it's also the essence of scientific reasoning and we'll do a lot of that in this class and the crux of the matter is here's some data here's an inference and so your job is to think is there any way that inference might not follow from those data how else might we account for those data okay you guys just did that beautifully okay so the essence of good science is whenever you see some data and an inference ask yourself how might that inference be wrong how else might we account for those data okay so that's what you guys just did i had previously made a list of other things that blob might mean it could respond to anything human you said any kind of face but it could also be just anything human maybe a response to hands any body part anything we pay more attention to anything that has curves in it or any of the suggestions you guys made okay so the crux of the matter and how you do a good functional mri experiment or make a strong claim about a part of the brain based on functional mri is to take all these alternative accounts seriously and so as just one example what we did in our very first paper is say okay there's lots of alternative accounts let's try to tackle a bunch of them so we scanned people looking at three-quarter views of faces and hands and we made them press a button whenever two consecutive hands were the same that's called a one-back task or whenever two consecutive faces were the same by design that task is harder on the
hands than the faces so we were forcing our subjects to pay more attention to the hands than the faces okay and what we found is you get the same blob still responding more to faces than hands and so the idea is that we've ruled out every one of those things it's not anything human it doesn't go to hands it's not just any body part it's not anything we paid attention to because we made them pay more attention to the hands it's not anything with a curvy outline okay and so that's just a little you know tiny example of how you can proceed in a systematic way to try to analyze what is actually driving this region of the brain you come up with a hypothesis and then you think of alternatives to the data and you come up with more hypotheses and then you think of ways to test them and we'll do a lot of that in here okay that's what i just said and i'll just say that there's lots of data since then that that region of the brain very much prefers faces and it's present in everyone and next time we will talk about the fact that that looks like it's suggesting that we have a different system for face recognition than for object recognition but we haven't yet nailed the case there's a lot left and you guys should all think about what remains okay thank you sorry i was racing there i will hang out if you guys have questions
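the blocked design she walks through — 20-second face and object epochs with fixation in between, then asking for each voxel whether the signal was higher during face epochs — can be sketched as a toy per-voxel comparison. this is a simulation on made-up data (the baseline of 100, the noise level, the selectivity gain, and the block structure are all invented for illustration; a real analysis would model hemodynamic lag and typically fit a general linear model rather than comparing raw epoch means):

```python
import random
import statistics

random.seed(0)

# Toy block design: 20 s faces, 20 s fixation, 20 s objects, 20 s fixation,
# sampled every 2 s (10 samples per epoch), repeated 4 times (~5 minutes).
block = ["face"] * 10 + ["fix"] * 10 + ["object"] * 10 + ["fix"] * 10
conditions = block * 4

def simulate_voxel(face_gain):
    """Baseline signal of 100, a bump of `face_gain` during face epochs, plus noise."""
    return [100 + (face_gain if c == "face" else 0) + random.gauss(0, 1)
            for c in conditions]

def face_vs_object_t(signal):
    """Welch-style t statistic comparing face epochs with object epochs."""
    f = [s for s, c in zip(signal, conditions) if c == "face"]
    o = [s for s, c in zip(signal, conditions) if c == "object"]
    se = (statistics.variance(f) / len(f) + statistics.variance(o) / len(o)) ** 0.5
    return (statistics.mean(f) - statistics.mean(o)) / se

face_voxel = face_vs_object_t(simulate_voxel(3.0))   # a face-selective voxel
other_voxel = face_vs_object_t(simulate_voxel(0.0))  # a non-selective voxel
print("selective t:", round(face_voxel, 1), "non-selective t:", round(other_voxel, 1))
```

scanning the whole brain just repeats this comparison over every voxel; the "blob" is the set of voxels whose statistic clears a threshold.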
MIT_6774_Physics_of_Microfabrication_Front_End_Processing_Fall_2004 | 11_Dopant_Diffusion_Review_Atomic_Scale_Models_Profile_Measurement_Techniques.txt

JUDY HOYT: This is-- today's lecture is going to be the final lecture on chapter 7. It's not the final lecture on diffusion. We're going to continue talking about diffusion in chapter 8, once we've talked about ion implantation. But this will finish up chapter 7. So hopefully you're finishing reading that chapter at this point. Diffusion is probably the biggest topic we cover in this course in terms of the overall complexity and the depth of the modeling. So there's a lot of points I want to cover or just try to review from last time. The last lecture was pretty dense. There was a lot of material to cover. So what we talked about last time was that there are these so-called Fermi-level effects, or high-concentration effects, and electric field effects, and that both of those become important when the carrier concentration-- either n or p, depending on how the material's doped-- when either of those carrier concentrations is greater than n sub i at the diffusion temperature, then these high-concentration effects become important. The basic idea of the model that we have is that the Fermi level moves up and down in the bandgap in response to the doping level, and as you're doped more heavily, that tends to increase the point defect concentration. We've seen that in one of your homework problems. You had to calculate that. And the increase in point defect concentration then increases the diffusivity of a dopant. And this leads to, or can lead to, generally more box-like profile shapes, because as the dopant diffuses, as you get out to the edge of that profile, the concentration is dropping. And as the doping concentration drops, the Fermi level is moving. The total point defect concentration is going down.
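The Fermi-level picture here — diffusivity rising with carrier concentration once n exceeds n sub i — is often written as an effective diffusivity that sums neutral and charged point defect contributions. A minimal numerical sketch of that idea, with made-up coefficients (not calibrated values for any real dopant) and an illustrative n sub i of 1e19 cm^-3, assuming full ionization:

```python
import math

# Made-up diffusivity components (cm^2/s) for a hypothetical donor dopant:
# neutral, singly negative, and doubly negative point defect contributions.
D0, Dm, Dmm = 1e-15, 2e-15, 1e-16

def n_over_ni(Nd, ni):
    """n/ni for donor density Nd, full ionization:
    n = Nd/2 + sqrt((Nd/2)^2 + ni^2)."""
    x = Nd / (2.0 * ni)
    return x + math.sqrt(x * x + 1.0)

def D_eff(Nd, ni=1e19):
    """Effective diffusivity: D0 + D-*(n/ni) + D=*(n/ni)^2."""
    r = n_over_ni(Nd, ni)
    return D0 + Dm * r + Dmm * r * r

# Below ni the diffusivity stays near its intrinsic value; well above ni it is
# strongly enhanced, which is what steepens the profile edge into a box shape.
for Nd in (1e17, 1e19, 1e20):
    print(f"Nd = {Nd:.0e}: D_eff/D_intrinsic = {D_eff(Nd) / D_eff(0):.2f}")
```

The drop in D_eff as the concentration falls toward the profile edge is what produces the steep fall-offs described next.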
So you tend to get these very steep fall-offs in your profiles. So in addition to this, this leads to this concentration-dependent diffusion. We talked about effects like oxidation-enhanced diffusion, which is abbreviated OED, oxidation-retarded diffusion, ORD, and the growth or shrinkage of stacking faults. And these phenomena can be explained not by looking at Fick's law per se, but by looking at atomic scale diffusion models. And in fact, what we talked about last time is that based on the impact on stacking faults, that oxidation, people believe, injects excess silicon interstitials into the bulk, while thermal nitridation, which, again, you may not be as familiar with nitridation, is basically the reaction of the silicon surface with a gas like ammonia to create silicon nitride. That process is believed to inject vacancies into the bulk. And based on monitoring these processes, people believe that boron and phosphorus diffuse primarily with interstitials. So they have an F sub I number, the fractional interstitial contribution to diffusion, of 1. And antimony diffuses primarily with vacancies. So F sub I is pretty close to zero, maybe 0.02. So today I wanted to review these atomic scale mechanisms because we kind of breezed through them at the end of the last lecture pretty fast. And then I want to go on to some new material and just give you a brief example, or a brief look at one example, of how the point defect gradient-- we talked about this last time-- that not only does the dopant gradient, D arsenic by DX, but the point defect gradient, for instance, the interstitial gradient, can actually drive diffusion in a way that actually looks like uphill diffusion. So it's kind of a non-Fickian phenomenon. And this leads to a phenomenon in MOSFETs-- in CMOS [INAUDIBLE]-- what's called the reverse short-channel effect. And I'll just introduce that. We'll talk about it in subsequent lectures in more detail.
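The concentration-dependent diffusion and box-like profiles described above can be illustrated with a toy one-dimensional simulation: take a diffusivity that rises with the local concentration (a crude stand-in for the Fermi-level effect) and compare against a constant diffusivity. This is only a sketch; all numbers here are arbitrary illustration values, not real silicon parameters.

```python
def diffuse(profile, d_of_c, steps, dt, dx):
    """Explicit conservative finite-difference solution of
    dC/dt = d/dx ( D(C) dC/dx ) with zero-flux boundaries."""
    c = list(profile)
    n = len(c)
    for _ in range(steps):
        # diffusivity evaluated at cell interfaces; flux[0] and flux[n] stay 0
        flux = [0.0] * (n + 1)
        for i in range(1, n):
            d = 0.5 * (d_of_c(c[i - 1]) + d_of_c(c[i]))
            flux[i] = -d * (c[i] - c[i - 1]) / dx
        c = [c[i] - dt * (flux[i + 1] - flux[i]) / dx for i in range(n)]
    return c

n, dx, dt = 100, 1.0, 0.1
start = [1.0] * 10 + [1e-6] * (n - 10)       # shallow doped layer on a low background
const = diffuse(start, lambda c: 1.0, 2000, dt, dx)
# Diffusivity proportional to concentration (crude Fermi-level stand-in):
# fast diffusion in the heavily doped region, slow at the profile edge.
conc_dep = diffuse(start, lambda c: 2.0 * c, 2000, dt, dx)
```

With the concentration-dependent diffusivity, the profile keeps a flat top and a steep front, and the deep tail is strongly suppressed compared to the constant-D Gaussian-like case.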
And then the second half of the lecture, we're going to talk about profile measurement techniques. How do we measure all these complex profiles? What's the best way to do it? OK, so let's review a little bit on this atomic scale mechanism. And what I would encourage you in thinking about the atomic scale mechanisms, rather than just looking at this particular, very static lattice and imagining how atoms might move, maybe you sit down with a piece of paper, start out and draw yourself a simple two-dimensional lattice. Draw silicon atoms everywhere except one point, which you call your vacancy. Draw a blue atom, or a pink or red or whatever, which you call your arsenic doping, or whatever it happens to be. And draw a couple of time steps for yourself. And see if you can figure out how this arsenic atom, this dopant atom, A, and the vacancy together, by staying in close proximity to each other, can move throughout the lattice. Give yourself a couple of time shots. And I think that will help understand how these things move together. So in this vacancy-assisted model or mechanism, what we do is we say A, where is A dopant-- it could be boron, phosphorus, arsenic, antimony-- we combine that with a vacancy, which is pictured here, an absence of a silicon atom. And it forms a pair, AV. OK, so you say, well, that's fairly simple. What's unique about this? Well, here A is a substitutional dopant atom. This pair AV is mobile. The substitutional dopant atom A is immobile. So what we're saying is that dopants in silicon do not move. They do not diffuse. This is the assumption, unless they are paired with either a vacancy or interstitial. But if they're just in the lattice and there were no vacancies around, no interstitials around, the dopant won't diffuse. So this is a hypothesis that we're making. And we use it to basically model all the diffusion of dopants in silicon. So here's the example of being paired with a vacancy. 
Obviously the dopant here could hop into this vacant site. The vacancy can move up here where the dopant was. Now what happens? Well, the dopant can stay put for a moment. The vacancy can move over here to this lattice site. And a silicon atom can move there. The vacancy can continue. It can move right here. And this silicon atom would move up there. Now I have the vacancy at this spot down here at the bottom and the dopant atom next to it. Well, now they can exchange places. And the dopant atom has moved here. So it moved from that point to that point. And how it got there was just by a couple of steps of the vacancy hanging around nearby as a pair, essentially. And that's how the dopant atom moved. Without that vacancy, it couldn't have. Similarly, there are interstitial- or interstitialcy-assisted mechanisms. And we write this as a chemical equation. Substitutional atom A, the dopant, plus silicon interstitial forms an AI pair. And it's only this AI pair that can move. And this is an example. There are a couple of examples here. One is called the kick-out mechanism, where you have a pure interstitial, a silicon atom that's actually in the interstitial spaces. It's an extra atom in the lattice. And it can come along and it can kick out that dopant atom. And then they exist as a pair. And then the dopant can kick out a silicon atom. And so they kind of move along, you can imagine, as this pair, always one of them-- either the dopant or the silicon atom-- being off-site, so to speak. That's one mechanism. The other is an interstitialcy mechanism. It's a little bit-- the distinction is a little bit different. In the interstitial mechanism, this atom that's here in the interstitial space is presumed to be not bonded, not covalently bonded in any way. In the interstitialcy mechanism, the idea is that a dopant and a silicon atom are kind of both bonded and both sharing partial bonds. They're sharing a substitutional site, if you want.
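The vacancy-assisted hop sequence just described-- the dopant moving only when the vacancy is on an adjacent site-- can be sketched as a toy lattice random walk, much like the paper-and-pencil exercise suggested above. This is purely illustrative: real hop rates are thermally activated and correlated, and the lattice here is a simple square grid rather than the diamond lattice of silicon.

```python
import random

def vacancy_assisted_walk(steps, seed=0):
    """Toy 2D lattice model of vacancy-assisted dopant diffusion.

    The dopant atom is immobile unless the vacancy sits on an adjacent
    site, in which case the two exchange places; otherwise the vacancy
    random-walks through the silicon lattice by exchanging with host
    atoms. Distances are in units of the lattice constant.
    """
    random.seed(seed)
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    dopant, vacancy = (0, 0), (1, 0)        # start as an AV pair
    dopant_hops = 0
    for _ in range(steps):
        dx, dy = random.choice(moves)
        target = (vacancy[0] + dx, vacancy[1] + dy)
        if target == dopant:
            # dopant hops into the vacant site; vacancy takes its old place
            dopant, vacancy = vacancy, dopant
            dopant_hops += 1
        else:
            # an ordinary silicon atom exchanges with the vacancy
            vacancy = target
    return dopant, dopant_hops

final_pos, hops = vacancy_assisted_walk(100000)
```

Every net displacement of the dopant is accounted for by an exchange with the vacancy; with the vacancy removed, the dopant in this model never moves at all, which is exactly the hypothesis stated above.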
There's an extra atom there because there's only supposed to be one atom at that site in the lattice. There's two of them. One of them happens to be silicon. One happens to be a dopant. And they can go along sharing. This can share, and then the dopant atom can then share with its neighbor, and it could diffuse perhaps along bond directions. So this is an interstitial- or interstitialcy-assisted mechanism. Again, unless it's paired with one of the point defects, we assume that a substitutional dopant atom otherwise cannot move. OK. So last time we also talked about this picture. I've shown it several times now, but hopefully you start to become comfortable with it. What this exemplifies is how we, in this field, use a process going on at the surface to probe the atomic scale mechanisms. The problem is nobody can really see interstitials. They diffuse too fast. There's no real way of looking at them with a microscope or something like that. That's a problem. So what we do more often is subject the surface to some kind of a process that we feel either injects interstitials or injects vacancies. And then we see what happens somewhere at a distance in the crystal to infer, or to probe, what we think is going on. Admittedly, it's indirect. But if you build up enough evidence, people believe they can make a case that this is what's probably going on. So there are two surface processes that are very commonly used. The first one is oxidation. And it's usually done locally. In order to do an OED experiment, you typically don't put one wafer in a furnace and oxidize it and put a second wafer in a furnace and have it under inert because, first of all, the furnace may not be the exact same temperature every run. We know they're not perfect. The temperature may be slightly different. That'll screw up your whole experiment. So typically, instead you start with a single wafer to do an OED experiment, and you coat part of the wafer-- maybe one die area or one small area of the chip.
You coat it with silicon nitride, which is a good barrier to oxidation, so it won't oxidize there. And you have down here a buried layer, some distance-- maybe half a micron or so-- below the surface. A marker layer of some dopant that's going to diffuse. And you're going to measure the extent that it diffuses. So the dark green here is meant to represent the dopant when it starts out at the beginning of the oxidation. The light green is the edge or the junction as it appears after some time of oxidation. And you can see underneath the region where you were doing LOCOS, where you were locally oxidizing the surface, you get a large spread of the boron layer. And this light green layer is very wide. Underneath the inert surface-- this is called inert because there is no oxidation taking place-- the boron doesn't diffuse nearly as much. So this is inert diffusion. This is OED, oxidation-enhanced diffusion. And also, schematically, in this cartoon, underneath the area where you're oxidizing, the stacking faults, these defects, are growing because they're adding silicon interstitials to the ends of these defects. And here they're just staying put or maybe even shrinking, but they're certainly not growing. So the key point is that people see the enhancement of boron diffusion, the oxidation enhancement, under the same conditions that cause stacking faults to grow. And stacking fault growth is interpreted to be injection of interstitials because people, from a materials point of view, believe that the way you make the stacking fault grow is to add interstitials. So that's sort of the key piece of evidence that links them. It's sort of like a criminal case. You really can never find the fingerprints, but you have some circumstantial evidence, and you put it all together, and you convict the person on trial. It's kind of like that. The circumstantial evidence-- and it's reasonably strong-- is that stacking faults are growing when you're oxidizing. Boron is also being enhanced.
Aha. Well, interstitials cause stacking faults to grow. Therefore, the interstitials are probably causing boron to diffuse more rapidly. That's kind of how the logic goes. Similarly, people have found that nitridation-- so that's the reaction of silicon with ammonia-- has exactly the opposite effect. The boron actually diffuses slower. It's retarded, as we say in its diffusion. And the stacking faults actually shrink relative to inert anneals. So people feel that nitridation injects vacancies. And that's how those simple arguments go. So what we say when we want to get to an atomistic level modeling-- and this is the type of model that when you're modeling the diffusion in SUPREM, we say that the dopants diffuse with a fraction, F sub I. So F sub I goes from 0 to 1 of interstitial type mechanism. And a fraction F sub V, which is 1 minus FI, of vacancy type mechanism. So we break down its diffusion coefficient into separate terms. There's one term, which is proportional to F sub I times CI over CI star. So again, F sub I, it goes from 0 to 1. If you believe the dopant only diffuses with interstitials and vacancies are insignificant, you would make F sub I equal to 1, like the case of boron or phosphorus. And then this term becomes negligible. If you believe that it only diffuses with vacancies like antimony, you'd make F sub V close to 1. If you think it's both, well, then you apportion it accordingly. And how do you figure out F sub I and F sub V, those proportions? Well, you subject the dopant in the silicon to different circumstances of injecting vacancies or injecting interstitials and seeing how much the diffusivity goes up or down under those conditions. And that's what people have done over the years to try to nail down F sub I and F sub V. So the DA here is the effective diffusivity. Now, it's being measured under any condition, but particularly under conditions where the point defect populations are disturbed or perturbed. 
So this would be during oxidation or nitridation. DA star. What does that mean? That's the normal equilibrium diffusivity of the dopant, measured under inert conditions. So again, when we're talking about diffusion and we're talking about inert, what we mean is no oxidation, no nitridation. Inert means nothing that injects excess point defects into the bulk. So by using this equation, you can understand how to model OED. What happens in OED is this term, CI over CI star-- the interstitial population relative to what it is in equilibrium-- goes up by some amount. Could be five, ten times higher. That causes the diffusivity to be enhanced by 5 to 10 times. In addition, because of recombination-- interstitials and vacancies can recombine-- if you pump up the interstitials, the vacancy population is going to go down. So this term can actually go down to a certain extent. And that's how the model accounts for OED, or for nitridation-retarded diffusion, because nitridation does just the opposite. It enhances the CV over CV star and suppresses this term. And depending on your dopant, your FI and your FV values, it will have more or less of an effect on diffusivity. And that gets modeled in the simulations. I actually pulled out of the literature some data just to show you some of the classic experiments that took place quite some time ago on people trying to determine F sub I and F sub V. And this is a paper by-- it's a review article. It's quite long. It's many pages, by Fahey, Griffin, and Plummer. It's getting older. It was published back in 1989, but that was sort of at the height of people really coalescing all this data on these atomic scale mechanisms. So it's a good review article if you want to understand in more detail than is given in your text about this. Since that time, of course, there have been new discoveries, and we'll talk about those newer discoveries when we talk about TED.
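The two-term model described above-- D_A / D_A* = F_I (C_I/C_I*) + F_V (C_V/C_V*), with F_V = 1 - F_I-- is easy to evaluate numerically. A minimal sketch, using the F_I values quoted in the lecture (about 1 for boron and phosphorus, about 0.02 for antimony) and assuming local interstitial-vacancy recombination equilibrium, C_I * C_V = C_I* * C_V*, so that pumping one population up pushes the other down:

```python
def enhancement(f_i, ci_ratio, cv_ratio):
    """Effective diffusivity ratio D_A / D_A* for a dopant that diffuses
    a fraction f_i by interstitials and (1 - f_i) by vacancies, given
    the perturbed point-defect ratios C_I/C_I* and C_V/C_V*."""
    return f_i * ci_ratio + (1.0 - f_i) * cv_ratio

def enhancement_under_injection(f_i, ci_ratio):
    """Assume I-V recombination equilibrium: C_V/C_V* = 1 / (C_I/C_I*)."""
    return enhancement(f_i, ci_ratio, 1.0 / ci_ratio)

# Oxidation injecting a 10x interstitial supersaturation:
boron = enhancement_under_injection(1.0, 10.0)       # F_I ~ 1  -> OED, ~10x
antimony = enhancement_under_injection(0.02, 10.0)   # F_I ~ 0.02 -> retarded
# Nitridation suppressing interstitials to 0.1x (vacancies up 10x):
antimony_nitr = enhancement_under_injection(0.02, 0.1)  # -> enhanced
```

This reproduces the qualitative pattern in the Fahey table: interstitial injection enhances boron and phosphorus but retards antimony, while vacancy injection does the reverse.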
But what people did was-- this is particular data I took from Fahey's article. This is an experiment on nitridation. So you're flowing ammonia at very high temperatures. Ammonia doesn't react at as low temperatures as oxygen does with the surface. So you kind of have to go to pretty high temperatures. That's one disadvantage of nitridation. So you typically see nitrogen experiments-- 900, 1,000, 1,100-- in order to get reasonable injection. So under nitridation, 1,100 degrees c and there are three different dopants studied here-- phosphorus, arsenic, and antimony. And this axis here shows the time averaged because they do diffusion experiments for a certain amount of time. Here you could do it for half an hour or five hours, 10 hours, 20 hours. And you take the time averaged enhancement, or time averaged diffusivity, that you get averaged over that time interval. And you divide it by DA star, so the equilibrium, the diffusion coefficient measured right next to it in the stripe right next to it in the inert case. So if I go a couple slides back to slide 3, so the DA star they measured right over here by measuring the amount of diffusion underneath this region, the inert region, and the DA time average they measured over here under either oxidation or nitridation. This particular one is nitridation. And so if you look at these dopants-- look at antimony here. What you see pretty much at all times, DA over DA star is enhanced under nitridation. So antimony has a strong dependence on the excess vacancy population. And that's how people came to eventually give antimony an F sub V number that's close to 1. Arsenic has some effect, but not very much. What happens in the case of phosphorus? Actually, we do nitridation and we inject vacancies. Phosphorus diffusion over time is actually slowing down. It's actually retarded? Now, how can that be? Well, let's say phosphorus doesn't diffuse by vacancy mechanism, but it diffuses almost entirely by interstitials. 
As I inject vacancies, you say, well, then it shouldn't have any effect, but it does, via recombination. Excess vacancies recombine, and they reduce the interstitial population lower than it would be in equilibrium. And that lower interstitial population lowers the diffusion coefficient. And so for phosphorus, this gives us a hint that F sub I is probably pretty close to 1. Then you can do the analogous experiment with these same dopants with oxidation and see how it reacts during injection of interstitials. And between those two experiments, you try to get an estimate of what this F sub I and F sub V value could be. So those are some classic experiments people have done. And in fact, also on the next slide, slide 6, I've taken a table from that paper that just summarizes what was known at that time, at the state of the art, about the interface processes at the surface and how they affect the diffusion of different dopants under different conditions, and also how they affect stacking faults. There are three columns here. The first and the last, we've already talked about. Oxidation injects interstitials, causes the vacancy population to go down, stacking faults grow. The right column is nitridation: the interstitial population goes down, vacancies go up, and stacking faults shrink. And for either one of these columns, you can see what happens, say, to phosphorus and boron diffusion. Now he's sort of broken this out into intrinsic diffusion, when n is less than or equal to NI, or extrinsic, when it's higher. In either case, when you do oxidation, phosphorus and boron are enhanced. Antimony is a little tricky, actually. There's a little bit of enhancement initially. But overall, the effect is believed to be retarded, or slowed down. Arsenic is a tough one because, again, its F sub I value is going to be close to half, it turns out. It can be enhanced to a certain extent or, depending on whether it's intrinsic or extrinsic, it can be retarded, so it's a little bit tricky under oxidation.
Arsenic also is enhanced under nitridation. So it's tough. That's why we believe that F sub I and F sub V are somewhat equally weighted, depending on the amount of enhancement. It's enhanced in both cases. Boron's easier to understand because it's a pure case. It's enhanced under oxidation, but it's retarded under nitridation. So people have concluded, based on a lot of data, that the F sub I is close to 1. Now, there's one other column in the middle that we didn't talk about in this class, and I don't think it's mentioned much in your text. But Fahey talks about it in this article. It's called oxynitridation. It's a little bit trickier. Oxynitridation refers to the fact that you start with a thin oxide-- so you have a thin oxide on the surface-- and then you nitride that. You subject it to high-temperature ammonia. You are doing something called [INAUDIBLE]. You're growing something called oxynitride. It's not pure nitride. It's not pure silicon dioxide. It's kind of got silicon, oxygen, and nitrogen. And so people believe-- because the stacking faults grow-- people believe that interstitials are injected and that the vacancy population goes down. But it's another marker, another process people use. It's a little bit harder, in some ways, to interpret. So that's a summary of some of their classic data. If I go on to slide 7, now I want to talk about what I think are some very clever experiments. What we've talked about so far, when I've been showing this cartoon, has been one-dimensional. So I've been assuming I'm very far from the interface between the neutral or the inert ambient and the reactive surface. So far I haven't talked about what happens near the interface. But you know, as you're injecting the interstitials here due to the oxidation, they don't just diffuse straight down. They diffuse out in a two-dimensional fashion. They diffuse down, sideways, over to the edge. So they're actually diffusing in different directions.
Now, if you're far from this edge, you don't see much effect of the interstitials. They primarily go down. But if you're near the edge here, you're going to see some edge effects. And in fact, how far from this stripe the enhancement of the oxidation occurs will give you some idea of how rapidly the interstitials are diffusing in this lateral direction. But there's another process besides just the fact that the interstitials are diffusing around, vertically and laterally. The other process is recombination, which we have to consider. So in fact, the net interstitial flux from the surface into the bulk is the difference between the generation rate-- there's a certain number of interstitials being generated per unit time, that's G-- and the number of interstitials being recombined at the surface per unit time, that's R. The net flux of interstitials injected-- if you think of G and R as fluxes-- is G minus R. OK, now, we know that the generation rate here is proportional to the oxidation rate. But the question is-- keep this in your mind-- the recombination rate at the active oxidizing interface is not necessarily the same as the recombination rate at the inert interface. For one thing, this interface is changing. It's continuously reacting and oxidizing. So it's not clear that R should be the same here as it is here in the inert. And in fact, it's not necessarily the same. Depending on the reaction, it can be quite different. So in SUPREM, you'll see various parameters. There'll be a diffusivity for a silicon interstitial. You need to know that. There'll also be a recombination rate. Sometimes they call it K sub S. There'll be a recombination constant associated with this oxidizing interface. And there'll be another recombination constant associated with the inert interface. And there'll also be bulk recombination.
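One simple limiting case of the injection-plus-recombination picture above can be worked out in closed form: in steady state, with bulk recombination alone (a lifetime tau toward the equilibrium population), the excess interstitial population decays exponentially away from the injecting surface with a length L = sqrt(D_I * tau). This is only a one-dimensional sketch, and the numerical values below are hypothetical, not fitted silicon parameters.

```python
import math

def supersaturation_profile(s_surface, d_i_cm2_s, tau_s, depths_um):
    """Steady-state C_I/C_I* vs depth under continuous surface injection,
    with bulk recombination only: the supersaturation (C_I/C_I* - 1)
    decays as exp(-x/L), where L = sqrt(D_I * tau)."""
    l_um = math.sqrt(d_i_cm2_s * tau_s) * 1e4   # decay length, cm -> um
    return [1.0 + (s_surface - 1.0) * math.exp(-x / l_um) for x in depths_um]

# Hypothetical values: 10x supersaturation at the surface,
# D_I = 1e-9 cm^2/s, bulk lifetime 100 s -> L ~ 3.2 um.
profile = supersaturation_profile(10.0, 1e-9, 100.0, [0.0, 1.0, 5.0, 20.0])
```

Adding surface recombination at nearby inert interfaces, as in the striped test structures discussed next, shortens the effective decay laterally as well, which is exactly what those experiments exploit.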
We need to know those three parameters if we're really going to understand what this lateral extent of the oxidation-enhanced diffusion looks like because, after all, if I'm getting a lot of surface recombination over here, the injected interstitials basically all get sucked into that surface and recombine. So the net excess interstitial population will fall off more rapidly. And here's an example on the right of a test structure that people came up with to try to study two-dimensional effects. And I think it's kind of a neat test structure. This is in cross-section now, so it's a little tricky. But I'm looking at three different cross-sections. So this is the surface of the sample. This is a phosphorus junction that was initially diffused in. So it has a starting junction depth, say, of half a micron, whatever. Phosphorus diffused into lightly doped boron. And they put stripes on the sample. And the stripes are masked such that the open regions where they cut away the nitride, these open stripes, are getting smaller and smaller as we go from left to right. So here's a pretty wide open region, narrower, narrower, until it gets to the point where it's a very narrow open region. So the nice thing is, what you're doing is you're sort of changing the region over which you inject these point defects. And then they did the oxidation. And what they see is the junction depth now, which initially was flat because it was diffused originally before they put the stripes down-- after they put the stripes down and they do oxidation, right underneath where the oxidation takes place, of course you see a big enhancement of the phosphorus. That's OED. But interestingly, look at the shape as you go to different opening widths of these open stripes. The overall junction depth kind of reaches the same point here, regardless of the width of the open area. It's changing a little bit in shape.
But nevertheless, it reaches the same junction depth. So what that's saying is that at this point down here, the interstitial concentration or supersaturation at the little tip here is about the same as it is here. It's about the same as it here, as it is here. And so it's not that much perturbed by the presence of, all around it, these interfaces, these inert interfaces. So the recombination probably at the oxidizing interface and at the inert interface, those rates are probably fairly comparable when you're talking about interstitials. But let's look at the other case. Here's another example. They did an experiment like that, but instead they started with an antimony junction. So this initial starting junction depth looks similar, but it was antimony. And what do they see as they decrease the opening width? In fact, the overall junction depth, even right below the opening, directly below it, actually goes down. This is for nitridation of antimony. Unlike here, where these tips all stay-- the end of the junction was the same regardless of stripe width. Here it's actually going down. So what's that saying? That's a two-dimensional effect. That's actually saying that directly below this opening where the nitridation is taking place, locally the supersaturation of vacancies is actually smaller than below an opening that's much wider. So these supersaturation of vacancies must be being impacted by recombination that's taking place on either side of that opening. And in fact, people believe that the vacancy recombination rate here at these inert interfaces is a lot faster than it is at the nitriding interface. And so the vacancy supersaturation level is actually impacted at the center point by what's happening all around it. So it's a two-dimensional diffusion and recombination problem. And there are a number of parameters. There's the diffusivity of the point defect. There's recombination at this surface. There's recombination at the reactant surface. 
And there's recombination in the bulk. So it's kind of a neat way of looking at things, even with a one-dimensional test structure-- I mean, all we have is one dimension here that we can measure, essentially, the junction depth. But we can get two-dimensional information by changing the stripe period. So actually, if you go on to slide number 8, I also took this from that same paper by Fahey, Griffin, and Plummer. And this is a little more quantitative description where they've actually done diffusion modeling of this two-dimensional diffusion problem, two-dimensional diffusion of the interstitials being injected and recombining, and what effect they would have on dopant diffusion. So it's a little bit tricky here, but what you're doing is a local oxidation. Out here on the wings there's nitride, which is the hatched region. Underneath the nitride, that's inert. That's an inert interface. So that's masked. Now what they did was they're looking at simulations for cases A and B. And case A is when the mask opening is fairly wide. So the mask opening in case A goes from here to here. Case B is you imagine the opening in the mask to be very small. So the region that's being oxidized is much narrower. And they did the simulations for the two cases, A and B. So A is the case, these two curves, for a wide opening, and B is for a narrow opening. But there are two different types of curves here. There's the solid and the dashed. In the solid curves, what they assumed in their simulations is a K sub S value such that the point defects were recombining more slowly at the inert interface compared to the oxidizing interface. So they're saying there's not that much recombination over here at the inert interface compared to the oxidizing.
When they do that, when they adjust those surface recombination velocities in that way, what they see is when you change the stripe opening from narrow to wide, you get profiles that look like the solid lines, where at the very center of the stripe the junction depths are almost the same-- which, if we just go one slide back to slide 7, looks almost like the phosphorus case in oxidation. The junction depths, regardless of the width of the opening, were about the same. And so you can actually fit this shape, this shape right here, to the experimentally measured shape by changing the ratio of the surface recombination velocity at this oxidizing interface to the inert. In the dashed lines, what did they assume? Well, the dashed lines show the case where the surface recombination velocity of interstitials at this oxidizing interface is about the same as the non-oxidizing. So they made them equal. If it recombines equally at this interface versus that interface, in fact, what you see for a narrow stripe is that the overall junction depth is much lower than it is for a wide stripe. So the enhancement is much less. So actually, people use the shapes of these junctions as a function of stripe width to infer something about the recombination velocities at this interface versus at the inert interface, just by changing the duty cycle. So this is how, in SUPREM IV, if you look at some of those coefficients, this is how they were actually measured. It's kind of a clever experiment. Slide 9 is actually showing you some experimental data, again, just to give you a feel, maybe from a different vantage point. It's a little bit hard to see this. But this is a photograph from a microscope. And you've got to get used to seeing this. It's been beveled and stained. So this surface up here that you're looking at is the top surface of the chip. This surface down here is the bevel.
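The bevel-and-stain measurement described next is simple trigonometry: a junction at depth x_j appears at a lateral distance x_j / tan(theta) along the bevel, so a very shallow bevel angle magnifies the depth into something an optical microscope can resolve. A small sketch, with hypothetical numbers:

```python
import math

def junction_depth_um(lateral_distance_um, bevel_angle_deg):
    """Recover the junction depth from the stained lateral distance
    measured along a beveled surface: x_j = L * tan(theta)."""
    return lateral_distance_um * math.tan(math.radians(bevel_angle_deg))

# A ~0.57 degree bevel magnifies depth by roughly 100x: a junction about
# 1 um deep shows up as a stained band about 100 um wide on the bevel.
depth = junction_depth_um(100.0, 0.573)   # hypothetical measurement
```

The magnification is just 1/tan(theta), which is why the bevel has to be polished at such a shallow, well-controlled angle for the depth numbers to be trustworthy.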
So if, actually, I could find a piece of chalk-- there happens to be one here, which is kind of rare. So what they do is, this is the top surface. This region here has been beveled at some very shallow angle. And so the top surface here has the stripes on it. So in the microscope, if you look at the top surface, it looks like this. So these are the regions here where the oxidation-enhanced diffusion took place. Here's a 50-micron opening, 25, 20, 10, all the way down to a 4-micron opening. This phosphorus region-- it turns out you can chemically etch the silicon surface, and it stains. So you get a different surface appearance under the microscope where it's N type. And so you use that. It's a really old-fashioned way of doing it. But beveling and staining the silicon was a way to measure the junction depth, because by doing this at a shallow enough angle, you can spread this out over a long distance, a distance such that you can see it in an optical microscope. So this was just to give you an idea of what the junction actually looks like. Under a wide opening, it looks like this. Under a narrow opening at 4 microns, it's about the same depth. So this was experimental evidence that people used and then fit the SUPREM profiles. If we go back one slide, they fit to that shape. And they found that the model looked more like the solid line model. So they could actually fit the recombination coefficients based on some of this data. So it was a clever experiment-- relatively simple techniques, oxidizing, patterning, and beveling and staining, at high temperatures, were used in some of these original experiments. OK, so that's to give you an idea of how some of these experiments were done originally. Now, let's talk about-- I want to talk about a specific example that we kind of whizzed through last time towards the end of the class notes. Let's talk about boron diffusion.
And we're going to say that boron diffuses, based on all the data that people have, with only two point defects. Of all the point defects it could have as pairs, these are the two that it prefers. It prefers a neutrally charged silicon interstitial, which we write as I0. And it pairs with that. And so we have a B, a boron minus-- again, boron substitutional in the lattice is an acceptor, so it has a net negative charge when it's substitutional in the lattice. It combines with a neutral interstitial and forms a pair, BI super minus. Minus because the net charge on that pair, if you consider it as a pair, is one negative electron charge. So it likes to do that. Or boron might like to pair with a positively charged interstitial, particularly because the concentration of these interstitials tends to go up at high doping concentrations. But even at low doping concentrations, you can see there might be some kind of coulombic interaction. A boron minus might pair with a positively charged interstitial and form a BI pair. The charge of this particular pair now is actually neutral. So we have two of these simple chemical reactions. And they give rise to fluxes of mobile species. So remember, originally, on the left-hand side, the boron is immobile. One of these guys comes along, pairs with it. Now it's mobile on the right-hand side. And we sum the fluxes of these two. And we say the total flux of boron in the sample is going to be the flux of the BI minus pair plus the flux of the BI neutral, uncharged, pair. So that's an example of how SUPREM might consider this, at least from a chemical equation point of view. So given those two simple equations, and all that we know about deviations from Fick's law-- I showed this last time, but again, I think we went through it a little too quickly on slide 11.
I just want to show, again, the overall equation that SUPREM IV is solving for boron, assuming it's diffusing with just two species, with neutral interstitials and with positive interstitials. This is the actual flux, so-called diffusion equation, that SUPREM is solving. And it's a far cry from what you would think in a simple case. You would think it would just be dC of boron by dt-- partial t, if you want-- is equal to a diffusivity times the second derivative of the concentration of boron with respect to x. This is what you might think. That's the simple version of boron diffusion from when we started this chapter 7. It's just regular old Gaussian-type diffusion. That equation looks pretty simple compared to the one up on slide 11. And this is actually what SUPREM is solving. So where do all these terms come from? Rather than deriving it, let's just sort of examine it and see if we can understand, based on what we've talked about, what the different effects are that are causing it to have this large term here in curly brackets. Well, the first thing you can notice is the concentration of boron with respect to time is the d by dx of a flux. That does hold. Everything in curly brackets here represents a flux. The question is, why does the flux look so complicated? Well, for the first part of the flux, there's a term that depends on DBI star. So what is that? That is inert, low-concentration diffusion of boron that's driven by the boron gradient itself. So that would be equivalent to this D up here. So that's sort of the simplest part of the equation. But multiplying DBI star, there is this interstitial supersaturation coefficient. CI over CI star appears right here. And again, that's because we're trying to take into account these non-equilibrium effects. The first part was sort of equilibrium diffusion, but now we have excess silicon interstitials and pair diffusion. So there's CI over CI star here. And its gradient is in here, as well.
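For contrast with the full SUPREM equation, the simple version-- dC/dt = D times the second derivative of C in x-- really can be integrated in a few lines with an explicit finite-difference scheme. This is a minimal sketch with made-up numbers (it's only stable when D*dt/dx^2 is at most 1/2), nothing like the implicit machinery a real process simulator uses:

```python
def diffuse(conc, diff, dx, dt, steps):
    # Explicit finite differences for dC/dt = D * d2C/dx2.
    # Endpoints are held fixed; stable when diff*dt/dx**2 <= 0.5.
    c = list(conc)
    r = diff * dt / dx ** 2
    assert r <= 0.5, "explicit scheme unstable"
    for _ in range(steps):
        c = ([c[0]] +
             [c[i] + r * (c[i + 1] - 2.0 * c[i] + c[i - 1])
              for i in range(1, len(c) - 1)] +
             [c[-1]])
    return c

# an implanted spike spreads out into a Gaussian-like profile:
# the peak height drops while the dose is (nearly) conserved
c0 = [0.0] * 51
c0[25] = 1.0e18
c1 = diffuse(c0, diff=1.0e-14, dx=1.0e-6, dt=25.0, steps=100)
```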
There are high concentration effects on the dopant diffusivity. So you recognize this thing in parentheses-- it's 1 plus beta P over NI, divided by 1 plus beta. That whole thing is the Fermi-level effect. That's the high concentration effect. Remember, beta was the ratio of the diffusivity of boron pairing with positively charged interstitials to that with neutral interstitials. And so as this kicks in, as P over NI gets large, it's going to bump up the effective diffusivity. So if you want, you can think of all these things here up front, these three terms, as all multiplying the inert diffusivity by some number that's going to pump it up, or maybe pump it down if the CI over CI star goes down under injection of vacancies. And this last bit here, this partial x of the natural log of everything in parentheses, that you covered last week when Maggie was lecturing. That's the electric field effect-- besides the concentration gradient that drives diffusion, we also have electric fields. We can have a field-aided term. So that's where this last term is coming from. So you can get a feel for where all these terms are coming from. You can't imagine doing this by hand. To solve this equation, obviously you're going to have to do something on the computer, something numerically. OK, so let's go to slide-- so that kind of gives you an example of the boron case. Now I want to give you an example of a case which is a little bit strange-- it puzzled people for a long time, but now is understood as an example of the impact of the gradient in the point defects. Not the gradient in the dopant itself, but a gradient in injected point defects. How are they going to change the device dopant profiles? And we're going to talk about this in much more detail once I talk about ion implant damage.
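Those multiplicative enhancement factors can be composed in a couple of lines. This is a schematic sketch of the idea, not SUPREM's implementation; the diffusivity, beta, and the doping/supersaturation numbers are all illustrative:

```python
def boron_deff(d_star, ci_ratio, beta, p_over_ni):
    # inert diffusivity d_star, scaled by the interstitial
    # supersaturation CI/CI* and by the Fermi-level factor
    # (1 + beta * p/ni) / (1 + beta)
    fermi_factor = (1.0 + beta * p_over_ni) / (1.0 + beta)
    return d_star * ci_ratio * fermi_factor

d_star = 3.0e-15                       # cm^2/s, illustrative only
# low doping (p = ni), equilibrium point defects: no enhancement
low = boron_deff(d_star, ci_ratio=1.0, beta=0.2, p_over_ni=1.0)
# heavy p-type doping plus a 10x interstitial supersaturation
high = boron_deff(d_star, ci_ratio=10.0, beta=0.2, p_over_ni=50.0)
```

With these made-up inputs the two factors together pump the effective diffusivity up by nearly two orders of magnitude, which is the flavor of what the curly-bracket prefactors are doing.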
But I just want to introduce it at this point to make the point that the gradient in the point defects can drive diffusion, as well. There is something called the reverse short channel effect that puzzled people for quite a bit in the early 90s, and this electrical effect in devices ended up being explained by boron-interstitial pair diffusion. So before we talk about the reverse short channel effect, which sounds really weird, how about what's the regular or the usual short channel effect? We talked about it a little bit several lectures ago. But basically, the usual short channel effect is, for a given process, the threshold voltage goes down as you decrease the L. So for a given process on chip, as you look at smaller and smaller devices made on chip and you plot the threshold voltage, what you'll see is it rolls off. Generally the VT goes down. It becomes easier to turn on the device. And here's a textbook reference if you want to go through some of the physics. But basically, what happens is as you bring the source and drain closer, the potential due to the drain actually starts interacting with the potential in the channel, and it starts having an effect. Ordinarily you'd like that not to be the case. You'd like to have just the gate have the only effect. So the lowering of the potential barrier by the drain voltage is what causes this to go down as you shrink the channel length. And how much that potential barrier can be raised or lowered is a strong function of the profile of the boron, say in an NFET, underneath the channel. So in fact, if we go to slide 13-- again, I took this from Taur and Ning's book on the fundamentals of modern VLSI devices for the device physics. Again, you don't have to understand the detail, but it just gives you an idea of where this comes from. Imagine here I have-- this is my source, my gate up here, and the drain over on the right.
And this distance, 0, is right at the source injection point. And L is the channel length. So that's right at the drain. And what he's plotted here in this book is the surface potential. So that's the potential at the surface that a carrier would experience as a function of distance along the channel. So here at 0, you're just starting in the source. And it looks something like this. Curve A is for the case of a 6-micron device. Curve B is a 1.25-micron device at, say, half a volt. And curve C-- when I put a drain bias of 5 volts, what happens? Instead of the potential just being lowered in the drain, because it's a short channel, the potential is actually lowered even in the center of the channel. And so it's that ability of the drain voltage to impact things that is the short-channel effect, and it is partly responsible for the lowering of the VT. So when you scale devices shorter, the way people counteract this is they dope underneath the channel. They counter-dope it more heavily with the opposite dopant. So they add more boron, for example. If you were to add more boron, there'd be much less of this effect of the potential being impacted by the drain. So there's a tendency, as you scale devices smaller and smaller, for the doping in the channel to go up. If you go on the ITRS roadmap, every year it goes up-- mid 10 to the 17th, 10 to the 18th, mid 10 to the 18th as we shrink devices. So people counteract the normal short channel effect by upping the doping in the channel. But what people saw-- the reverse short channel effect, which was confusing people-- is that for short channel lengths on the chip, it was actually found that the threshold voltage increased in a certain range of channel length. So as they shrunk the channel length in a certain range, VT actually went up, almost as if the doping in the channel was getting higher in that range.
So all these devices are fabricated on the same chip, subject to the same temperatures on the same wafer, same ion implants, everything. Why would the doping be different in the center of the channel, depending on the channel length? And people were mystified by this for a while. So the reverse short channel effect basically looked something like this. And this is kind of a backwards plot. But if you plot threshold voltage-- now this is on the right axis. Inverse channel length is increasing this way. But if you want to look at the upper X-axis, it's helpful. The channel length is here on the right going from 0.2 all the way up to 1 micron. Or if you're shrinking the device, you're going from here to here. So as I'm shrinking down here from, say, a 10-micron device down to 1 micron, what was actually happening from 10 to 1 micron? Actually, it's easier to see it if you look at the closed squares from the experiment. The VT was actually going up from about 1.1 volts up to 1.2 or 1.25 volts, which is a significant increase. So you're shrinking devices from 10 microns or 5 microns down to a third of a micron, and VT is going up. That's the opposite of what everyone expected from the old days from the regular short channel effect. So people call this the reverse short channel effect. And in fact, it turns out there's a paper in 1993 at IEDM from Rafferty. They were actually able to explain this based on simulating what they thought the boron profile would be in this nMOSFET in a case where there was transient enhanced diffusion. And we haven't yet talked about TED. But basically, what they found was that the source drain implants were injecting excess interstitials and setting up fluxes of interstitials that were then driving the boron, which was originally buried, driving it closer to the surface.
And if you made the channel shorter, this effect was even greater, basically because the center of the channel is being brought closer to the source drain regions where the excess interstitials were being pumped in. But in either case, whether it's OED or TED, the point was that they were able to explain, based on these diffusion models, that more boron, more p-type dopant, was ending up in the channel region of a short device, say a 0.3-micron device, than would be in a 1-micron device. More boron means that instead of the short channel effect and VT rolling off-- actually, there was so much extra boron that VT was going up. So the electrical engineers and the circuit people who would see these weird VT variations with channel length and didn't know about the processing were mystified as to how this could be happening. They just assumed, well, the boron concentration is the same on a short device and a long device. Well, actually, it's not. It can vary. And of course, that means you need these more complex models that take into account point defect injection from the sides in order to be able to model the boron profile accurately and to get the device VT right. So that's sort of a classic example of how these complex models impact device performance. And in fact, here's a cartoon explanation on slide 15, a two-dimensional SUPREM simulation of what's going on in the reverse short channel effect. And what's being shown here is a relatively short device, short channel length. And the different colors are the different contours corresponding to constant doping concentrations. And if you just for a moment imagine this is the drain over on the right, you see this little region here is the drain extension. And down here is the deep drain. Here's the source extension and the deep source extending down to about this blue-colored region.
These arrows that are emanating from the source and drain, they are interstitial fluxes coming from the surface of the source and drain. These interstitials, people believed in this particular model, were being injected due to the damage due to the ion implant. And we're going to talk in the next few lectures about ion implantation and how it damages. But just take that with a grain of salt, that you believe that there was a process that introduced a lot of excess interstitials only in these regions, not where the gate was. So these interstitials were coming in and they were diffusing all around. So these arrows represent interstitial fluxes. And they're recombining at the various interfaces. Now, look at this interstitial flux here that goes like this. This arrow points this direction and goes up towards the surface. So this interstitial flux, this gradient of interstitials where there's a high concentration here and going down at this point. That gradient of interstitials was actually dragging the boron with it. It was actually dragging the boron with it and moving the peak boron profile from where it originally was more deep in the sample, moving it up towards the surface. So in a short device, it was causing the boron to be higher at the surface. And so this is a two-dimensional representation. If you take a cut straight through the center, right here at x equals 0 and look along the y direction, in the vertical direction, you can see three different curves on this plot. So this is a plot right through the center, the concentration of boron versus depth into the device. So 0 would be right at the surface by the channel. And there are three different curves here. The red one is for one micron. And you can see it peaks here at a certain depth. 1 micron channel length. The blue, the dashed blue, is a quarter micron channel length. And the dashed green is a 0.18, even smaller. 
And indeed, what the model is predicting is that as I'm going to shorter and shorter channels, the amount of boron at the surface is going up here by a factor of almost 3. Something like that-- 3 to 4. So indeed, if you increase the boron concentration at the surface of a MOSFET by a factor of 3 to 4, the VT is going to go up. It's not going to go down. And how can the channel length impact this? Well, it's directly through this mechanism. With normal Fickian diffusion, the simple version of Fick's law, there was no way people using the simple models could get that to happen as a function of channel length. They had to invoke some other thing to take an original Gaussian-like boron profile, peaked right here, and then, in fact, move it in closer to the surface. In fact, if you just looked at these diffusion profiles and I didn't say anything about interstitial fluxes-- I just gave you these three profiles and I said, OK, look, is that normal Gaussian Fickian diffusion? It can't be. I mean, for normal Fickian diffusion, what happens is the profile gets broader, sure. You get more diffusion when it's driven by the gradient of the dopant, but the peak doesn't change. The position of the peak in normal Fickian diffusion never changes. If you did your homework and you did the Gaussian diffusion, you found, sure, it goes down with time, but it's not like the peak shifts over to the left or the right. It's weird. If it's driven by its own concentration gradient, by definition, the peak doesn't shift. But here's an example of diffusion where the peak was actually shifting. The only way to explain that is some non-Fickian sort of phenomenon. And in fact, people used this pairing, the fact that boron diffuses as a pair with interstitials, and the gradient of interstitials was dragging the boron, so much so that the peak of the boron was moving towards the surface.
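That peak-pinning property of pure Fick's law is easy to check against the exact Gaussian solution for an initial spike, C(x,t) = Q/sqrt(4*pi*D*t) * exp(-(x-x0)^2/(4*D*t)). Here's a small sketch with invented numbers:

```python
import math

def gaussian_profile(xs, x0, dose, diff, t):
    # exact Fickian solution for an initial delta of dose Q at x0:
    # C(x,t) = Q / sqrt(4*pi*D*t) * exp(-(x - x0)**2 / (4*D*t))
    s = 4.0 * diff * t
    return [dose / math.sqrt(math.pi * s) * math.exp(-(x - x0) ** 2 / s)
            for x in xs]

xs = [i * 1.0e-6 for i in range(101)]         # grid in cm
x0, dose, diff = 50.0e-6, 1.0e14, 1.0e-14     # illustrative values
early = gaussian_profile(xs, x0, dose, diff, t=600.0)
late = gaussian_profile(xs, x0, dose, diff, t=6000.0)
# broader and lower at the later time, but the peak stays at x0:
# diffusion driven only by the dopant's own gradient can never
# shift the peak position
```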
So we're going to talk about this in more detail when we do ion implant damage and how much interstitials are injected and all that. But it just gives you a-- I think it's a nice example of how the process engineers got together with a device people who couldn't figure out what the heck was going on and developed a process model that actually could explain pretty well the VT and predict what it should be. OK, so that's what I want to say now about this diffusion. I'm going to go on in slide 16 and talk about profile measurement techniques. I showed you some really old photographs from Plummer and Fahey in the late 80s. And I talked about beveling and staining. That's one way, but it was cheap and relatively simple, but not totally sophisticated. But let me give you some more examples. It's really critical-- you can see from the examples I've given you-- to have some method of measuring the dopant diffusion profile as it goes from the surface into depth in one dimension. And actually, if we're going to explain the reverse short channel effect like this, we really need a two-dimensional map of where all the dopants end up. That's not trivial. One dimension is pretty sophisticated. There are a number of methods. Probably one of the most sophisticated right now that you will use in your research or people use in the fab, or in semiconductor fabrication, is secondary ion mass spectrometry. I think we've talked a little bit about SIMS already. But that's probably the number one technique. And it continues to improve. It's improved dramatically over the last 10 years. As devices have shrunk, as junction depths have become narrower and narrower, they've found a way to make SIMS higher and higher resolution. SIMS only gives you the physical number of atoms of arsenic or boron per cubic centimeter. It's the physical chemical amount of atoms in the sample at any given point in depth. 
Sometimes you're interested in knowing not the number of arsenic atoms, but the number of electrons at that point per cubic centimeter. Those are different. Remember arsenic-- you can put a lot of doping in, but not all of it may be electrically active. Or it may be compensated by the presence of boron or another dopant. So there are electrical techniques like spreading resistance. We'll talk about one-dimensional CV and differential van der Pauw. These measure the carrier concentration. That's not the same as the dopant concentration. So 1D, though, is reasonably sophisticated. Two dimensions are still kind of tough. The methods are still being developed. They've certainly gotten beyond the junction staining methods. But they're generally indirect methods. They're more difficult. And this is an area where a lot of research and development is taking place just to develop the metrology tools and techniques. Here's an example of some 2D techniques I'll show you. Cross-section transmission electron microscopy is a way of visualizing junctions with chemical etching. It's like the modern analog of the old Paul Fahey days-- look in a microscope after you stain the junction. But it has a factor of 10 to 100 higher resolution. The scanning probe microscopy, I'll talk about. And the last one is inverse modeling. This is kind of a funny one, but it actually can be very useful in complicated cases. What people do is they take a device. They take all its current voltage characteristics, its capacitance voltage characteristics. They put them all in one giant database and try to figure out, given all those characteristics, what must be the doping profiles. So it's an inverse technique. Unfortunately, it relies on knowing really detailed electrical models because you're really extracting all this from electrical measurements.
So it's not the same as doing SIMS or whatever, but when you get to small dimensions and two dimensions, it is a technique people try to use. OK, so let's look at slide 17 and talk a little bit about SIMS. I've already mentioned that in the past, but now we're going to talk about it specifically for depth profiling of dopants. There are two modes for SIMS-- what's called dynamic or static. Dynamic is what you typically use. Dynamic means as the ion beam comes in and hits the surface, it's sputtering away a significant amount. And it continues to go deeper and deeper into the surface as a function of time, at some constant sputter rate, say 5 angstroms per second. It sputters away the surface. And it looks at the atoms that come off it. They get ionized. It puts them in a mass spec and it tells you at any given depth what's coming off. Static SIMS is a little different. The energy and the angle of the primary ion beam are adjusted such that it doesn't actually sputter very much. It's primarily just taking off what's at the surface. So it's very gentle and low energy. That's just for looking at what atoms or molecules are at the surface. But for depth profiling of dopants, you typically use dynamic SIMS. How does it work for depth profiling? Well, it's somewhat intuitive, and you should take yourself through these three cartoons. I think you'll have a better understanding. First case, I have a sample of silicon. It has some dopant A up at the top. At some depth, there's a layer of B. At this interface, let's say there's a spike of X. It's a different dopant or a different contaminant, maybe carbon or oxygen. And down here it's doped with C, whatever these three elements are. So what happens? As I'm sputtering, I'm bringing in my initial ion beam, it starts to create a crater. And it constantly is cratering the sample. And as a function of time, you're looking at what atoms come off in a spectrometer.
So if you look at the intensity as a function of time, the spectrometer can scan several different masses. Well, lo and behold, it finds for this time period, while it's sputtering through this cap region, it just detects dopant A. When it gets to this interface right here, it's starting to sputter off not only a little bit of dopant A, but a little bit of B, and also X pops in. And then as you continue to crater down and you get to this point, you're sputtering off B. So you're linearly in time sputtering through the sample. And you collect the intensity as a function of time. So that's what you get out of SIMS. You don't get doping concentration versus depth. You get the intensity-- how many ions are coming off per second, in counts per second on a detector, as a function of sputter time. So you have to somehow convert the x-axis of time into depth. Well, that seems obvious on this plot. If you know the sputter rate, you can figure it out-- let's say you measure at the end. You put the sample in a Dektak and you measure how deep the crater is. And you assume it's linear in time. Well, then I can figure out how many angstroms came off per second. And I can convert time to depth. And that's a big assumption there, because what you're assuming is that the sputtering rate is constant throughout the entire experiment and measurement. If you have different materials-- maybe you're going through silicon oxide and silicon-- sputter rates differ in those materials. So you run into some distortion of profiles. So you have to be very careful. But it's an assumption pretty much that people need to make. So you convert the time axis to depth. Intensity, that's even trickier because it's just intensity. It has to be calibrated to convert that to atoms per cubic centimeter. So I'll say a few words about how that conversion is done. But let me first show you-- I took this off the Charles Evans website, www.cea.com.
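The time-to-depth conversion described here is a one-liner once you accept the constant-sputter-rate assumption. A sketch, with my own function name and invented numbers:

```python
def time_to_depth(times_s, crater_depth_nm):
    # constant-sputter-rate assumption: calibrate nm/s from the
    # final crater depth (e.g. a stylus profilometer measurement)
    rate = crater_depth_nm / times_s[-1]
    return [t * rate for t in times_s]

# hypothetical 600 s profile that left a 300 nm crater: 0.5 nm/s
times = [0.0, 100.0, 200.0, 400.0, 600.0]
depths_nm = time_to_depth(times, crater_depth_nm=300.0)
```

The lecture's caveat applies directly: if the beam crosses an interface with a different sputter rate (say oxide into silicon), this linear mapping distorts the depth scale.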
Again, they're a large commercial company-- probably the largest in the world-- an international company whose business is materials analysis. And one of the big things that they focus on is dynamic SIMS, primarily for the semiconductor and magnetic industries. And this is just a little thing that they like to advertise to give you an idea of the detection limit, so the number of parts per billion or the percent of an impurity you can measure. It tends to be related, to a certain extent, to the analytical spot size. So how big is the little spot that you're looking at? And there are lots of different acronyms. And I apologize for all this. If you want to go on to their website, each one of these acronyms is defined. In fact, we've talked about some of them. Earlier in the course, we talked about TXRF, total reflection X-ray fluorescence. It measures about a centimeter. It only measures at the surface. And it's good for measuring in this range, 10 to the 17th atoms per CC. The most sensitive-- as you go down to parts per trillion, the only technique that can get into that range, and it's only for certain elements, is dynamic SIMS. You see this little bubble is in this range. It has a spot size maybe 10 microns. Maybe closer to 100 microns is more typical these days-- maybe a 100-micron spot size. But you can get elemental information with dynamic SIMS down into the part per billion, and maybe even hundreds of parts per trillion, kind of range. So it's what's used for profiling dopant atoms, because remember, dopant atoms can exist at very low concentrations. 10 to the 14th-- that could be a typical doping concentration. And SIMS is the only thing that can measure that right now from a chemical point of view. So the primary ions that we typically use-- I'll just mention there are two common ones. Oxygen is one ion that comes in.
People use oxygen because it enhances the secondary ions that come off. It enhances the production of positive ions. Oxygen tends to strip off electrons. So O2 plus will enhance the ionization of the atoms coming off. Because remember, if the atoms come off as neutrals, you can't mass analyze them in a spectrometer. So you've got to get the atoms off the sample, and you have to hope that they come off ionized. And the way you increase the ionization yield, so to speak, is you use oxygen to increase the positive ion production. It's good for groups 1 through 3, and the transition metals, in silicon. For instance, if you're profiling for boron, you typically use an O2 plus beam. Cesium plus, on the other hand, is just the opposite. It enhances the negative ion production. So it's used for groups 4 through 7. These are good electron acceptors, and they form negative secondary ions. So for arsenic, you would typically use a cesium primary beam. And they have machines set up for these two different beams-- so you use a different machine depending on what you're profiling. Now, the ion yield-- what I mean by the ion yield is the number of ionized species coming off of some element, say arsenic, compared to the total amount of sputtering of the silicon. That yield depends on the matrix material. So if you have arsenic in oxide and in silicon at the identical concentration, the ionization yield will be different. So it will come off with different intensities. That's called the matrix effect. So that's a bit of a problem. So the intensity has to be measured on a test sample that's calibrated, where you know the concentration. And you compare that intensity on the same day to whatever's coming off of your unknown sample. And so it's always calibrated to a known sample. And that's one of the big drawbacks of SIMS. You need standards. It's not an absolute measurement technique.
It's always relative. It's only as good as your standards. And in fact, on slide 21, I have an example of some actual SIMS data and how people quantify it. The x-axis we talked about-- quantifying is not too bad. You use a Dektak. You assume a constant sputter rate. You cross your fingers. And that's how you get the depth scale. How do I get the y scale, which was originally intensity? I need to convert that to concentration. Well, typically what people do is they take a sample that was ion implanted. And again, we haven't talked about implantation. That's next lecture. Implantation is an electrical means of very accurately controlling the integrated dose, the integral under the curve, of an element that you implant into a sample. And that accuracy of control makes the implanter a very good way of generating SIMS standards, because the implanter can exactly control the total number of atoms per square centimeter that go into the sample. And in fact, this was a phosphorus implant at a total dose of 1 times 10 to the 14th phosphorus per square centimeter. So that's given as a known. So once we know that, since we know this area under the curve, we can then convert intensity coming off on that given day to a certain concentration of phosphorus. So every time we want to measure an unknown-- so the right-hand sample was unknown. The left-hand sample, the concentration was known because it was ion implanted. So you use a known dose. So from this integrated ion implant dose, I can generate a sensitivity factor that enables me to convert from intensity coming off to a phosphorus concentration on the unknown sample. And so this is what I get, for example, on an unknown sample. But you always need a standard which you trust in order to compare it to. And it has to be compared on that same day, because the SIMS machines' calibrations can be changing from day to day.
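The sensitivity-factor calibration amounts to forcing the integral of the standard's measured profile to equal the known implant dose. Here's a minimal sketch with invented count data; only the 1e14/cm^2 dose matches the lecture's example:

```python
def sensitivity_factor(std_counts, dz_cm, dose_cm2):
    # force the integrated standard profile to equal the known
    # implant dose: dose = k * sum(I_i * dz).  k then converts
    # counts/s into atoms/cm^3.
    return dose_cm2 / (sum(std_counts) * dz_cm)

# standard: phosphorus implant with a known 1e14 /cm^2 dose,
# measured in uniform 20 nm depth steps (counts are invented)
std_counts = [10.0, 200.0, 600.0, 200.0, 10.0]
dz = 2.0e-6                                    # 20 nm in cm
k = sensitivity_factor(std_counts, dz, dose_cm2=1.0e14)

# unknown sample measured the same day, in the same matrix
unknown_counts = [5.0, 120.0, 400.0, 90.0, 4.0]
unknown_conc = [k * c for c in unknown_counts]     # atoms/cm^3
```

The same-day, same-matrix conditions from the lecture are what make a single scalar k defensible at all.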
And it has to be, in fact, hopefully a sample that has roughly the same peak amount of phosphorus as in your unknown. If it was dramatically different, you'd run into matrix-type effects. And it should be in the same material. Notice this was phosphorus in silicon in a known sample. This is phosphorus in silicon in an unknown sample. You wouldn't want to use phosphorus in silicon dioxide to calibrate this, because the matrix effect would kill you. OK, so that's an example of a practical idea of how the SIMS actually works. These are some considerations. People are very concerned about the depth resolution. What do I mean? Actually, let me go back for a minute to the depth resolution. Let's say this actual sample is really a box-like profile. What SIMS does to it is smear it out a little. The edges are not perfectly straight up and down. They have a little bit of exponential decay on the front and the back. How much of that is real? Is the phosphorus really decaying exponentially? Or is it really a box-like profile? What effects determine how well I can resolve the profile in depth? So these are some considerations. The depth resolution depends on the element you're profiling-- phosphorus or boron or whatever-- and the matrix that it's in, because after all-- remember, these primary cesium ions are coming in. They're hitting the phosphorus or the arsenic, and they're imparting energy to it. So of course, they can knock that arsenic or phosphorus in a little deeper. And of course they can smear out a profile. So you try to change the conditions so you can minimize the amount of ion damage or ion smearing. For sputter depth, the deeper you go and the deeper you make your crater, the more the bottom of the crater gets roughened, just by the sputtering process as a random process. When you have a rough bottom to your crater, well, you're pulling atoms off from slightly different depths at any given time.
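One way to picture this smearing is to convolve an ideal box profile with a one-sided exponential response function-- a crude stand-in for knock-on and crater-roughness broadening, with a made-up decay length:

```python
import math

def sims_smear(profile, decay_points):
    # convolve with a normalized, one-sided (causal) exponential
    # response: signal from a sharp layer keeps trailing off as
    # the crater deepens, so abrupt edges acquire exponential tails
    n = len(profile)
    kernel = [math.exp(-k / decay_points) for k in range(n)]
    norm = sum(kernel)
    return [sum(profile[i - k] * kernel[k] / norm
                for k in range(i + 1))
            for i in range(n)]

box = [0.0] * 10 + [1.0] * 10 + [0.0] * 20    # ideal marker layer
measured = sims_smear(box, decay_points=3.0)
# the measured trailing edge decays exponentially even though the
# real profile dropped abruptly -- exactly the ambiguity described
# in the lecture
```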
So the deeper you sputter, the worse the depth resolution for SIMS. So if you want to measure a sharp profile, try to put it near the surface, and you'll get a much more box-like profile than putting it 1 or 2 microns in, where the sputtering process has roughened the bottom of your crater. So people these days try to get sputtering processes that are as smooth as possible by rotating the sample, rastering the beam, and trying to make a really nice, flat, perfectly shaped crater bottom. They've developed a lot of techniques. Also, ultra-low-energy SIMS has been developed, where the incoming beam has such a low energy that it doesn't perturb the profiles too much. That's an important development of the last 10 years. OK, so that's chemical measurements. How about electrical? Well, remember I talked about beveling and staining? Well, you don't have to just bevel and stain. You can actually bevel, just like I talked about before. You create this surface that's beveled at a certain angle. And instead of staining, you take two little probes and you measure the resistance between those two probes. You flow a current and measure the voltage drop as a function of lateral distance along the bevel. As you move along the bevel laterally, of course you're also moving in depth. So you're essentially spreading out that profile by the sine of theta. So you can move laterally. And literally, a machine takes the probes and moves them laterally by certain steps. And you can convert this, then, to a resistivity, or resistance, plot as a function of depth. And you convert resistivity into carrier concentration. So this measures the doping concentration versus depth, not the dopant. Doping refers to the electrons or holes. So it's complementary to SIMS, which provides the chemical information. The depth resolution is not nearly as good.
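Two small conversions underlie spreading resistance profiling: lateral probe position to depth via the bevel angle, and resistivity to carrier concentration via an assumed mobility. Both are sketched below with illustrative numbers; the mobility value especially is only a placeholder, since in practice you rely on calibration curves:

```python
import math

Q = 1.602e-19                  # electron charge, coulombs

def bevel_depth_cm(lateral_cm, bevel_angle_deg):
    # moving laterally along the bevel moves you down into the
    # sample by a factor of sin(theta)
    return lateral_cm * math.sin(math.radians(bevel_angle_deg))

def carrier_conc(resistivity_ohm_cm, mobility_cm2_per_vs):
    # n = 1 / (q * rho * mu); single carrier type assumed, and the
    # mobility is an illustrative guess, not a calibration curve
    return 1.0 / (Q * resistivity_ohm_cm * mobility_cm2_per_vs)

# a 25-micron probe step on a 0.1-degree bevel steps ~44 nm in depth
step_depth = bevel_depth_cm(25.0e-4, 0.1)
# 0.01 ohm-cm n-type silicon with an assumed 1400 cm^2/V-s mobility
n = carrier_conc(0.01, 1400.0)
```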
I mean, with spreading resistance, you're lucky to measure a junction that's 1,000 angstroms deep. Typically with SIMS, you can measure 100-angstrom deep junctions. And the big problem is, again, it needs some standards. You need to know how to convert how from resistivity to carrier concentration. So you need to know these curves. There's some famous curves on the Silicon website. This company Silicon does spreading resistance for commercial purposes. You can go onto there, and they show you their calibration curves. But if you have a material other than silicon, single crystal-- let's say you have polysilicon-- the relationship between carrier concentration N and the dopant concentration is not very well understood. Or if you have silicon germanium, there are no standards. So spreading resistance is OK, but it does have some limitations. I won't go through this in any great detail. This is shown on slide 25. I'm referring you to the-- I pulled this off the Silicon website. But in fact, even in the most perfect case, you can try to correct the data for artifacts by solving Poisson's equation to account for things like space charge layers where you have an N and a P region meeting and things like that. There's a lot of literature on this. But you can see it's measuring junctions that are pretty deep, microns deep, not hundreds of angstroms. But it can be used to measure the well profile and things like that. Slide 26 actually has kind of an interesting technique. It's not quantitative, but if you want to talk about 2D, it can work. This is a cross-section TEM. So you're taking a sample from a silicon MOSFET and you've cut it and cross-sectioned it. And you've made that thickness of that specimen only about 2,000 angstroms thick. There's special ways of cutting through and cross-sectioning it. And then you send electrons through and you look at their diffraction patterns. So you can actually image the gate, the gate oxide, the silicon. 
And interestingly, what people have done is you take the specimen after you've cross-sectioned it and you dump it in some acid. And it turns out the acid etches much more rapidly regions that are very heavily N-type. So when you etch that region, you change the thickness of the sample and you change the transparency of the sample to electron beams. So where it's very heavily doped, it appears very light. The electrons don't-- they go right through. So the contrast on this gives you good qualitative information. And people believe that it delineates this line, that this dark line corresponds to a concentration of about 10 to the 19th. So it gives you an idea of where the edge of the source and drain might be, but it's qualitative. And it's very time-consuming. There's another 2D technique, again, based on beveling. Again, we'll go back to the same old bevel idea. Here's a MOSFET with a source and drain. People actually take an atomic force microscope, an AFM cantilever with a little tip. And they measure where they are across the sample, and they measure, at each point, a CV curve. So it's like very locally doing a capacitance voltage measurement, where the tip corresponds to the metal dot of the CV. And the backside contact corresponds to the back. And you're actually measuring-- you know from CV-- remember, we talked about CV on dots. If I put a dot over a uniformly doped sample, you can extract from the CV the local doping concentration. Well, they're doing it here, but on a very small scale with a very small AFM tip. So this is scanning capacitance microscopy. It's very tricky, though. The spatial resolution is limited by the probe size. The tip is only so small. And then it has fringing electric fields. So the actual area of the capacitor that you're creating with the tip is somewhat uncertain. It has to be modeled with sophisticated E&M modeling. But just to give you an idea, again, this is the website if you want to go there, if you're interested.
Basically how it works is, on an N-type CV curve, it takes dC/dV, the derivative of the capacitance with respect to the voltage. And you can relate that locally to the carrier concentration. Very similar to what you do in 1D CV, but now over a surface. And in fact, here's a beveled junction where this region corresponds to phosphorus. This is the scanning capacitance microscopy image. So this is very highly doped. This is the junction region in p-type silicon near the edge of a mask. So here's a mask. There's no phosphorus over here. It's lightly doped. So it's tricky. It's still under development, but it is kind of a popular topic for modern metrology. So let me just summarize about techniques for profile measurement. We talked about SIMS, the most popular. It does very good 1D profiles in depth, and it has the best sensitivity of any technique to the dopant concentration. Excellent depth resolution. Methods are still under development trying to improve it. The very near surface region is troublesome, but there have been recent improvements. You have to watch out for matrix effects. If you're profiling a dopant in oxide or nitride or silicon, the ion yield varies dramatically, so it has to be calibrated. Spreading resistance is generally one-dimensional, although people are trying to do it in two dimensions. But it measures carriers, so the active electrons and holes. Pretty good sensitivity. Depth resolution is not great. And it's hard to do shallow junctions. And you need to do some electromagnetic modeling to really understand it. These newer techniques are kind of exciting. These two-dimensional scanning capacitance and scanning resistance microscopy techniques using small probes are very interesting. They do rely on beveling, but there's a lot of advanced models for the process itself that are being developed in R&D today to try to come up with a better way to get quantitative two-dimensional dopant profile measurements.
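The dC/dV idea is easiest to see in the 1D CV case referred back to above. Here is a minimal sketch of extracting a uniform doping level from the slope of 1/C^2 versus bias; the area, built-in potential, and doping used to generate the synthetic data are hypothetical:

```python
# Sketch of the 1D CV doping extraction the scanning capacitance idea is
# built on: for a one-sided junction of area A, the slope of 1/C^2 versus
# bias gives the carrier concentration,
#   N = 2 / (q * eps_si * A^2 * |d(1/C^2)/dV|).
# The area, built-in potential, and doping below are hypothetical values
# used only to generate synthetic data.
import math

Q = 1.602e-19        # elementary charge (C)
EPS_SI = 1.04e-12    # permittivity of silicon (F/cm)

def doping_from_cv(v, c, area_cm2):
    """Extract N (cm^-3) from a two-point slope of 1/C^2 vs V."""
    inv_c2 = [1.0 / ci**2 for ci in c]
    slope = (inv_c2[-1] - inv_c2[0]) / (v[-1] - v[0])
    return 2.0 / (Q * EPS_SI * area_cm2**2 * abs(slope))

# Synthetic CV data for a uniform N = 1e16 cm^-3 sample, A = 1e-4 cm^2,
# built-in potential 0.7 V: C = A * sqrt(q*eps*N / (2*(Vbi + V))).
N_TRUE, AREA, VBI = 1e16, 1e-4, 0.7
volts = [0.0, 1.0, 2.0]
caps = [AREA * math.sqrt(Q * EPS_SI * N_TRUE / (2 * (VBI + v))) for v in volts]
n_extracted = doping_from_cv(volts, caps, AREA)   # recovers ~1e16
```

Scanning capacitance microscopy applies the same derivative idea, but with a tiny AFM tip as the top contact, which is why the uncertain tip area dominates the error budget.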
So summarizing on what we've talked about so far on dopant diffusion, we said that they diffuse by interacting with point defects, vacancies, and interstitials. The diffusivity is proportional to the concentration of those point defects. These point defect concentrations go up exponentially as I increase the temperature. And so the diffusivity goes up exponentially. They can also be changed by things other than temperature-- the local Fermi level-- the local doping concentration, that is. Ion implant damage, as we'll see in the next chapter, can change the point defect concentration. Surface processes like oxidation and nitridation change it. All of these affect the effective diffusivity. So the dopant diffusivity can vary in space. It can vary in time. All of that means that we cannot calculate accurate profiles in silicon devices by hand. We pretty much have to handle all that by doing numerical solutions. There's been a lot of progress in the last 10 or 15 years on getting physically based models for dopant diffusion that will actually help you predict electrical behavior. These simulators-- and we'll talk about it more when we give a lecture on SUPREM IV-- allow you to fully couple the diffusion of the point defects. So you solve for the diffusion of the interstitials and vacancies, and you solve the diffusion of the dopants at the same time. The problem with all these models is there's a lot of parameters. And for any parameters that you don't know-- you can get beautiful profiles, but all the parameters need to be calibrated. So that's about all I have for today. And if you're handing in your homework 3, please bring it up front to this folder.
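The exponential temperature dependence in that summary is the usual Arrhenius form. Here's a minimal sketch; the prefactor and 3.5 eV activation energy are illustrative textbook-order values for a dopant in silicon, not parameters quoted in the lecture:

```python
# The exponential temperature dependence in the summary is the usual
# Arrhenius form, D = D0 * exp(-Ea / (k*T)). The prefactor and 3.5 eV
# activation energy are illustrative textbook-order values for a dopant
# in silicon, not parameters from the lecture.
import math

K_B = 8.617e-5   # Boltzmann constant (eV/K)

def diffusivity(t_celsius, d0_cm2_s=10.0, ea_ev=3.5):
    """Arrhenius diffusivity in cm^2/s."""
    return d0_cm2_s * math.exp(-ea_ev / (K_B * (t_celsius + 273.15)))

# Going from 1000 C to 1100 C raises D by roughly a factor of 10, which
# is why furnace temperature control matters so much.
ratio = diffusivity(1100) / diffusivity(1000)
```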
MIT 6.774 Physics of Microfabrication: Front End Processing, Fall 2004. Lecture 20: Etching Introduction.

JUDY HOYT: Just remind people where we are on the course schedule, we just finished up chapter 9 on deposition and epitaxy, and we're going to plow ahead, moving along to chapter 10, which we'll cover in the next couple of lectures. Chapter 10 is on etching, which is a very important capability for CMOS processing. And today is the 18th, and I guess homework number 5 is due today. So I brought a new folder here up front, this yellow folder, we'll use to hold your homework number 5. Homework number 4's been graded, but I need to go through it, so I'll have that back to you next week. So let's take a look at today's handout, which is handout number 33, and we're starting chapter 10 on etching. We've talked a lot about, now, a lot of different processes. We spent the last three lectures talking about depositing thin films. After they're deposited, typically they have to be etched to make some kind of a circuit pattern or whatever you're trying to pattern. An important requirement of etching that we're going to talk about is that films need to be etched selectively. You want to etch one film, but you don't want to etch what's underneath it typically, or sometimes, you don't want to etch the mask. So selectivity is very important, and we'll talk about that. Besides etching thin films, sometimes we want to etch down into the substrate itself-- for DRAMs these days, so you don't use so much chip area, the storage capacitors are sometimes made in a trench process. They actually etch deep trenches into the silicon and make capacitors there to store the charge for the memory that way. So sometimes you need to etch the silicon substrate. Dry etching, as it's called, or plasma etching, is really critical to CMOS fabrication. I'm going to emphasize dry etching. I only have two or three slides on wet etching.
Wet etching is still used in research and development, but it's very rarely used in a manufacturing process, and we'll see a little bit why that is. So the next two lectures, what I want to try to do is give you a basic introduction to etching, mostly plasma etching, some specific examples. The next lecture, I'm going to have some specific emphasis on gate etching, polysilicon gate etching, since that's really very important to the CMOS process. And we'll talk a little bit, next lecture as well, on modeling, how we model etching processes. Whoops, I forgot to go to full screen mode here. Let me do that. So let's move on to slide number 2 in your handout. Basic introduction, this is what we talked about in the beginning of the course. Just to remind you, the planar process, what it consists of, usually, you'll have a deposited film here, which is shown in blue, and then to pattern it, you need to put down a photosensitive film, which is called photoresist, that can be put down at room temperature or spun on. You then expose it through a mask using ultraviolet light, or the right wavelength light. That either causes the resist to stay wherever it's struck by the light or to be removed or etched where it's struck by the light. Either way is a positive or negative resist, and you end up with, here, after developing the resist, you end up with an etch mask. So I have photoresist shown here in gray, and you're ready now to etch the blue thin film below it, and you do that either in a dry or a wet process. Then you remove the resist, and you're done. So now, you've transferred the pattern from the mask to the resist and from the resist to the film. So we won't talk about the lithographic process of transferring to the masks to the resist. That's a whole class, Hank Smith's course. What we're going to talk about here is how we etch this thin film. 
The most important things people care about are selectivity and directionality, and I'll talk a little bit more about what they mean. Selectivity comes from the chemistry. That is, how selective is the etchant to a particular material. The directionality, that is, how fast it etches vertically versus horizontally, usually comes from more physical processes, like energetic ions bombarding the wafer. And modern etching techniques have a little bit of directionality and a little bit of selectivity, both of those. Simulation tools have only more recently been developed. They are starting to play an important role, just like in deposition. Again, you will not find etching modeled in SUPREM-IV. There is no topographic modeling in that. You need a topography simulator like SPEEDIE, and usually, topography simulators do both deposition and etching because they are really inverse processes. In one case, you have things coming down, reacting and sticking. In the other case, you have things coming down, causing a removal reaction, and something coming off, in etching. But they're mathematically very similar processes. So slide number 3 shows a little bit about these characteristics, some of the important characteristics that I just talked about, and illustrates them. Up at the top, we have a situation where we start with photoresist or a mask. Let's say the dark regions on top are the pattern or the mask, and we want to transfer that into this hatched region, which is the film we're trying to etch. After etching, shown on the upper right, what you have is, here, you see the mask, which is retained. Good fidelity, the mask was not attacked, so there's good selectivity. But we have what we call undercutting. You can see what happened is not only did we etch the film vertically, but we also etched laterally on each side, and so the feature size has been distorted. It's not exactly what the feature size was on the mask because of this undercutting.
So that's the top two pictures illustrate undercutting. The second one here, labeled B, illustrates two things, both undercutting and poor selectivity. Up here, I had good selectivity. The resist was not attacked, and the layer below, whatever it might be, was not attacked by the etch. You noticed, it went down, and it stopped. Here, I have both undercutting and poor selectivity. Indeed, you can see what's happened. I've etched laterally left and right, so my holes end up being bigger than I had wanted them to be on the mask, and I've also attacked the layer underneath, so I don't have very good selectivity. The etchant continues to etch this layer-- it continues to etch the bottom layer as it's in the solution, or maybe in the plasma, and not only that, the etch also attacked the photoresist. And we caused it to get rounded, and that also can distort the feature size. Usually, what we want most of the time, although it's not always the case, most of the time, we want what we call anisotropic etching. We want the etching to go down perfectly straight. Generally maybe a slight angle, but we want to be able to control the angle in any case, almost vertical. And usually, we want highly selective, so we'd like to have what we call a selectivity on the order of 25 to 50. So that means the etch would etch this top film, the film we're trying to etch, 25 to 50 times faster than it etches the film underneath it. Both of these, anisotropy and selectivity, are usually desired. Unfortunately, it's very hard to get both, the best of both worlds, but there are systems that try to optimize this, and that's what we'll talk about. So as we move on the bottom of slide 3, as you move from the left panel A to B to C, what you're moving is from completely isotropic etching, as you can see by the little vectors here, that the film etches-- this particular etching is etching this material vertically at the same rate it is laterally. Here's a little more anisotropic. 
The vertical etch rate is faster than it is laterally, but still, there's some lateral etching, and this is completely anisotropic. There is no lateral etch rate, only vertical etching. So you're moving from completely isotropic to completely anisotropic, from left to right, so you're getting more and more directionality. On slide 4, I've listed some things that are fairly obvious to you, but I just wanted to make note of some of the practical requirements on etching. First of all, we want to get the right profile, whether we want sloped or vertical. That's obvious. We don't want much undercutting. Undercutting, or etch bias, is bad because the feature size on the mask is not maintained on the wafer, and that's not what the circuit designer designed. So that's not good, and you have a lack of control. You want good selectivity to other exposed films and to resist. You don't want the etchant attacking something that you don't want it to attack. You'd like the etch to be uniform across the wafer, so as you etch a trench on the left of the wafer and the right of the wafer, the trench depth is exactly the same. This is an important one that people-- number five, people often forget. We don't want to damage the surface, and we don't want to damage the circuit electrically. In a lot of these plasma etches, we're bombarding the surface with ions at hundreds of electron volts. And those ions can constitute a current to the wafer, and that current can actually damage sensitive electronic components. Obviously, we want the etch to be clean. We don't want to introduce a lot of impurities. We don't want to introduce a lot of metals into the wafer. And we want it to be safe. So here's an example, on the bottom part of page four, of what I mean by plasma damage. Let's just say you have a MOSFET. You have a field oxide, FOx, here on the left, another one on the right, and this thin region in the middle is meant to represent the gate oxide.
And let's say-- and I have a polysilicon gate that looks like this. There's a large contact pad, which we often have perhaps for contact. And then it goes down, and it's a very narrow gate, which is what we were doing today, say 90 nanometer technology. Now, I go to put this in a plasma etcher. I'm etching something on the wafer, maybe not the gate, but something else. So I have all these arrows going down here. It's supposed to represent ions because I have a plasma current, so I have ions that come down, strike this-- they strike everywhere, but in particular, they strike this large area. Because this is a conductor, this current, with all these ions-- the flux of ions to this large area, constitutes a current. This current can then flow down the conductor, and where does it go? Well, if this is a thin oxide, it can tunnel through that oxide. So all of a sudden now, I could be forcing a very large current through a thin oxide, and you can end up with charges buried in that oxide and damage. And this is called an antenna because this thing can collect ions from a large area and funnel them down, acting like an antenna, and you can do a lot of damage. You have to be very careful. Yes? AUDIENCE: [INAUDIBLE] JUDY HOYT: Not necessarily. It depends on what I'm etching. This may not be the gate etched. For example, after you do a gate etch, you need to open up-- you may need to open up some other holes somewhere else. This could be happening during the gate etch. It could be happening during another step, but there are, nevertheless, ions coming in, in this particular example, into the gate. And so this may not actually be a gate etch step. It's just an example where you have the gate exposed, potentially collecting current. It can go through the oxide. So that's something we need to worry about when we have high density plasmas. Slide 5, so let's just talk about some of the basic concepts, and I'll go through the wet etching relatively rapidly. 
As I said, there are two types of etching-- dry, which means you use a plasma, and wet, which means you use a beaker or something in a solution. For wet etching, you typically submerge the wafers in some specific bath. The good thing about wet etching, it tends to be highly selective. It's all based on chemistry. There are no physical components, no bombardment. As a result, because it's based on chemistry, you can select the etchants in the solution so that they will etch exactly what you want. It's usually isotropic. There are a couple of what we call crystallographically dependent etches. KOH, potassium hydroxide etching silicon, is an example of that. There aren't a whole lot of cases of that, though. Most of the time, the etches end up being isotropic. Here's an example: wet etching of SiO2 by HF. So you add HF to SiO2. You end up with some extra water, and the SiO2 goes away. You can etch silicon in a combination of an oxidizer, like nitric acid, and HF. Sometimes you add acetic acid, so HF-nitric-acetic, but here's an example: wet etching silicon with nitric and HF. The nitric oxidizes the silicon. The HF strips the oxide, and you end up with overall etching of silicon. HF-nitric, though, is not that easy to mask because it also etches any oxide on the wafer. So you have to watch out. You'd have to use nitride to mask it, but those are some examples of wet etches. Slide 6, let's talk a little bit about isotropic etching and undercut. If we're doing isotropic etching, as I'm showing here in panel A at the top, layer 1 is supposed to be my mask. Material 2 is what I'm etching. It's isotropic, so it's etching vertically and laterally at the same time. I'm going to have some undercutting. And in fact, we can mathematically express the amount of anisotropy, how anisotropic the etch is, by defining this number here called a sub f.
You define this anisotropy factor, a sub f, to be 1 minus the lateral etched distance in a given time divided by the vertical. So in my diagram here, this would be 1 minus b over d: I etch laterally by a distance b and vertically by a distance d, and so that would be the anisotropy factor. So this anisotropy factor is 0 if you're isotropic etching, because then b equals d, r lateral equals r vertical, and af goes to 0. If you're totally anisotropic, b goes to 0 because you don't etch laterally at all, and this number goes to 1. Now, typically, if you're etching a film, what's a practical thing you need to think about? Well, you need to know the etch rate. That's fine. Let's say you know the etch rate, and I give you the film thickness, you can calculate how long you need to etch. You just divide the thickness by the etch rate, and that gives you the time. But that's not the time you should use for the etch in practice. In practice, you almost always do something called overetching. So you typically leave it in the bath longer than what you would need to etch the film thickness. Why is that? Well, basically, you need to ensure that you actually do completely remove the film, because if I tell you the film is 1 micron, first of all, you shouldn't believe me. I may not know what I'm talking about. It may not be a micron. And even if I do know what I'm talking about, films in reality are never exactly one thickness across the wafer. There's always variation. So you may have a slight high spot in the center or a high spot at the edge. Who knows? So you almost always do overetching, and that's what we're picturing in the upper right. What we've done here is show you the contour of what the etched region looks like for different times. So this is after a certain amount of time.
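The anisotropy factor a sub f just defined reduces to one line of code; a minimal sketch:

```python
# Minimal sketch of the anisotropy factor defined above:
#   a_f = 1 - b/d,
# with b the lateral and d the vertical etched distance. It is 0 for a
# fully isotropic etch (b = d) and 1 for a fully anisotropic one (b = 0).

def anisotropy_factor(b_lateral, d_vertical):
    return 1.0 - b_lateral / d_vertical

af_iso = anisotropy_factor(0.5, 0.5)    # isotropic: 0.0
af_vert = anisotropy_factor(0.0, 0.5)   # fully anisotropic: 1.0
```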
This other dashed line is a slightly longer time, and now you've already broken through. You've hit that, but maybe somewhere else on the wafer, you haven't quite etched the film. So you go a little bit longer, and you end up with the profile shown in the solid line here. So that's because we have done what we call overetching, and overetching has to be done in any practical process, either in research or in manufacturing, to take into account variations in not only the film thickness, but variations in the etch rate. The etch rate is not exactly uniform necessarily, depending on the etcher or even depending on the solution. For wet etching, the selectivity is usually excellent. I mean, we usually choose our etchants so that we get good selectivity, and again, selectivity is defined as the ratio of the etch rate of the film we want to etch divided by the etch rate of the film we don't want to etch. And chemical reactions tend to be very, very selective. So just to illustrate this idea of overetch and selectivity, I wanted to go through a quick example that I took from your text, and that's what's shown on slide 7 and 8. Here's an example of a question that you can consider. You have a relatively simple setup. You have a half micron layer of silicon dioxide on a silicon substrate, and you need to etch it down to the silicon. Now, you assumed that the nominal etch rate-- so you're putting this in some etcher, whatever it is. Let's say it's not HF, but it's some other type of etch. Has a nominal etch rate of r sub Ox, so that's a certain number of microns per minute that you've been told that the oxide etch is in this etchant. But there is about a plus or minus 5% variation in the oxide thickness. The oxide's not perfect. It was grown in a tool that gives you a 5% variation across the wafer. There's also a plus or minus 5% variation in the oxide etch rate. 
You're putting it in a solution, or let's say you're using a plasma etcher, it's not perfect, so it varies plus or minus 5% across the surface of the wafer. So the first thing we want to figure out is how much of an overetch is required, and we usually represent overetch as a percent of the total time. So if I'm going to etch for a minute to get rid of the film, and then leave it in an extra minute to make sure that I've taken care of non-uniformities, that's called 100% overetch, because it's 100% of the main etch time. I'm contributing 100% of that to the overetch step. So we want to know how much overetch is required, as a percent, in order to make sure that you etch all the oxide all the way down at every point on the wafer. So let's just go through part a first and how we would think about that. Well, I've just drawn a little cartoon down here. Basically, this green material is supposed to represent the oxide. It's nominally half micron on the wafer, but at the thickest point, it's thicker than that. It's 5% thicker, so it's 0.525 microns. So basically, the overetch has to be done to make sure that you remove the oxide at the thickest point for the slowest etch rate, so you take a worst case. So I take the thickest point, which is 0.525, and then the slowest etch rate, which is 0.95 times the nominal etch rate. So the time to etch this, worst case, the thickest region, assuming you had the slowest etch rate, is just this 0.525 divided by the slowest etch rate, so divided by rOx times 0.95. So the worst case etch time would be 0.553 divided by rOx minutes. So now, again, we express the overetch as a percent in time. So that's just going to be the worst case etch time, which we just calculated, divided by the nominal etch time. The nominal etch time would be the time to remove half micron at the nominal etch rate. So the nominal etch time is given by 0.5 divided by rOx.
So we just take the ratio of the worst case etch, the longest etch, to the nominal etch, and we get a 1.1, so that's about a 10% overetch, which isn't bad. 10%-- to do a 10% overetch these days is pretty darn good. Most etch-- most films have a greater variation than plus or minus 5% in a lot of plasma tools. Plus or minus 5% variation is conservative, so it's not unusual to see overetches of that length. Now, the second thing-- basically, I'm leaving that wafer in the plasma etcher or in the solution 10% longer than I need to, to remove the nominal film. The second question is, now we know that, what selectivity of the oxide etch rate to the silicon etch rate is required so that, at most, we remove 5 nanometers of silicon? Assuming we do the overetch that we calculate, assuming we do a 10% overetch in part a, so what we're asking here is this. I'm going to be etching down, and in some regions where the oxide is thinner, and because I have an overetch, I'm going to actually be exposing the silicon surface to the etchant. And it has a finite etch rate, but I don't want to remove-- I want to make sure I don't go into the silicon by any more than 5 nanometers. So that sets a requirement on my selectivity. So the selectivity is how etch how fast am I etching through the oxide divided by how fast I'm etching through the silicon. If that selectivity is too low, if it's 1 to 1, then during certain regions during the overetch, you'll be zipping through that silicon substrate at the same rate that you're etching here. So obviously-- and this is exactly the kind of practical calculation you'll need to know. If you have to say, oh, I don't want to attack-- I don't want to eat away the source drains when I'm etching the contact holes. Well, how much selectivity do I need if I'm only allowed to eat away, say, 5 nanometers, something like that, given a 10% overetch? So if we go to slide 8, uh-oh, we can think through that here in solution b. 
So what's the most amount of silicon you would remove? The worst case will occur under the thinnest oxide being etched at the fastest rate. That's the worst case, so I have the thinnest oxide on the wafer, and let's say, at that point, it happens to be etching the oxide at the very fastest rate. That gives you the quickest that you'll clear the oxide. At that very point in time, then, the etchant starts attacking the silicon. So the thinnest oxide is 0.95 times half micron, so it's 0.475 microns. The fastest etch rate is 5% faster than the nominal rOx, so that's 1.05 times rOx. So the time to etch through that oxide is 0.475, which is the thickness, divided by the fastest etch rate. So it's just this number here, 0.4523 divided by rOx. So after that amount of minutes, all of a sudden, in that one part of the wafer, the silicon is exposed. So now, the time that the silicon is exposed to the etch is equal to the total time, which is what we calculated before, 0.553 over rOx in part a, minus the time it took to etch the thinnest oxide. So that's the total amount of time, then, that the silicon itself would receive the etch, assuming it's etching at the fastest rate. So the time the silicon is exposed is just what we had from part a, the total etch time, 0.553 over rOx, minus the time it took us to get the silicon opened up in the worst case. So we end up with a silicon exposure time of 0.1 divided by rOx in minutes. That's how long the silicon, in the worst case, would be exposed. So now, we know what we have to calculate. We want to etch only 5 nanometers, which is 0.005 microns, of silicon. That gives me an idea of what the maximum etch rate of silicon can be. It's going to be that 0.005 divided by the time it's exposed, which we just calculated. So the maximum etch rate of silicon ends up, if you do the math, to be 0.05 times rOx microns per minute, so this is always in terms of rOx.
And so now I can calculate-- given this, I can calculate the selectivity of oxide etch rate to silicon, just like we did before. So I just take this rate rOx divided by this etch rate of the silicon and divide it out, and you end up with a rate-- the rOx's go out, you end up with this ratio has to be 20 to 1 or better. Yeah? AUDIENCE: [INAUDIBLE] JUDY HOYT: Which one? AUDIENCE: [INAUDIBLE] JUDY HOYT: Oh, yeah, the time silicon is exposed? All right, well, that's fine. You can do the math to kind of get the idea. But what this says is, any time you need to do an etch, you typically have a maximum requirement on the film below that you can remove, and then you have a certain overetch requirement, which is set by non-uniformities and other things that can be set by topography. That determines what kind of selectivity you need, so knowing what kind of selectivity you need, you can go out and figure out what etch it is you need to find in the plasma etcher. So typical selectivity requirements, we'll talk about this later next time, but for gate etch, typically you need 50 to 1, so something like that. 50 times faster etching the polysilicon compared to etching the oxide, and it can even be higher than that depending on topography and other requirements. That's a simple example. It just gives you an idea of how one thinks through these types of problems. So one problem that we haven't talked about too much here, but shown on page 9, slide 9, is something called mask erosion. The fact that, during the etch, the mask itself may get etched, not just the layer underneath. So the mask can be eroded during the etch, and this can happen in both isotropic and anisotropic etching. And so in this example, a, you can see that the mask was eroded by a certain width or etched off delta m here, and that can happen if you're doing isotropic etching or anisotropic etching. 
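The worked example above (parts a and b) can be checked with a short script; everything is kept in units of the nominal oxide etch rate rOx, as in the lecture:

```python
# Sketch checking the worked example above (0.5 um oxide, +/-5% thickness
# and etch-rate variation, at most 5 nm of silicon loss). Times are
# expressed as (time * rOx), i.e. relative to the nominal oxide etch rate.

t_nom = 0.5      # nominal oxide thickness (um)
var = 0.05       # +/-5% variation in thickness and etch rate

# Part a: overetch percentage. Worst case = thickest oxide, slowest etch.
t_worst = t_nom * (1 + var) / (1 - var)        # ~0.553 / rOx minutes
t_nominal = t_nom                              # 0.5 / rOx minutes
overetch_pct = (t_worst / t_nominal - 1) * 100 # ~10%

# Part b: required selectivity. Silicon is exposed longest where the
# oxide is thinnest and happens to etch fastest.
t_clear = t_nom * (1 - var) / (1 + var)        # ~0.452 / rOx minutes
t_si_exposed = t_worst - t_clear               # ~0.1 / rOx minutes
max_si_rate = 0.005 / t_si_exposed             # in units of rOx
selectivity = 1.0 / max_si_rate                # ~20:1
```

The numbers reproduce the lecture's results: about a 10% overetch and a required oxide-to-silicon selectivity of about 20 to 1.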
Again, mask erosion is bad because the feature size that you get ends up being different from what you would have wanted on the mask, and it can even end up affecting the shape sometimes of the feature that you get. So mask erosion is something we want to avoid. If you want to know about chemical etchants, you can look in table 10.1 in your text, and there are some common ones listed. There are websites you can go to for common etches, but again, as I mentioned, because they're isotropic, wet chemical etches are very rarely used in mainstream IC processing, maybe for manufacturing some MEMS devices or some very specialized devices, but for mainstream CMOS, you don't typically find them. The only wet etch that you typically do find is HF, but that's not used for pattern transfer. HF is usually used as a dip during the RCA clean to remove a very thin layer of oxide. But that's not patterning. That's just cleaning off the surface. So we're going to spend most of the discussion of chapter 10 then on this plasma etching, or this gas etching. It's fast and simple in most cases, and the real benefit is it's more directional. You can get anisotropic etching. Plasma etching can have both chemical effects, so very highly reactive species can be created in the plasma, and it can also have ionic effects, which would be directional, so a sputtering action as you accelerate ions towards the surface. So both of these can play a role, and depending on the ratio of the chemical effect to the physical effect, you'll either get less or more anisotropy. Here's an example of a plasma system. This should remind you very much of the plasma enhanced CVD. The only difference is we're not depositing. We're actually removing material. Here's a typical RF powered system.
Here, you have a gas coming in, and you have an etchant gas, for example, could be CF4 or O2 or some combination coming in, creates ions in the plasma that are highly reactive that can go down and end up etching off species from the wafer surface. Here's just an example shown on slide 11. Again, this should be very much analogous to what you read about in chapter 9 on plasma enhanced deposition. Again, we have-- when we have a plasma, the plasma tends to self-bias because of the differences between the mobility of ions and electrons. The potential of the plasma tends to be positive. And so also, the smaller electrode tends to have a higher voltage drop near it to maintain current continuity, so the trick in the plasma is you put the-- if you're doing etching, you put the wafer here, the target here, to be the smaller electrode. There's a large voltage drop then on the electrode towards the left here. This voltage drop will accelerate the ions towards it, and you can get etching. If you're doing sputter deposition, you do just the opposite. You put the wafer over here on the large electrode. You sputter away the aluminum target, and it goes to the wafer, but I'm trying-- again, now, I'm trying to etch material on the wafer, so you put the wafer on the smaller electrode during RF plasma etching. Slide 12 shows some typical reactions and species that might be present in a plasma. For plasma etching, a particular example, assuming we're flowing CF4, which is a very common species used to do etching. Here, there are different types of processes. You can get ionization, dissociation. So the CF4 can be banging around into electrons, and you can dissociate it into a free radical, CF3, and free fluorine, both of which can be highly reactive. And then this, you can have ionization of those species to create CF3 ions or fluorine ions, so you have all these different both neutral and ionized and very reactive species present.
Typically, we have something like 10 to the 15th per cubic centimeter of neutrals. 1% to 10% of those neutrals may be free radicals. Remember, a free radical is highly reactive. It's got an unsatisfied bond, and so it's a highly reactive species. It's very important for etching. And then you have a certain number of, say, 10 to the 12th or so per cubic centimeter of ions and electrons. This is in a standard parallel plate plasma system. In a standard system like this, just two parallel plates with an RF voltage, the plasma density, so the number of these reactive species per unit volume, is usually determined by the voltage drop, the sheath voltage, so to speak. So increasing the power of the plasma increases the plasma density. So you get more species and a higher etch rate, but you also get a higher voltage drop between the plasma and the wafer, so you get more damage. So there's sort of-- in this type of reactor, you don't have as many knobs as you might like, so there's a trade off here. You can etch faster by tweaking up the RF power, but you have to be careful because you're also increasing-- tending to increase the ion energy. And increasing ion energy may or may not be desirable from a damage point of view. So I wanted to go on slide 13 and talk about the different plasma etching mechanisms, and there are basically three. There is chemical etching, which just like in wet etches, there are chemicals or species present, which do isotropic etching. They etch the same in both directions. These would tend to be-- typically not ionized, not ions. But they give very good selectivity because they have chemical specificity. There is physical etching, which is anisotropic. It tends to be ions that are accelerated down towards the wafer, and so they have a directionality associated with them, but less selective. If you're just banging down on the wafer, it doesn't matter what material you bang on. You're going to remove some of it.
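Putting those quoted densities side by side makes the hierarchy clear; this is just arithmetic on the numbers above.

```python
# Rough plasma composition from the lecture, all per cubic centimeter
n_neutrals = 1e15                    # neutral gas species
n_ions = 1e12                        # ions (and a matching number of electrons)
n_radicals_low = 0.01 * n_neutrals   # 1% of neutrals as free radicals
n_radicals_high = 0.10 * n_neutrals  # 10% of neutrals as free radicals

ionization_fraction = n_ions / n_neutrals
print(ionization_fraction)              # 0.001: only about 0.1% of the gas is ionized
print(n_radicals_low, n_radicals_high)  # radicals outnumber the ions 10x to 100x
```

So the plasma is overwhelmingly neutral, and the chemically active free radicals are far more plentiful than the ions.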
Ion enhanced etching is sort of a combination that we'll talk about with our ionic species that are present that also have chemical action. So these are the keys. This is where we get to reactive ion etching. You can get both anisotropy and selectivity to a certain extent at the same time. Most etches today use reactive ion etching or ion enhanced etching, but let's go through these three different mechanisms. The first is chemical. As you might imagine, it's typically etching by neutral species, free radicals, so for example, you're dissociating the CF4 and creating CF3, which is very reactive, and free fluorine, which can then-- this free fluorine can come down to the surface, react with the silicon at the surface, and create SiF4, which is volatile. You can add things to this to etch silicon a little bit more efficiently. You can add oxygen, O2, and that helps react with the CF3 and reduces the recombination of CF3 plus fluorine to go back to CF4. So that helps-- adding oxygen tends to push this more to the right, so you get a higher etch rate. So a very common chemical etch that is fairly isotropic is to flow CF4 plus O2. That etches silicon in a reasonably isotropic manner because it doesn't depend much on ions. It depends just on this surface chemistry. So as I indicate on slide 14, these processes, these chemical equations, are really just chemical. They are isotropic and selective like wet etching. So here's an example of, you have some species being created in the plasma. They are transferred to the surface. They adsorb on the surface. We have this free radical. They react with the surface or the film, and then they come off as a byproduct. Just like we talked about in deposition, we usually characterize the arrival angle distribution of these etchant species coming down. Usually, there's some kind of cosine theta to the n; if it's isotropic, n equals 1, and a certain sticking coefficient, usually very low.
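The effect of that exponent n can be made concrete. Integrating a cos^n(theta) flux over solid angle gives a closed form for the fraction of flux landing within a given angle of normal; the n = 50 case below is just an illustrative stand-in for a directional ion flux, not a number from the lecture.

```python
import math

def flux_fraction_within(theta_deg, n):
    """Fraction of a cos^n(theta) arrival flux landing within theta_deg of normal.
    Integrating cos(theta)**n * sin(theta) d(theta) and normalizing over the
    hemisphere gives 1 - cos(theta)**(n+1)."""
    return 1.0 - math.cos(math.radians(theta_deg)) ** (n + 1)

print(flux_fraction_within(10, 1))    # isotropic (n=1): only ~3% arrives within 10 deg of normal
print(flux_fraction_within(10, 50))   # directional (n=50): over half arrives within 10 deg
```

Going from n = 1 to n = 50 takes the near-normal fraction from about 3% to about 54%, which is why a large n corresponds to anisotropic etching.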
The surface reaction takes some time, and you can get some desorption. But you can use this type of picture to set up mathematical models of the process. So that's chemical etching, and again, you're just creating free radicals. It's not that much different, in some ways, from putting the etchant in a beaker of wet chemical. On slide 15, I talk about physical etching, so what is physical etching? Well, ion etching, for example, on the right, you could have, in the plasma, there are ionic species. The plasma-- there's a voltage drop between the plasma and the surface, so these ions can be accelerated towards the surface. So it tends to be more directional. There's an electric field or a voltage drop, as we mentioned, across the plasma sheath right near the surface. We often model this as a sticking coefficient close to 1. The ions don't go [INAUDIBLE]. They just come in. You assume they come in. They react, or they come in, and they knock off whatever's on the surface. Typically, in a physical etching mechanism, you might have species like CF4 plus. Again, it's not really chemical. It's just coming in as a large massive molecule comes in and can knock something off, or argon itself, which can remove material by sputtering. So it's not-- it tends to be not very selective. If you're doing purely physical mechanisms, it tends to remove the photoresist or erode the photoresist. It's hard to get selectivity because most sputter rates of all the elements are about the same. They're not that much different. They can vary by a factor of 2 to 4, but we just did an example where we needed a selectivity requirement of 20 to 1. And other etches, as I mentioned, need 50 or 100 to 1, so purely physical etching is not going to give you that. But it is very directional. It is very anisotropic. The other problem with it is, of course, you can damage the surface because you're putting very energetic species in, ion implanting them into the near-surface region.
So the third type-- we did the chemical etch mechanisms, the physical mechanisms. The third type is this sort of ion enhanced etching, and it's the most confusing one, but it's also the most important. It's been observed, a lot of times, that the chemical and the physical components of plasma etching are not independent, that they interact with each other in terms of the etch rate and the resulting etch profile. And there are a couple of-- many examples of this. I'm just showing one here. This is a particular example of etching silicon, and this is a plot of the etch rate in silicon on the vertical axis in nanometers per minute as a function of time. And there are different gases flowed into the plasma etcher. In the beginning, you're flowing xenon difluoride gas only. You do have a silicon etch rate, but it's slow. Now, you add an argon ion beam, so you're adding some physical component. You're adding some argon in there, and all of a sudden, the etch rate goes up by a factor of 5 or 10, something like that. So you get a large increase in etch rate when you have both an argon plasma ion beam plus the xenon difluoride. And then you turn the argon off-- or you turn the xenon difluoride off, and you have argon only. And you get a little bit of sputtering, but not much. So really, you only get fast etching, in this particular case, when you have both the ion beam and the xenon difluoride present at the same time. So there's a synergistic effect going on, and this is not uncommon. It's very typical. We'll talk about some mechanisms. The nice thing about this reactive ion etching, or RIE, as people call it, is the etch profiles can be very anisotropic, and you can also get selectivity. So you can try-- you can get the best of both worlds. How does it work? Well, there's a lot of different mechanisms. I don't think people know a lot of the details, but there are-- people try to model it.
But some examples of these synergistic mechanisms are shown below, for example, here on page 17 on the left. So you have some reactive neutral species that's in there. You have the ionic species at the same time. Well, what might be happening is there could be a chemical etch that's going on at the surface based on, say, free fluorine, or whatever it happens to be. So there's a chemical etch going on, but the rate of that chemical reaction may depend on the bombardment by ionic species. So there's sort of a synergy going on here. In the presence of this bombardment, the etch rate goes up. On the right, this is a little bit easier to understand. You might imagine that, during this etching, very often what happens is inhibitor layers form. So you'll have reactions that take place on the surface, but you have a lot of carbon, the CF4. A lot of the etchants have carbon in them. They tend to form some kind of polymer that sits-- a monolayer or two of polymer that sits on the surface. These polymers are there, and they inhibit the next reaction from taking place. In some ways, they inhibit the etch rate locally wherever they exist. But now, if you have ion species around, they can come in and bombard the polymers, and remove them only where they hit right at the bottom of the trench. The inhibitor layer can still be left on the sidewall. So here's an example where you have both chemical etching going on. You have the formation of an inhibitor layer, but only where the ions come down and hit do you remove the inhibitor, and then you get etching only at the bottom of the trench. So here's an example of reactive ion etching. The presence of these little byproducts, which you might think are, oh, these are just parasitics to the whole process. They're the name of the game when it comes to doing reactive ion etching and getting a very anisotropic profile.
You want inhibitor layers to form on the sidewall because you don't want any sideways etching, but you don't want them to form on the bottom. Well, they'll still form on the bottom, but if you have ionic species coming down, you can remove them. So this is how people do reactive ion etching and get both selectivity and anisotropy, and the reason you get selectivity is because there's still a chemical reaction going on here. So it's not just etching-- it's not just sputtering. It's a chemical reaction and a little bit of removal of the inhibitor together that count. So as you see on slide 18, this ion enhanced etching or reactive ion etching is hopefully the best of both worlds. You get good selectivity, and you can get good directionality. So although we may not know the exact mechanism, I kind of prefer, in my own mind, this inhibitor layer removal mechanism. The two components, that is the physical bombardment part and the chemical reaction etch part, act in series. So you get this anisotropic etching with very little lateral undercutting because of the directed ion flux. So RIE is one of the most common forms of etching that you will see today because you can get, to a certain extent, the best of both worlds. Slide 19, let's talk a little bit about how to control the shape, or what are some of the issues. Even with ion enhanced etching and this chemical etching, the slope of the resulting sidewall is not always perfectly vertical. There are some tricks. It needs to be adjusted empirically. The ion flux may not always be perfectly normal to the surface. The ion flux may be coming at a slight angle. That can cause bowing of the sidewalls of whatever you're trying to etch. You can also get sloped sidewalls when the inhibitors, formed during the etch, have a high deposition rate relative to the etch rate of the inhibitor and the substrate. That's a little tricky, but on the next slide, I'll show you an example of how we get sloped sidewalls.
So if you change the relative inhibitor deposition and etch rates and the substrate etch rate by changing the chemistry or changing the specifics of the plasma conditions, you can control the slope of that sidewall. So I just want to show, on slide 20, an example of how that works, and we'll kind of walk through these. There are two cases. One case on the left, I'm assuming-- we're assuming that the inhibitor deposition rate is very fast compared to the etch rate. On the right, the inhibitor deposition rate is relatively slow compared to the etch rate of the film, so let's see what happens in both cases. If we start on the left, here's my mask. Here's the film I'm trying to etch. Now, just don't get confused here. The inhibitor deposition process and the etching process are happening simultaneously. I'm just showing them in separate views, separate steps, to help us think through it. So let's just say, in the first time step, the first couple of seconds, the inhibitor deposits everywhere. So you have this inhibitor formation. Now, in the next time step, the inhibitor is removed, and you etch. So here, the inhibitor DEP rate is fast compared to the etch, so I don't etch very much of the silicon. Just a little bit, whereas on the right, I etched quite a bit because this etch rate of the silicon is much faster than it is in this case. Now, I form, again, in the next step, a little bit of inhibitor deposition here on both the left and the right. Here, the inhibitor DEP rate is pretty fast, so I have a thick inhibitor. Now, during the etch part, I etch just a little way, whereas here, look, I etched a long way. And you continue on doing that, and you can see that, in the left case, you're going to end up with a more sloped sidewall because the inhibitor is going down much faster than you are etching into the silicon. So you end up with this-- there's a certain amount of lateral etching that takes place, and so you end up with this kind of sloped sidewall.
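These alternating dep/etch steps reduce to simple geometry: per cycle, the wall position steps sideways by roughly the inhibitor thickness deposited while the floor steps down by the etch depth. Here is a deliberately crude sketch of the two slide-20 cases; the per-cycle numbers are made up purely for illustration.

```python
import math

def sidewall_angle_deg(inhibitor_dep_per_cycle, etch_depth_per_cycle):
    """Toy model of the slide-20 picture: each cycle the opening narrows by the
    inhibitor thickness deposited on the sidewall while the trench deepens by
    the etch depth, so tan(taper from vertical) = dep / etch."""
    return math.degrees(math.atan(inhibitor_dep_per_cycle / etch_depth_per_cycle))

print(sidewall_angle_deg(5, 5))    # fast dep relative to etch: 45 deg taper, a sloped wall
print(sidewall_angle_deg(1, 20))   # slow dep, fast etch: under 3 deg, nearly vertical
```

Small changes in the dep-to-etch ratio swing the taper angle over a wide range, which is why small gas-flow changes can tune the sidewall slope.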
Whereas in this case, where you've adjusted the inhibitor DEP rate, it's relatively slow compared to the etch rate, or the etch rate is relatively fast if you want to think of it that way. You end up with a much more vertical profile. So the angle of the vertical profile depends on this ratio of how quickly the inhibitor forms the polymer, and how quickly it ends up being etched away, and the etch rate of the silicon, or whatever you're trying to etch at the same time. So very small changes. Usually, you do this by flowing multiple different types of gas and things. Very small changes in those gas flows can then change the angle that you achieve, and obviously, people try to tune this to get the right angle for their particular application. So that's just an example of how RIE works and how you can control that angle, and that's a pretty critical thing to control these days. So slide 21, now, I want to go through a couple of different examples of the types of etching systems, and then we'll get back more into how we can do the modeling. Over the years, a lot of different configurations have been developed, some of which make use primarily of chemical mechanisms, some physical, and then a lot of them, primarily, these days, use this ion enhanced or RIE mechanism. The most old fashioned type you will find in pretty much any thin film lab is an old fashioned what they call barrel etcher. It's called a barrel etcher because it's shaped like a barrel, a cylinder. And here, the outer circumference of the etcher is one of the electrodes, and then you have some kind of shield in the center as well. And you have an RF bias then here between one side of the barrel and the other, and what you end up with, purely because of the geometry and the chemistry of what you're using, is purely chemical etching.
So you're primarily creating the free radicals that we talked about earlier that participate in these chemical reactions, and barrel etchers are used today a lot in fabs, but not for anything critical. They're usually photoresist strippers, so a common way to strip photoresist off of wafers is to put them in a barrel etcher with an oxygen plasma. Photoresist reacts as an organic. It reacts very quickly with the oxygen, and this process strips the resist without attacking much else on the wafer. And this is called ashing, just commonplace. So nothing-- the shape and design are not real critical. It's just a way to strip off certain species such as a photoresist. The next level of complexity in a fab that you might find is shown on page 22, a system called the parallel plate system operated in the plasma mode as opposed to an RIE mode. So here's a plasma mode parallel plate system. We have two parallel plates that form the electrodes, one with an RF power input, and then there's another electrode on which the wafers sit. And you put in some gases like CF4 and O2. They have roughly equal areas, or maybe the one that's grounded in the chamber might be slightly larger. Here, the sheath voltage may only be 10 to 100 volts, relatively low, so the energy with which these things come down is relatively moderate, say 10 to 100 electron volts. So there's not a whole lot of ionic component. You're not getting very high acceleration of the ions. It's primarily chemical, so a parallel plate system in the plasma mode is pretty much still chemical. The etching tends to be fairly isotropic. It etches down and sideways about the same, and it tends to be quite selective. Again, it's mostly chemical type of etching, so that's a parallel plate system in a plasma mode. Now, there's a second way to operate the parallel plate system shown here on slide 23, parallel plate in the RIE mode, Reactive Ion Etching. In order to do this, we need to get more directed etching.
We need to have a stronger ion bombardment. We need to have a higher voltage drop between the plasma and the wafers. So here, you make the wafer electrode much smaller than the other electrode. So the wafers sit on the smaller electrode. Here, the voltage drop across the sheath, instead of being 10 electron volts, is like 100 to 500 electron volts, so 10 times higher at least, maybe more, 20 times higher. So you have much greater energy of the incoming ions. You also have lower pressures to get more directional etching, maybe 10 millitorr. Lower pressures means, what, a longer mean free path, less randomization, so the ions again get accelerated towards the wafer, very directional. So this tends to be a more physical component than for the plasma mode. There's better directionality. A little bit less selectivity, however, for the RIE mode, but you can-- this is what you trade off. You trade off directionality for selectivity to a certain extent in these systems. Damage is going to be worse. A little bit more damage because you have higher energy ions. Oh, on slide 24 is just an example of a photograph. It's very old fashioned, way back 20 years ago, but still found in some labs is the Applied Materials 8100 Dry Etcher. This gives you anisotropic etching, this particular one, of films like silicon nitride, silicon dioxide, and polymer layers. You can relatively control the oxide slope, the slope of a contact hole, the slope angle, by using sidewall polymer deposition, just like we talked about in that example of the inhibitor layer. The inhibitor layers are typically polymer, so sometimes people will call them sidewall polymers. The chemistry is fluorine based in this particular unit, so you would use things like CF4, oxygen, SF6, anything with fluorine in it, creating very reactive fluorine species. The inside electrode where the wafers sit is hexagonally shaped.
The outside barrel, which is shown here, is the outer electrode. So the wafers sit on the inside electrode, which is smaller, and so you operate in the RIE mode. There's a fairly large voltage drop across the sheath near the wafer surface. So that's kind of old fashioned these days, although you'll see them in a lot of university labs, and even some fabs. There are RIE etchers here at MIT in the clean room here. The most sophisticated type of system today is a little different, though. It's shown here on page 25. It's called the High Density Plasma etch system, or an HDP, and what this does is it uses a remote non capacitively coupled plasma source, such as a microwave electron cyclotron resonance or inductively coupled plasma. So the mechanism by which you create the ions, that mechanism is separate from the mechanism by which you bring them to the wafer. So you have different voltages. Before, in the old fashioned RIE type, basically, if you want to get a higher plasma density and a higher etch rate, you drive up the voltage here, but that also means you have a bigger voltage drop to the wafer. So the plasma density and the damage to the wafer-- or the voltage drop here-- are directly related. You cannot decouple them very effectively. In HDP, High Density Plasma, these things are somewhat decoupled, so you have a separate RF source here that biases the wafer. So this separates the plasma power, or the density of the plasma, from the wafer bias, the accelerating field. And so now, you can get reasonably high etch rates, but not completely damage the wafer surface. So some of the characteristics are listed here on slide 26. You can get very high density plasmas, maybe 100 times higher density than you can achieve otherwise. You get faster etching. The pressures tend to be lower, even in the 1 millitorr range, 1 to 10.
To get higher ionization efficiency, longer mean free paths, more anisotropic, more directional etching, so things do come down to the surface perpendicular. The nice thing about these systems is you get a high etch rate, pretty good selectivity, good directionality, so very vertical sidewalls. At the same time, you're trying to keep the ion energy and the damage low because you have a separate power supply to create the plasma from the power supply that accelerates everything towards the wafer. So if you need to etch deep trenches, this type of thing, high density plasma is perfect. If you are doing optoelectronics, and you want to make waveguides, where you need to etch through microns of material with extremely vertical, perfectly vertical sidewalls, high density plasmas are perfect for that. Very high etch rate. Good control of the directionality, and you don't damage the surface too much. Here's an example, actually, on slide 27, of an HDP system. It's a fairly old one now. This was back in the mid 90s, this was popular, made by a company called Lam in California on the West Coast. It was called the TCP 9400. It's, again, a little bit obsolete, but it was developed for high density plasma etching of polysilicon gates, specifically to etch the gate electrode and to stop on silicon. So it was designed to give you very vertical sidewalls for gate etching, but not etch too much of the gate oxide underneath, so you get pretty good selectivity. So this etcher tends to be dedicated. In many fabs, they have a dedicated etcher just for gate etching and not for anything else. So those are examples of RIE etchers and high density plasma. They're all over the place in fabs. There is one other type of etching, not that common, but people do it, shown here on slide 28, and that's sputter etching or even ion milling. And this is purely physical. It's highly directional. It doesn't have much selectivity.
As I mentioned, the sputter rates of the elements don't vary very much. There's not a lot of chemistry going on. The nice thing about it, though, is you can etch almost anything. For certain materials, it's actually hard to remove them because there isn't always the right etch gas. But if you need to get the film off somehow, if you use a physical mechanism, you can sputter it off. Sputter etching almost always uses a heavy molecule or a heavy species like argon, an argon ion. The bad thing about it, of course, is it damages the wafer surface. It can damage devices. It can also do things like produce trenching, which is a non-desirable sort of effect. Here in panel a, I show what trenching is. So this could be, here, let's say you're masking this, and you're trying to etch down here through this layer. Well, if you're using sputtering right near an edge, you can get the ions kind of bouncing off near this edge, and they can then come at a certain angle where they actually produce a little trench right near the edge of a feature. So you get enhanced etching right near the edge of a feature. You don't necessarily want that. You'd like to have the etch be straight across, so trenching is common. Ion bombardment damage, you can get redeposition, so this can come in here, sputter off this film, and redeposit those atoms down here, where you don't necessarily-- you may not want them. There can be charging that happens if you have an insulator up here. And then as these ions come in, that can distort the ion path, and so your trenches can bow out, for example. So the reason we bring up sputter etching and ion milling is not because people use them that much, the pure sputter etcher, but because in any etcher system where there is a very strong physical component, a certain amount of this sputtering will take place. And if you're not careful, you can end up with trenching and things like that.
So if you turn up the physical knob too much on your etch, you can start having things like this, which are indicative that you're having a little bit too much sputter etching going on. So slide 29 is kind of a bit of a summary of the different plasma etching types of systems. Here at the top are the most physical processes listed. At the bottom are purely chemical processes, so we go from sputter etching and ion milling, which is completely physical, to high density plasma, which is physical plus chemical. RIE, a little more chemical. Plasma etching, totally chemical. Wet etching is totally chemical. There's no physical process-- no bombardment inside the beaker. So if we go from here to here, we think about it, the pressure is going down if I'm going in this direction. Anisotropy, so more anisotropic, goes in this way. So wet etching is totally isotropic. This sputter etching tends to be very anisotropic. Selectivity actually increases down this way, so wet etching is much more selective. Sputter etching is totally non selective, and the energy of the process tends to be higher here for these. A lot more energy and damage than in wet chemical etching. The energies there are just a few electron volts, or half an electron volt, just chemical reactions. Just keep that picture in mind when you're looking at the different types of plasma etchers in the fab and when you choose which plasma etcher you want to use. This slide just summarizes the different types of processes, all of which can be occurring during plasma etching, depending on the particular etcher that you're using, just as an example. So I'm assuming this top layer is my mask, and this is the film that we're trying to etch. There can be ionic species present, of course, that can cause some sputtering here and trenching. There are reactive neutral species, like free radicals. They cause the chemical part of the etching right here, which can be isotropic and lead to undercutting.
There can be mask erosion because the mask is attacked by the etchant, and that can change the shape eventually of what you end up etching. Probably the most important-- one of the most important features on this whole slide, you can barely see, I guess we should have highlighted it here, is this little thin slim area called sidewall inhibitor deposition. That's probably the most mysterious and yet the most important part of reactive ion etching today. Without that, we could not do CMOS as we know it today. The sidewall inhibitor deposition, what is it made of? Well, people don't actually know. Maybe some of the etched byproducts. It actually also depends on mask erosion. It turns out that the rate at which you form this inhibitor depends on the total amount of photoresist exposed that is on the wafer. So people will develop a process using resist as the mask. It works perfectly. They get nice vertical sidewalls, good sidewall inhibitor formation. Now, they go and they say, I don't want to use resist as my mask. I want to transfer the mask pattern to an oxide, and use oxide as the mask. You might try that. So they no longer have resist in the etcher when they're etching the poly or whatever. All of a sudden, the sidewall inhibitor doesn't form because there's not enough carbon in the etcher. You don't have photoresist. So sometimes, the presence of photoresist, you want a little bit of mask erosion just to contribute to the side wall inhibitor. The key about the sidewall inhibitor is that it's removed on a flat surface because the ions come down, and they can knock it off. It tends to not be removed on a sidewall. That's why it's called a sidewall inhibitor. This is a possible mechanism why ion enhanced etching gives you such nice vertical profiles, because of the formation of that sidewall inhibitor. So those are some examples of all the different things that can happen during plasma etching. Slide 31, I'm listing some manufacturing issues. 
These are fairly practical things, some general plasma etching conditions. So what parameters do you have-- if you're going into the lab at MTL and you need to use an etcher, what knobs can you turn that you can directly control in a standard parallel plate system? Well, you have the RF power. That's generally a knob you can turn. You have the pressure, to a certain extent, in the plasma etcher. You can control the gas composition-- which species you decide to put in and their flow rates. So in a standard parallel plate system, you may have power densities in this range, 0.1 to 5 watts per square centimeter. If you want to increase the etch rate, you can turn up the RF power, so you get a higher density plasma. You'll get a higher self bias. That means the voltage drop between the plasma and the wafer or the electrodes will increase. That's going to increase the ion energy at the same time. In an HDP system, you have an extra knob. You have a separate power supply for the plasma and for the wafer bias, so you can get a very high plasma density without necessarily getting such high ion energies. Plasma densities are much higher here, up to 10 watts per square centimeter. The power density at the wafer, though, and the ion energies-- look at them. They can be much lower, maybe only 10 to 100 electron volts here. So faster etch rate with less damage to your wafer in an HDP system. How about the pressure ranges? These are very important, shown on slide 32. For reactive ion etching, the reactors are typically designed for 10 to 100 millitorr, and an HDP system would drop that to the range of 1 to 10 millitorr, so maybe a factor of 10 lower. So if you have a higher pressure, you're going to get more gas phase collisions. This decreases the directionality-- or, the other way around, HDP has a lower pressure, longer mean free path, more directionality. But you also can pay a price in terms of the plasma density.
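To put rough numbers on the pressure-directionality tradeoff, here is a small kinetic-theory sketch of the mean free path, lambda = kT / (sqrt(2) * pi * d^2 * P). The molecular diameter and gas temperature are assumed round numbers for illustration, not values from the lecture:

```python
import math

def mean_free_path_m(pressure_mtorr, temp_k=300.0, molecule_diam_m=3.7e-10):
    """Kinetic-theory mean free path: lambda = kT / (sqrt(2) * pi * d^2 * P).
    molecule_diam_m is an assumed, generic molecular diameter."""
    k_b = 1.380649e-23                 # Boltzmann constant, J/K
    p_pa = pressure_mtorr * 0.133322   # 1 mTorr = 0.133322 Pa
    return k_b * temp_k / (math.sqrt(2) * math.pi * molecule_diam_m**2 * p_pa)

# RIE-range pressure (~50 mTorr) vs HDP-range pressure (~5 mTorr)
lam_rie = mean_free_path_m(50)
lam_hdp = mean_free_path_m(5)
print(f"RIE ~50 mTorr: {lam_rie * 1e3:.2f} mm")
print(f"HDP  ~5 mTorr: {lam_hdp * 1e3:.2f} mm")
```

Since lambda scales as 1/P, dropping the pressure by the factor of 10 mentioned here stretches the mean free path by the same factor of 10, which is exactly the extra directionality an HDP system buys you.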
Usually, if you have a higher pressure, you'll get a higher plasma density up to a certain point. Above a certain pressure, you get a lot of collisions between the gas molecules and the electrons, and that limits the energy of the electrons, and that limits the overall ionization rate. So pressure is an important knob that can be used to control not only the etch rate in the plasma etcher, but also the sidewall profiles and things like that, and selectivity. One thing we haven't talked about much-- I've sort of implied implicitly here that, in most people's minds, plasma etching is a room temperature process, and that is more or less true. With certain exceptions, like aluminum etching, we don't usually intentionally heat the wafer; most of the time, we don't intentionally raise the temperature of the etch system. However, the plasma puts a lot of energy into the process. You have a pretty high voltage drop. You have a large current flowing in there. V times I is power, so you're dumping power into this little space where your wafers are sitting. What does that mean? That power has to be dissipated somehow, and it comes out, a lot of the time, as heat. So the gas can get hot, and the wafer can get hot. But we don't usually need to heat the system itself in order to increase the etch rate or improve the process. For aluminum etching, they sometimes do heat the system up to maybe 50 degrees C or so, and that's to keep the species volatile, because it turns out the species that tend to be formed during the aluminum etch tend to not be very volatile at room temperature, and you'll never get them off. You'll get these inhibitor layers, and the etch will slow down. So to remove byproducts, sometimes you will see people heating the wafer. This unintentional heating, though, is much more of a problem. The wafers can get up to 100 degrees C without too much trouble.
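A worst-case energy-balance sketch shows why. Assume all of the bias power at the wafer goes into heating it, with no cooling path at all-- a deliberately pessimistic toy model, with assumed wafer dimensions, not a real etcher simulation:

```python
import math

# Toy energy balance for plasma-induced wafer heating (assumed numbers):
# all bias power is dumped into a 200 mm silicon wafer with no heat
# removal, so this is a worst-case heating rate, not an etcher model.
def heating_rate_k_per_s(power_w_cm2, wafer_diam_m=0.2, thick_m=725e-6,
                         rho_kg_m3=2330.0, cp_j_kgk=700.0):
    area_m2 = math.pi * (wafer_diam_m / 2) ** 2
    power_w = power_w_cm2 * area_m2 * 1e4   # W/cm^2 over the wafer area
    mass_kg = rho_kg_m3 * area_m2 * thick_m
    return power_w / (mass_kg * cp_j_kgk)   # dT/dt = P / (m * c)

rate = heating_rate_k_per_s(1.0)            # 1 W/cm^2 at the wafer
print(f"~{rate:.1f} K/s with no heat removal")
print(f"time to rise 75 K (room temp -> ~100 C): ~{75 / rate:.0f} s")
```

Even at a modest 1 W/cm^2, an uncooled wafer would climb toward 100 degrees C in well under a minute in this sketch, which is why the chuck cooling and clamping described next matter so much.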
This needs to be controlled very carefully these days because that sidewall inhibitor deposition tends to go down as the temperature goes up. So you never want to have the wafer getting so hot, self-heating, that the sidewall inhibitor starts evaporating away. Next thing you know, you lose your sidewall inhibitor, and all of a sudden you have a very poor looking profile, very isotropic. And that can happen unintentionally just because you turn up the power a little bit too much, and all of a sudden, you're putting in enough power that the wafers are heating. Or you change wafer size or something in the plasma etcher, and you get too much heating, not enough sidewall inhibitor, and then your anisotropy goes to pot, so you're in trouble. So these days, the trends are to have better wafer temperature control. People have heat removal at the chuck. They have very careful clamping of the wafer, tightly controlled onto the chuck. Sometimes, they have helium gas flowing on the backside of the wafer to try to take heat away, just so you can control it. You don't want this, especially if you're running the reactor through 25 wafers. The first wafer might be kind of cool, but by the time you get to the last one, the whole thing has heated up. And next thing you know, from wafer to wafer, you're getting a lot of variation in the etch profile. So controlling the temperature is also more and more important these days. Loading effects-- there are things called macroscopic loading effects, and this is because you can deplete the etchant species across the wafer, or across the whole etcher if you're doing more than one wafer at a time, so you'll see this. You put one wafer in. You measure the etch rate. Everything looks great. You have it. Now, you put all 10 of your wafers in. All of a sudden, the etch rate goes way down, so now, you've screwed up your etch.
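One classic empirical way to describe this depletion is a loading equation in the spirit of Mogab's: the etch rate falls off as the total etchable area grows. The rate and fitting constant below are made-up illustrative values, not data from any particular etcher:

```python
# Empirical loading-effect model (in the spirit of Mogab's loading
# equation): etch rate falls as the total etchable area goes up,
# because the wafers deplete the reactive species.
# R(N) = R_empty / (1 + k * N), with k a fitted, machine-specific constant.

def etch_rate_nm_min(n_wafers, r_empty=500.0, k=0.3):
    # r_empty and k are made-up illustrative values, not data
    return r_empty / (1.0 + k * n_wafers)

for n in (1, 5, 10):
    print(f"{n:2d} wafer(s) -> {etch_rate_nm_min(n):6.1f} nm/min")
```

With these toy numbers the rate with 10 wafers is only about a third of the single-wafer rate, which is exactly the surprise described above-- and why you calibrate with the same load (dummies included) that you intend to run.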
So in fact, when you measure the etch rate, you should have put 10 in-- 10 dummies-- because otherwise, you can have this loading effect, and you don't really measure the etch rate appropriately. So it's kind of difficult to control. People get around it just by always using the same number of wafers with the same amount of material exposed every time. If you change that, you need to remeasure your etch rate. It's a pain, but it's a practical reality. That's not that hard to solve. This one, though, on page 35, this micro loading, it's a little tricky. And this refers to the fact that the etch rate can vary over very small distances on the surface of the wafer. And why is that? Well, it's because the density of the open area-- the area that you're etching, the area that's reacting-- can vary over small distances depending on the chip design. In one area, you may be doing a lot of etching in the chip. In another area, it might be mostly photoresist. So you have this sort of micro loading effect, so you get differences analogous to the macroscopic loading. Well, what people do is they put in dummy structures intentionally, where you don't really need to do an etch, but you're doing etching of the active area there anyway, just to make the surface present a more uniform appearance to the plasma. This one is even trickier, number two. You may have differences in the aspect ratios. So you may have a portion of the wafer where you're etching a structure that's very closely spaced, and then you'll measure the etch rate here, shown in this SEM micrograph. The etch rate will be lower where you have a higher aspect ratio trench. On the same wafer right next door, a few microns away, the trench is etching faster where the aspect ratio is smaller, so there's a bigger opening between the trenches here. The ions can get down. The etch rate is a little faster. So right next door, depending on the feature density and dimension, you're getting a different etch rate.
This is sometimes called aspect ratio dependent etching, or people call it RIE lag. This guy is lagging behind this one in some ways, so it's something to be aware of when you're etching complex patterns. So why might this happen? Well, you may get depletion or trapping of the reactant species as they travel to the bottom of the trench. You can imagine these reactive species have to make their way to the very bottom. The longer the distance they have to go-- and in a very tight, narrow tube-- they may not have a chance to get to the bottom before they react. You may have distortion of the ion paths due to charging. You may even have shadowing effects. The point is that the net result is the probability of a reactive species getting to the bottom of a trench goes down as you make the trench deeper and narrower. So that's something we need to take into account when you're doing these kinds of dense patterns. So slide 37 is kind of the summary of this introduction. We talked only a little bit about wet etching. It's primarily a chemistry problem, chemical reactions. Very good selectivity, but it etches the same amount in each direction, at the same rate. It's isotropic. In dry etching, there are two kinds of species in the plasma that tend to be important. The reactive neutral species, or free radicals-- just free fluorine-- are neutral, so they don't get accelerated, so they're fairly isotropic, but also quite selective. Ionic species, on the other hand, can be accelerated towards the surface. They tend to have more of a physical mechanism. They can be anisotropic, so they etch vertically much more than laterally, but they are therefore sometimes not very selective. In plasma etchers today, there are different mechanisms. There's a purely chemical mechanism, which is based on neutral species.
There may be physical sputtering by ions accelerated in the plasma, and the most popular is the sort of ion enhanced etching, where ions enhance the chemical processes, or ions remove these inhibitor layers, which is key to keeping the etch going. There are a lot of different types of plasma systems, literally dozens of them. There's barrels. There's the plasma mode, RIE, high density plasma, and there are even sputter etchers and lots of variations. These tend to be purely chemical at the barrel end and purely physical if you're in the sputter etcher. Most of the systems these days operate in between those two limits-- a little bit of chemistry going on, and a little bit of the physical etching. So that's about all I have to say for the introduction. Please go ahead, read chapter 10. Next time, we'll talk in detail about how you etch polysilicon gates with good profiles and good selectivity. Also, homework 5, your last homework, is due. You can put that in the folder.
MIT_6774_Physics_of_Microfabrication_Front_End_Processing_Fall_2004 | 16_The_SUPREM_IV_Process_Simulator.txt | JUDY HOYT: Get started. Just a couple of reminders. Homework number 4 is due today up front. You can bring that up at the end of the hour. Homework number 3 is going back. I've got it in the back there in that orange folder. So you can pick up your homework before you leave today. And there are a couple of people who haven't picked up prior homework, so those are in the back of that orange folder. We've got one handout for today. That's handout 27. Today we're going to be talking about the SUPREM-IV process simulator. This is where we are, November 2, here on election day. And you're handing in homework number 4. The solutions to homework number 3-- you're getting your graded homework number 3s back. The solutions will be on the web later today, and I'll bring hard copies next time. They didn't get printed or xeroxed in time. OK, so let's go on to handout number 27, which is the handout for today's lecture. This lecture is a so-called, quote unquote, "introduction to the SUPREM-IV process simulator." Now, you've already really been introduced to it. Fortunately, you've been using it for the last couple of homeworks in your homework sets. And your TA introduced it in a practical sense a number of weeks ago, when she gave a lecture at the time you had your first SUPREM homework. But I want to talk about it in a little more detail and maybe give you some examples of what it can do, to give you an idea. So in the very first lecture, we discussed an example of a CMOS process flow. And we drew a series of cartoons, a PowerPoint artist's conception of what things would look like. At this point in the class, we've covered some of the fundamentals and the modeling of a lot of the key processes in that CMOS flow. We've talked about thermal oxidation, diffusion of dopants.
We just had four lectures on ion implantation and transient-enhanced diffusion. And we've talked a little bit about how these processes fit together. So today I want to review some of these concepts and some CMOS process modules and CMOS flows, now in the context of what we know about models and in the context of using the SUPREM-IV simulator. This is just an example of an n MOSFET and a p MOSFET fabricated on a wafer. This was the cartoon that we showed earlier in this course, in the very first lecture. And what we can do today is-- we don't need to use cartoons anymore. As you know, you can use the SUPREM-IV simulator to generate this type of information reasonably accurately. So let's go on to slide number 2. And I have a series of examples I wanted to go through. And some of these things, since you've already run the simulator, you may be familiar with. But we'll go into some things you may not have seen. The first three are 1D. SUPREM-IV is a two-dimensional simulator, but it can be run in a one-dimensional mode when you're primarily interested in what happens in depth. In general, you run it in 2D mode. But to run things quickly, to understand basic physics concepts, it's often easier to run the simulator in 1D mode and then later on run a two-dimensional example. Obviously, when you do it in 2D mode you have a lot more calculations to do, and it's a lot slower. So first, I want to talk a little bit about boron segregation at a gate oxide silicon interface and compare a couple of cases. And I'm using this as an example for you to understand a little bit about gridding issues that can come up when you use the simulator-- so a very practical issue of keeping an eye on how fine your grid is. In the second example, we'll compare some arsenic and phosphorus implant profiles-- the different types of model profiles-- to actual SIMS data. We'll see which ones do a better job at actually simulating or reproducing what the data looks like.
And then we'll talk about diffusion models-- introduce the different types of diffusion models that are available. And again, these three are 1D. The last example-- I hope we have time-- is a 2D example of an older technology. It's a 200-nanometer gate length n MOSFET, just to go through some of the processing of that. So let's go on to slide number 3. This is an example-- and if you've already done the homeworks, you've probably seen this-- of what's called the mesh or the grid. SUPREM-IV uses a triangular grid, a triangular mesh. So it's a series of nonoverlapping triangular elements. You see each one of these-- if you look at it carefully, it can be drawn as a triangle. The mesh is very important because the program tracks the values of all parameters numerically on a moving, adjustable grid. So at these nodes, at these grid points, it's tracking the values of all the important parameters-- what the material is, the interface between two materials. It tracks the dopant concentration value. A lot of different things are tracked at each grid point. If you look at this grid-- so this is a grid that belongs to a device structure. Let me see if I have it in the next slide. Oh, that's a blowup. Yeah, let's just look at a little bit of a blowup on slide number 4. You can see the grid a little bit better. This belongs to a device structure where over here on the left-hand side there's actually a gate. There's a polysilicon gate. There's a space there. I don't know if you can see. Here's a gate oxide. And there's some metal up here. And this is the silicon. This is an ion implanted and diffused junction. And you say, why does the grid look like it does? Why would it look like this? You see, in this region here, near the channel and near the surface, you see a very fine grid. The mesh is very fine. The spacing between the grid points is very small.
Whereas, when you get down here in the silicon substrate, the mesh is very coarse. The spacing between points on which you do the solution is very large. So why would that be? Does anybody have any idea why up here I would have a fine grid, and down here I have a coarse grid? AUDIENCE: Because the term-- does this change anything? JUDY HOYT: Right, because up here at the junction you probably have some doping profiles that are changing very rapidly in depth. So you need a lot of grid points to maintain good accuracy and good fidelity in reproducing those profiles. Down here, there's really not much action-- probably just constant doping of boron. So there's no need to have a very fine mesh. Even today's computers are only so powerful. So SUPREM tries to do intelligent gridding and not waste a lot of grid points where you don't need them. If you use a very fine mesh everywhere, you'd say, well, that's the best, right? Make it fine everywhere. The problem with that is it would take too long to get the solution, and it's really a waste of time. So just going back to slide 3, this is exactly what we just said. The grid needs to be fine near the active device region, near the interface, or wherever any quantity of interest is changing very rapidly, depending on what you're interested in knowing about-- doping profiles, interstitials, vacancies. Wherever they change rapidly, you need a fine grid. For diffusion, SUPREM very often does a finite difference solution; for oxidation, it uses a finite element type of technique-- again, both on this grid. OK, let's go on to slide number 5. Actually, I probably should have shown this first. For that mesh that we just showed, this is the actual device structure. It's half of an n MOSFET. And now it becomes pretty obvious what you were looking at in terms of the grid. The yellow here is the silicon. This is the polysilicon gate. The blue is the gate oxide. This is an oxide spacer, a sidewall spacer, right next to it.
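Going back to that finite difference solution for a moment: the following is a minimal sketch of the kind of explicit update such a solver performs on its grid. SUPREM itself uses far more sophisticated implicit solvers and adaptive meshes; this toy version only shows the grid-based update and why the spacing dx enters the stability and accuracy of the answer:

```python
import math

# Minimal explicit finite-difference diffusion sketch (toy model, not
# SUPREM's actual solver): c_new[i] = c[i] + r * (c[i+1] - 2c[i] + c[i-1]),
# with r = D*dt/dx^2, which must stay <= 0.5 for stability.
def diffuse_1d(conc, d_cm2_s, dt_s, dx_cm, steps):
    r = d_cm2_s * dt_s / dx_cm ** 2
    assert r <= 0.5, "explicit scheme unstable; refine dt or coarsen dx"
    c = list(conc)
    for _ in range(steps):
        new = c[:]
        for i in range(1, len(c) - 1):
            new[i] = c[i] + r * (c[i + 1] - 2 * c[i] + c[i - 1])
        new[0], new[-1] = new[1], new[-2]   # crude zero-flux boundaries
        c = new
    return c

# Gaussian-ish initial profile on a 1 nm grid (illustrative numbers)
dx = 1e-7                                   # 1 nm expressed in cm
prof = [math.exp(-(((i - 50) * dx) / 5e-7) ** 2) * 1e19 for i in range(101)]
out = diffuse_1d(prof, d_cm2_s=1e-16, dt_s=1.0, dx_cm=dx, steps=200)
print(f"peak before: {max(prof):.3e}, after: {max(out):.3e} cm^-3")
```

The peak drops and the profile broadens while the total dose is (essentially) conserved-- and you can see directly that a coarser dx would smear the profile's shape, which is the gridding lesson of this whole example.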
There's a metal contact here to the silicon. And this is the sort of LOCOS region, or the isolation region, in between devices. This is either the source or the drain, and you can see contours here corresponding to arsenic doping. So this is the actual structure. Very often, in SUPREM, things are symmetrical. So another way, besides doing intelligent gridding-- if you have a symmetrical device, you can often simulate only half of it and then just reflect that about the center line here, because MOSFETs are very often symmetrical in the way they're fabricated. So sometimes that helps you save time in the simulation. So again-- we haven't had you do very many two-dimensional simulations, but once you start doing them, you'll see how much time it can take to simulate all these diffusions, especially if you're doing the most sophisticated models. So you have to take advantage of intelligent gridding. I want to start on slide number 6 with the first example now, which is a simple one-dimensional example that looks at how boron behaves and segregates at an oxide silicon interface. And this is a SUPREM-IV input file. As you know-- all of you have done your homework-- SUPREM-IV takes text files as input. You can write the text in any text editor you want, as long as it doesn't leave a lot of spurious characters that confuse the program. And the program has a parser that reads in the text and interprets it in terms of the commands that SUPREM-IV knows about. Whenever you see a dollar sign, of course, that's just a comment. So SUPREM ignores that. So this is only for your own notation. The first command here is called the mesh command, and it sets a parameter called grid dot fac equal to 0.04. The grid factor has to do with the fineness of the grid-- the smaller the number, the finer the mesh. And so we're defining in this one statement what we want to make the default mesh size, in a uniform sense. AUDIENCE: [INAUDIBLE].
JUDY HOYT: 0.04-- it's not in units. I think it's a multiplier. So I think it can be anything from 1 to 10 to maybe 0.01. If you look it up in the SUPREM manual, it will tell you. I don't remember the exact definition of grid dot fac, but when grid dot fac equals 1, it gives you a certain grid spacing. My experience has been that, for most modern devices, a grid dot fac of 1 gives a pretty large spacing, larger than we would probably want to have. I think I have some examples where we varied this in here, so we can look at the different grid dot facs, and you'll see how it comes out. The next thing is pretty simple. It's an initialize statement-- this just says create the silicon substrate. That's all it does. It tells you the orientation in the z-axis is 100. So that's the orientation of the wafer. And it's boron doped, at 1E17 per cubic centimeter. And again, if you see something like boron equals 1E17 and you wonder what the units are, if you go into the SUPREM-IV manual, it'll tell you what the default units are for any variable. This is an implant statement. You just did a homework on ion implantation, and here we're implanting boron. So this is the species, this is the dose-- the number of atoms per square centimeter-- and the energy. And you're specifying a tilt angle of seven degrees. Depending on the dopant, there will be different default models. A lot of the dopants default to the Pearson-IV model. In other words, if you don't say what model to use, it automatically uses Pearson-IV by default. I think arsenic defaults to dual Pearson. You have to look in the SUPREM manual and figure out what the default is if you don't specify. So remember, just because you didn't specify a model, SUPREM still has to assume one of the models. And so that's the default model. You need to become familiar with what that is. This statement here is a deposit-- it's a deposition statement. So it's not a growth. It's not thermal oxidation.
It's literally depositing oxide just by plunking it, (WHIT), on the surface. You're not consuming any silicon. And we haven't talked about this type of process before, but it's called chemical vapor deposition. In the next several lectures we'll talk about that. So this is not thermal oxidation. We're just plunking an oxide down at a certain temperature, 600. And we're telling it the thickness in microns, 0.005 microns. So it's very thin. It's 50 angstroms. DY-- if you look in the SUPREM manual under the deposition statement for oxide, DY tells you the grid spacing in the y direction that you want to use in the oxide. So DY is the grid spacing specific to the oxide. And look how small it is. It's 0.0001. Well, the reason it's so small is because the thickness is so small. You don't want one grid point in the oxide. You'd like to have several. So we're telling it the gridding to use just in the oxide. And then there's a simple statement in which you select what variable you want to plot. In this case, you want to plot the log10 of boron. You can do some fancy things-- put in titles and all that stuff. The Save File command saves the file in a format that SUPREM can read in and use later on. It's not necessarily a format that you would find very useful, but SUPREM uses it. So Save File, with out dot file, defines which file you want it to write to. And this is the particular name we chose, B1 dot inp. You can put it wherever you want. Here, you're simulating the diffusion. So this is simulating diffusion for a time of 17 minutes at a temperature of 850. If I specify an inert ambient, it's just like a diffusion, like in argon or nitrogen. If I specify dry O2, then you're actually doing an oxidation. So oxidation is done, as you know now, by using the diffusion statement and specifying what type of ambient-- wet O2, dry O2, whichever.
And then, after this amount of diffusion, 850 for 17 minutes, you're going to plot what the profile of the boron looks like after that. So we have an as-implanted plot, with a certain line type and color-- here, color number one. And we have after diffusion for 17 minutes at 850. This is a simple file. It's the type of file that you've been using, that your TA's been creating for you, and you've been running them. Now we want you to start to understand what those files do so you can create them and modify them yourself. And remember, you can always go to the SUPREM-IV manual to look up any of these commands-- deposition, select, save file. All of those are explained in the manual. So this is the output of that file, on slide number 7. This is what it looks like. Of course, I've doctored it up. I've added a little color and things to make it look a little better, but it's the basic output. What that file puts out is a plot. On the y-axis is the boron concentration. The x-axis is distance. And SUPREM assumes that 0-- it puts 0 at the original silicon interface, or surface, I should say. So if you deposit layers on top of that, they will be sort of on the negative x-axis. Remember, we deposited 50 angstroms of oxide. That's what I'm showing in the yellow here. If you go back one minute and you look at deposition oxide, the thickness is 0.005 microns. That's 50 angstroms. So that's what this is, from here to here, in this yellow region. And there are two profiles that are shown here. The black one, the dark one, is the boron profile SUPREM plotted immediately after ion implantation. So that's the as-implanted. And the red one is the boron profile after that anneal that we did, 17 minutes inert at 850. Of course, it's diffused. The peak has gone down a little. It's broadened out. And there has been a little bit of segregation at the oxide silicon interface. The diffusion of boron in oxide is relatively slow.
It's roughly a factor of 1,000 slower than diffusion in silicon. So you see a fair amount of motion of the boron in the silicon. But when it hits that wall, hits that oxide, it doesn't diffuse very fast. So you see very little boron has actually diffused into the oxide after 17 minutes. If you want to know the segregation coefficient, you can look it up in the appendix. They have an appendix in SUPREM-IV for each of the dopants. For boron, segregation-- which is the ratio of the concentration of boron in the silicon to that in the oxide-- at 850, SUPREM calculates from this formula. You can modify either one of these constants, by the way, if you want to. And I calculate it at about a factor of 10 to 1-- or 0.1 going the other way. So that would be this peak value relative to the value in the oxide. So that's what that simple file puts out. Now, let's look at a couple of other special cases. So we're going to do something similar, but instead of depositing the oxide straight down by chemical vapor deposition, we're going to form the oxide by thermal oxidation. This is the type of model that we've talked about before. And we're going to display the output. Here on slide number 8, I'm showing a new SUPREM input file, or command file. And this time, instead of saying deposition, we're going to do a diffusion. Time equals 5-- that's five minutes-- at a temperature of 850 in dry O2. So it's going to grow five minutes' worth of oxide at 850 in dry O2. And then we're going to plot it using color number 2. And then we're going to do another five minutes and plot it again. So we can do this sequentially. That will be after 10 minutes. So you can look at it and see how the oxide grows and see what happens to the boron profile as the oxide grows. So we're going to display it at 5, 10, and 17 minutes. We're going to display the boron profile in the oxide.
So on slide number 9, this is after five minutes of-- now-- thermal oxidation. We have an initial oxide that's formed. And this is what the boron looks like. And you notice the boron near the surface has been depleted, again, because of that segregation. And now you're oxidizing, so you're consuming silicon. So you're moving into the silicon. This interface is moving towards the right. And it's the same segregation as the previous case. And now we have a little bit of boron that's been incorporated in the oxide-- not so much by diffusion, but by consumption, because, remember, we initially implanted the boron. And then you start oxidizing. If you go back to the command file, we read in the as-implanted structure. So remember, we did a save file of B1.imp. How did I get that? Well, that save file-- so this is a way to just save time. After we did the as-implanted, we saved the file in something called B1 dot inp. And then we're going to call it back later, that same file, in a subsequent SUPREM run. So here, I'm loading in a file called B1.imp. If you didn't want to do that, you could repeat the simulation of the implant. It's not going to take very many minutes. But if you have a complex two-dimensional simulation that takes an hour to run, you might as well save it. Save it, and you can always use it again as the input to another SUPREM simulation, another diffusion or oxidation. OK, so here, back on slide number 8, we're going to load that in, take off whatever oxide was on there just to make sure it's not there, and diffuse it at 850 for five minutes in dry O2. We're going to grow a couple of different thicknesses of oxide. So I've consumed this, and that's how a lot of the boron has gotten in there. And you see the segregation-- from this point, from here to here, that ratio is about a factor of 10. That's what we expect, because we said k0 for segregation of boron in SUPREM-IV is about a factor of 10.
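The segregation formula mentioned above has the general Arrhenius form k_seg(T) = A * exp(-Ea / (kB * T)). The prefactor and activation energy in this sketch are made-up values, chosen only so the result lands near the factor of 10 at 850 degrees C quoted in the lecture; the real default constants are in the SUPREM-IV appendix:

```python
import math

# Hedged sketch of an Arrhenius-type segregation coefficient,
# k_seg(T) = A * exp(-Ea / (kB * T)).
# prefactor and ea_ev are ILLUSTRATIVE values (not SUPREM-IV defaults),
# picked so k_seg(850 C) comes out near 10 as in the lecture.
KB_EV = 8.617333e-5   # Boltzmann constant, eV/K

def k_seg(temp_c, prefactor=944.0, ea_ev=0.44):
    t_k = temp_c + 273.15
    return prefactor * math.exp(-ea_ev / (KB_EV * t_k))

k = k_seg(850.0)
print(f"k_seg(850 C) ~ {k:.1f}  (C_silicon / C_oxide at the interface)")
print(f"boron in oxide ~ {1.0 / k:.2f} x boron in silicon")
```

The point of the form, whatever the exact constants, is that the interface concentration ratio is temperature dependent, so a segregation ratio calibrated at one anneal temperature will shift at another.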
Go to the next slide, slide number 10. After another five minutes, so I have a total of 10 minutes of oxidation-- again, you see, now you've grown a little more oxide. The yellow is thicker. And you've consumed more of the boron, and the segregation is causing this effect. So this is not really diffusion-- look how rapidly the boron drops. If it were diffusing, this would go straight through the oxide. But it's not, because in this model boron diffusion in the oxide is slow. You see the boron profile in the oxide. And if we finally go to the last one, which is after the full 17 minutes, we have now grown 50 angstroms of gate oxide at 850 in 17 minutes. And the red is what the boron looks like due to thermal oxidation. You see a lot of the boron has been incorporated into the oxide because of segregation effects and consumption. The black is the as-implanted, so that was how it was as-implanted. If I thermally oxidize it, I get the red curve. You can see it's depleted at the surface, and it's piled up in the oxide. Instead of thermally oxidizing, let's say I didn't want to grow my gate oxide-- I just deposit it, 50 angstroms at 850. The boron profile is quite different. It's the blue. So why is that? There's two different cases here. In the case where I do an implant and I thermally oxidize it, I consume silicon. I suck some of that boron into the oxide, and then it segregates. So you see, the boron concentration is relatively low at the surface. If instead of thermally oxidizing I just implant the boron and place an oxide down there by a chemical vapor deposition process, then I don't consume any of the silicon. So the boron stays a lot higher. I have about a factor of two more boron at the surface. So you can imagine you would get different device characteristics due to these slightly different ways of forming this oxide, because it has an impact on the boron profile.
Well, the diffusion coefficient's the same, but it's really different segregation and the fact that in one case we're consuming silicon and in the other case we're not. OK, that's just a simple example. You see all these points? I had to use a really fine grid in order for you to see this effect. So let me talk a little bit about that grid.fac parameter and the gridding. When you have very shallow profiles, which is mostly what you're doing these days for CMOS, and thin oxides-- oh, OK, here we go. This gives us the answer. If you use the default grid, then the grid spacing is about 0.1 micron. That's actually bigger than a lot of devices. So it's much too crude. So we can multiply that by a small number, the grid.fac, and make that grid spacing much, much smaller. So in this particular example, I'm showing you where we did that same implant of boron and then the 850, 17-minute oxidation, with a grid.fac of 0.4. It's still very coarse. You can see this doesn't really have much of a shape to it. There's a point here where we've solved. There's a point here. There's a point here. But it's just very jagged looking because the grid is too coarse. And in fact, in the oxide you're getting almost no detail or information at all, because you really don't have any grid points in the oxide. You've got one at the interface and one at the surface. So for this particular example, a grid.fac of 0.4 is way too crude. So this is something you always want to check by doing a quick test: what kind of a grid do I want? If you go on to the next slide, slide number 13, here's a factor of 10 finer grid, a grid.fac of 0.04. Now it looks pretty good. It's starting to look like a smooth curve. It's not all jagged looking. And it has a shape that you would imagine could be associated with ion implantation and diffusion. But in the oxide, the grid is still too coarse because the oxide is thin.
It's only on the order of 10 to 50 angstroms during this oxidation. So I'm only getting what? One point in the oxide. I have one at the interface, one in the oxide, and one here. That's still not enough. So if we go to the next slide, slide number 14, as I mentioned before, we use a particular statement to make a very fine grid, and just inside the oxide. We don't want to make it that fine everywhere. You can insert a statement, method dy.oxide=0.0005. That puts in a 5 angstrom grid in the oxide, which kind of makes sense. You have a 50 angstrom oxide you're growing. You'd like to have something like a 5 angstrom grid in that oxide. And now you can start to see what it really looks like. Remember, before I had one point here, one point here, and one here. The solution looked completely different. This is what it would have looked like if you just hadn't bothered to put in a special grid in the oxide. You'd think it was a triangular sort of profile. It's not triangular at all. The boron actually looks sort of flat-topped in the oxide and then goes down very rapidly. So you really need to keep an eye on your grid. And you typically want to run problems or solutions for a number of different grid spacings to get a feel for what grid is appropriate and what grid gives you physically realistic results. OK, so that's a little bit of a trivial example on oxidation, but I just wanted to make the point of how important knowing your grid is, because you can get a solution, you look at it, and you think it shows you something. And in fact, it could be physically not meaningful if you haven't used the right grid. So let's go on to slide number 15 and example number two. I want to talk about ion implant modeling. Remember, there's a number of different analytic models that we talked about. We said you can use a Gaussian. You can use a Pearson-IV. You can use dual Pearson.
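Before moving on to the implant models, the grid lesson is worth one quick numerical illustration. This Python sketch samples a made-up shallow Gaussian implant profile (Rp = 100 angstroms, delta Rp = 40 angstroms; illustrative moments, not SUPREM's tables) on the default ~0.1 micron spacing versus a 5 angstrom spacing, and reports the largest value each grid actually sees:

```python
import math

def gaussian_profile(x_um, rp_um=0.01, drp_um=0.004):
    """Normalized Gaussian implant profile (peak = 1).
    Rp = 100 A, delta Rp = 40 A: illustrative, shallow-implant moments."""
    return math.exp(-((x_um - rp_um) ** 2) / (2 * drp_um ** 2))

def sampled_max(spacing_um, depth_um=0.3):
    """Largest profile value actually seen on a uniform grid."""
    n = int(depth_um / spacing_um) + 1
    return max(gaussian_profile(i * spacing_um) for i in range(n))

coarse = sampled_max(0.1)     # default ~0.1 um spacing
fine = sampled_max(0.0005)    # 5 A spacing, like dy.oxide=0.0005
print(f"coarse grid sees {coarse:.0%} of the true peak")
print(f"fine grid sees {fine:.0%} of the true peak")
```

The coarse grid lands on only about 4% of the true peak for a profile this shallow, which is exactly the jagged, physically misleading solution shown on the slides.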
And there are tables in the literature that give the first three moments of the implant distributions: the Rp, the delta Rp, and the skewness. Those tables have been tabulated since the 1970s or so, and they've been updated over time. And they've been produced by theory and also by fitting to experiment. This is the most common analytic formulation that you will see in simulators. It's called the Pearson-IV. We talked about that distribution. And we said it does a pretty good job of replicating profiles in the amorphous case. It does not model channeling all that well. Channeling requires the use of more parameters, generally by curve fitting. So a Pearson-IV is a single Pearson distribution. If you want to do channeling, you'll often see dual Pearson, meaning two different Pearson-IV profiles are added together. And lookup tables are used. One of the things you need to be aware of are inaccuracies in stopping powers, particularly the electronic stopping at low energies and even the nuclear stopping powers. At very low energies, a lot of these stopping powers are not known that well. So if you're trying to simulate energies below, say, 10 keV, 20 keV, depending on the dopant, you may not get very good results. For dopants that are widely used in silicon technology, people are constantly updating these tables to try to make them a little better, to more accurately represent reality and experiments. But for some dopants that are not used that often at low energies, you'll simulate, and you won't get a very good simulation compared to the actual data. So let's go on to slide number 16, and this is an example of a SUPREM input file where we're using tabulated moments. So SUPREM has in its database tables of all these moments. And it's using tabulated moments to evaluate and to plot for you an implant distribution. So here's an example of ion-implanted phosphorus. So we define our mesh. Here's grid.fac equal to 0.1. So it's not all that fine of a mesh. The substrate is boron.
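For the Gaussian option that this input file is about to invoke, those moments are the entire model: C(x) = Q / (sqrt(2 pi) delta Rp) * exp(-(x - Rp)^2 / (2 delta Rp^2)), so the dose Q, Rp, and delta Rp fix everything. A quick sketch with hypothetical 30 keV phosphorus-like moments (the real values live in SUPREM's lookup tables):

```python
import math

def gaussian_implant(x_cm, dose_cm2, rp_cm, drp_cm):
    """Gaussian implant profile (cm^-3) from the dose and first two moments."""
    peak = dose_cm2 / (math.sqrt(2 * math.pi) * drp_cm)
    return peak * math.exp(-((x_cm - rp_cm) ** 2) / (2 * drp_cm ** 2))

# Hypothetical 30 keV phosphorus-like moments: Rp = 40 nm, delta Rp = 18 nm
dose, rp, drp = 1e14, 40e-7, 18e-7   # cm^-2, cm, cm

# Riemann-sum check that the profile integrates back to the dose
dx = 1e-8                            # 0.1 nm steps
integral = sum(gaussian_implant(i * dx, dose, rp, drp)
               for i in range(2000)) * dx   # out to 200 nm
print(f"recovered dose: {integral:.3e} cm^-2 (target {dose:.0e})")
```

Integrating the profile over depth recovers the dose to within a couple of percent; the small shortfall is the part of the Gaussian tail that would sit above the surface.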
We're going to implant phosphorus at a particular dose, 8e14. The energy is not extremely low, about 30 keV, but somewhat low, with a 7-degree tilt. And we're saying to use a Gaussian model, a very, very simple model. Save it in a file called Gaussian.saved, and then plot it. In addition to plotting the phosphorus that it calculated, SUPREM has a nice way, using a command called profile, where you can input into SUPREM xy data, a matrix of concentration versus depth. And that's called the profile statement. It reads in phosphorus. And the name of the file-- this is the particular name; it's kind of ugly, but you can call it whatever you want-- and we use that to plot the phosphorus. So we can not only plot the SUPREM simulation, we can also plot the actual SIMS data. So this happens to be an implant for which I have actual SIMS data that I have entered into this. And we're going to compare that data to the Gaussian solution. So if you go on to slide number 17, you'll see. This is for that relatively low-energy phosphorus implant. The SIMS data here is shown in red. This is actual data obtained from a company called Charles Evans. I think we've talked about Evans Associates in this class. They used a cesium beam with a primary beam energy of 2 keV to profile. And this is what they got. And the black line is what SUPREM says it should look like using a Gaussian simulation. And these are default values that are tabulated inside the SUPREM-IV simulator using the standard LSS range parameters, Rp and delta Rp. Remember, you only get two for a Gaussian. So compared to the SIMS, it's not all that great. The simulation does not do a very good job. It's too simplified of a model. The stopping power is apparently too small, and the range is overestimated by the theoretical calculation compared to what was experimentally observed. Well, we can try another model. Same SIMS data. This time, we try the black model, which is the dual Pearson, OK?
Dual Pearson uses more parameters. It simulates this sort of channeling tail a little bit better. But again, the range and the skewness are both overestimated-- way too much skewness. So what does this tell us? Well, it basically tells us that these tabulated moments are not perfect for all dopants. So take them with a grain of salt, especially for phosphorus. Phosphorus isn't used that much in CMOS technology, right? Shallow junctions for n-type are typically made by arsenic because phosphorus diffuses too fast. So not a whole lot of energy and manpower has gone into improving the moment tables for shallow phosphorus. So if you happen to be using phosphorus in your experiments, take it with a grain of salt if you're using these tabulated values. According to my data, it doesn't fit that well. Of course, there might be something wrong with my data. That's another possibility. So exactly what's reality is not always clear. But interestingly, on the next slide, exact same data, this time compared to a Monte Carlo simulation. Now, these Monte Carlo simulations take a lot longer. It's not an analytic solution; it's a numerical one. You have to follow each ion into the silicon, see where it ends up, and then statistically create a profile. And that's what the black is, OK? And you can see the black looks kind of jagged because, again, you're statistically creating this profile. But interestingly, the black does a reasonably good job of agreeing with the SIMS. It doesn't mean they're right, but it kind of gives me some confidence that, when I see a simulation agree pretty well with the SIMS data, at least in terms of its range and its delta Rp-- its width here-- that's probably a pretty good sign. So there's enough physics in this Monte Carlo simulation to reproduce the data pretty well. But notice, this is how you tell SUPREM to do a Monte Carlo implant. You give it the implant statement. You tell it the species, the dose, the energy.
Now, here you tell it how you want to do it. You see? Before, I had said Gaussian or dual Pearson-- or if you say nothing, it probably defaults to Pearson-IV. Here, you say Monte Carlo. Now, an important thing when you do Monte Carlo is to specify the number of ions you want it to shoot into the sample and to follow. So this is 25,000 ions, not very many, and tilt and rotation. You need at least 10,000 to get good accuracy. Probably 100,000 is better, because you can see here, after I'm two decades down from the peak, I start to get a lot of noise. So the noise comes in. So I'm really only getting two decades of smooth data. That's with 25,000 ions. In a one-dimensional simulation, this isn't too bad. It might only take 5, 10 minutes. No big deal. But this is just 1D. If I'm doing this in 2D across the channel of a MOSFET, that 5, 10 minutes becomes three or four hours or more. So you have to compromise to a certain extent. So what people will do sometimes is compute the 1D profile using Monte Carlo, then fit an analytic solution to it by changing some of the default parameters, and then use that analytic form in the two-dimensional simulation as a way of getting around spending so much time. So that's just an example of how actual SUPREM outputs compare to real data. Let's take another case. So that was phosphorus. As I said, phosphorus is not all that widely used. Here's the case for 30 kilovolts, the same energy, but this time arsenic, using the dual Pearson analytic. Again, the SIMS data is the red line. And the calculated is the black. And this time, with the dual Pearson, it does a pretty good job. It got the range almost just right and even the broadening-- not quite, but pretty close. The shape is much more accurate than for phosphorus. I don't know why, but I'm guessing it's because shallow arsenic junctions are the way people make junctions these days.
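The noise-versus-ion-count tradeoff is just counting statistics, and you can mimic it without a real trajectory simulator. The toy below draws each ion's stopping depth from a Gaussian (a real Monte Carlo follows nuclear and electronic stopping events instead, and the moments here are made up) and estimates the relative shot noise in the peak histogram bin, which scales as one over the square root of the bin count:

```python
import math
import random

def mc_depth_histogram(n_ions, rp=40.0, drp=18.0, bin_nm=2.0, seed=0):
    """Toy Monte Carlo implant: draw each ion's stopping depth (nm) from a
    Gaussian and bin into a histogram. Only the counting statistics of a
    real Monte Carlo implant are mimicked here."""
    rng = random.Random(seed)
    counts = {}
    for _ in range(n_ions):
        depth = rng.gauss(rp, drp)
        if depth > 0:
            b = int(depth / bin_nm)
            counts[b] = counts.get(b, 0) + 1
    return counts

def peak_noise(n_ions):
    """Relative shot noise in the peak bin, ~ 1/sqrt(bin count)."""
    peak = max(mc_depth_histogram(n_ions).values())
    return 1 / math.sqrt(peak)

print(f"25,000 ions:  ~{peak_noise(25_000):.1%} noise at the peak")
print(f"250,000 ions: ~{peak_noise(250_000):.1%} noise at the peak")
```

Ten times the ions buys you roughly a factor of three less noise, and bins far down the profile have far fewer counts than the peak, which is why the smooth part of the curve only extends a couple of decades below the peak at 25,000 ions.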
And so people have updated these tables of moments in SUPREM to better fit the experimental data. Arsenic is also heavier, dominated by nuclear stopping, which is a little better understood than electronic stopping. Remember, we said electronic stopping powers always have a little bit of fuzziness about them. Phosphorus is lighter; it has more electronic stopping. Maybe that's why. But again, take the profiles you get with a grain of salt unless you've checked them out experimentally. Slide 21: that same data, this time with Monte Carlo. Excellent job. This looks even better. It's got the broadening a little better. So it looks quite good using Monte Carlo. Of course, Monte Carlo took longer to generate, longer to simulate. Another advantage of Monte Carlo, besides the fact that it seems to be pretty accurate-- it has pretty good comparison to data-- is that you can generate profiles of the interstitials and vacancies. And we need those profiles. If you want to use those profiles, you can use them in subsequent diffusion simulations in order to simulate TED. If you do an implant and you don't tell the simulator to use a damage model, it's not going to be able to simulate TED, because it needs to have some damage model. So Monte Carlo is also very useful for that purpose. OK, so let's go on to slide 22. So those were a couple of examples of simple oxidation and simple ion implantation. Now we get to the more exciting models, a little more complicated. There are three major diffusion models in SUPREM-IV. And these are what they're called. The way you invoke a different model in SUPREM-IV is you use the method statement. And there are three methods for solving the differential equations associated with diffusion in SUPREM-IV. There's PD; that stands for partial differential equation. PD.fermi is a method that takes into account the impact of the Fermi level, just like the name would suggest, on the dopant diffusion coefficient.
For example, this is an equation that SUPREM uses for n-type dopants. You're very familiar with this. It has an n over ni, and it may have an n over ni squared, depending on the dopant. And it has stored in it values for D0, D minus, and D double minus for all the dopants. It models concentration-dependent diffusion, but it does not model TED or OED. So it's quite simple. It's very fast. And the nice thing is there are relatively few parameters, just a few D parameters. And you can get your answer. So that's the nice thing about it. And it does model concentration dependence. So this is the first thing you would use, because it's relatively fast. The next method, the next level of complication, is PD.trans. And it takes into account the Fermi level-- so it already has the Fermi model built in-- as well as the impact of nonequilibrium interstitial and vacancy profiles on the diffusion, but not the other way around. It will not show you the impact of dopant diffusion on the motion of interstitials and vacancies. So it's not fully coupled, but it does take into account the impact of the point defects on the dopant diffusion. It's very useful for OED. So people tend to use it for oxidation-enhanced diffusion. Or you can use it for transient-enhanced diffusion with relatively low doping concentrations. If you have higher doping concentrations, then the coupled diffusion of the pair-- the dopant plus the point defect-- actually affects the point defect profile. And so you need something more sophisticated than PD.trans. And this is the model it uses. The basic concept, which you should be familiar with by now if you've done your homework, is that the diffusivity is the unperturbed diffusivity times something in parentheses that depends on f sub I. So it takes into account the enhancement in CI over CI star that can happen when you do an oxidation, or the enhancement in CV over CV star if you do a nitridation or if you do an implant. And these things get enlarged or suppressed.
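Both diffusivity expressions just described are one-liners. The sketch below uses illustrative coefficients in units of the intrinsic diffusivity (SUPREM-IV's actual defaults are tabulated per dopant): the Fermi model D = D0 + D-(n/ni) + D=(n/ni)^2, and the point-defect enhancement factor D = D*(f_I CI/CI* + (1 - f_I) CV/CV*):

```python
def fermi_D(n_over_ni, d0, d_minus=0.0, d_dminus=0.0):
    """Fermi model for an n-type dopant: D = D0 + D-(n/ni) + D=(n/ni)**2.
    Coefficients here are illustrative, in units of the intrinsic D0;
    SUPREM-IV stores measured values per dopant."""
    return d0 + d_minus * n_over_ni + d_dminus * n_over_ni ** 2

def defect_enhanced_D(d_star, f_i, ci_ratio, cv_ratio):
    """PD.trans-style factor: D = D* (f_I CI/CI* + (1 - f_I) CV/CV*)."""
    return d_star * (f_i * ci_ratio + (1 - f_i) * cv_ratio)

# Intrinsic vs n = 10 ni, with equal D0 and D- terms:
print(fermi_D(10, d0=1.0, d_minus=1.0))   # 11.0, an 11x enhancement
# A mostly-interstitial diffuser (f_I = 0.9) under a TED-like CI/CI* = 30:
print(defect_enhanced_D(1.0, 0.9, ci_ratio=30, cv_ratio=1.0))  # ~27.1
```

So a dopant at n = 10 ni with equal D0 and D- terms already diffuses 11 times faster than intrinsic, and an interstitial-dominated diffuser under a supersaturation of 30 is enhanced about 27x, which is the scale of effect the trans and full models are there to capture.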
So it'll take that into account. The last method you can use, PD.full, takes the longest-- simulation times can be very long. The name comes from the fact that it is fully-coupled diffusion. And what does that mean? It means that the interstitials and vacancies impact the flux of the dopants, and vice versa: the flux of the dopants impacts the interstitial and vacancy diffusion. This should be used most of the time if you are interested in transient-enhanced diffusion, and at high concentrations in general. For instance, the emitter push effect: we talked about how phosphorus can pair with interstitials and, by diffusion, drag the interstitials into the substrate. When it goes substitutional, it releases those interstitials. Those interstitials then cause the boron base to broaden. That was the emitter push effect. You'll never be able to get that out of the Fermi model. There's no way. And even the trans model won't have that. So to simulate things like the emitter push effect, those types of fully coupled cases, you need this PD.full. So those are three different statements, and you can compare the results from all three statements for a given situation. Now, one thing I should say. As you go from here to here to here, you're invoking more physics, more chemistry, and also more parameters. That's one problem with this. Sure, you can model the emitter push effect. But there are a lot of parameters you need to know: the diffusion rates of the vacancies and the interstitials, their recombination at interfaces. Those parameters in SUPREM always have default values. But they may not be accurate. So again, take everything you simulate with a grain of salt. Somebody has input something in there to the best of their knowledge-- that might be the interstitial diffusivity. But maybe it wasn't well characterized at the temperature and under the conditions that you're using.
So you can get lots of different profiles, but how accurately they represent experiment is something that you need to determine for yourself. So here's an example. Let's look at a couple of examples of different diffusion profiles. On page 23, this is a simple 1D arsenic implant, and then we're going to do a long-time diffusion. So the simplest thing is an arsenic implant at 30 keV, a certain intermediate dose, 2.6e14, and we diffuse it in a furnace at 1,000 degrees for 30 minutes. Before we did the diffusion, we put down an oxide cap, not by oxidation, but by deposition. And so that oxide cap helps prevent the arsenic from evaporating out of the wafer. So this is the actual command that was used; it specifies the point defect model, method pd.fermi. So we're using the Fermi diffusion model. And this is the diffusion statement: 1,000 degrees for 30 minutes, inert. I have some SIMS data here, which is shown in the red. And then the black dots, or the black circles, are the simulation using a Gaussian model. Here's a Gaussian and a Fermi. And you can see it actually reproduces the data amazingly well. It means that somebody calibrated SUPREM pretty accurately to this particular furnace. The Fermi model is good enough. And you say, well, why didn't you use PD.trans or fully coupled? How did you get away with such a simple 30-second simulation? Well, the anneal is long. So there are not going to be any big damage or TED effects; normal diffusion is going to dominate this. You do need concentration dependence to capture how box-like the profile is. So you need to use the Fermi model for Fermi-level effects. But you don't necessarily need to go invoking TED or anything. So if you know you have a case where TED effects are not that prevalent, you might as well use the Fermi model. It'll go a lot faster, and in a two-dimensional simulation, that could save you a lot of time. Here's that same simulation on slide 24.
But this time, we used the fully coupled model. So when we did the simulation, instead of method pd.fermi, we said method pd.full. And the simulation is the black line. It agrees still very well with the data, maybe not quite as well. But to within all the experimental uncertainties, it really does a good job. So the fully coupled model doesn't make much difference-- that slide should say difference, not different-- because 30 minutes, again, is much longer than the time scale for TED. At 1,000 degrees with this dose, I'm guessing the TED time scale is probably less than a second or so. And in fact, if you go on to slide 25, we can even figure out roughly what it is. At 1,000 degrees, this was for 1e14 phosphorus at 40 keV: TED is less than a second. So for 2.6e14, again, it's 2.6 times that, almost the same range, not much difference. So we're talking on the order of a few seconds. So with a 30-minute anneal, clearly, TED is not going to be very important. That's why you get the same results. But this is a sanity check you can do with SUPREM. Make sure you get the same result for that 30-minute anneal with PD.fermi and PD.full. If you don't, then there's something missing in your understanding of what SUPREM is doing. So there's a simple example. Now let's do something different. On slide 26, instead of 1,000 degrees for 30 minutes, I'm going to do ten seconds: 1,000 degrees for ten seconds. OK, now you're in a range where you can imagine there might be some TED effects. And this is the simulation we're getting. The as-implanted here is shown in the black. And I believe this was simulated with a Monte Carlo implant. In fact, you can tell it's Monte Carlo because, see all this jaggedness in the as-implanted? You'd never get that out of an analytic solution, right? The jaggedness comes from the statistical nature of Monte Carlo. So that's the Monte Carlo as-implanted in black. PD.fermi, which does no TED, shows not very much motion of the arsenic for a 10-second RTA.
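The scaling argument from slide 25, roughly 1 second of TED per 1e14 cm^-2 at 1,000 degrees scaled linearly with dose, makes the contrast between the two anneals concrete. It's a rule of thumb read off the slide, not a SUPREM model:

```python
def ted_duration_s(dose_cm2, ref_dose=1e14, ref_time_s=1.0):
    """Rough TED duration at 1,000 C by linear dose scaling from the
    slide's reference point (~1 s at 1e14 cm^-2). A rule of thumb only."""
    return ref_time_s * dose_cm2 / ref_dose

ted = ted_duration_s(2.6e14)  # ~2.6 s for this implant
for anneal_s in (30 * 60, 10):
    print(f"{anneal_s:>5} s anneal: TED spans ~{ted / anneal_s:.0%} of it")
```

For the 30-minute furnace anneal, TED is a rounding error and PD.fermi agrees with PD.full; for the 10-second RTA, TED spans roughly a quarter of the anneal, and the two models start to disagree.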
With PD.full, you get a fair amount of broadening. So that's got the TED built into it. Now, exactly which one is more accurate, I really can't say. In fact, my experience with arsenic on our rapid thermal anneals has been that SUPREM tends to overestimate a little bit the amount of motion compared to what we see by SIMS, if you actually do a SIMS profile of a 1,000-degree, 10-second anneal. Of course, there's always the question of how well your RTA is calibrated. Rapid thermal annealing machines are extremely difficult to calibrate. They are nonequilibrium environments, right? It's not like a hot furnace where everything's the same temperature. The wafer is the only thing that gets hot. And measuring its temperature is kind of an art. So it's hard to say. So you have to be careful, and typically when you're doing rapid thermal annealing and simulating TED, you want to compare it to a couple of experimental results just to make sure that it makes sense. Now, I can make this PD.full look like whatever I want by changing some of the internal parameters in the SUPREM-IV simulator. If I change things like the diffusivity of the interstitials and vacancies, I can change what this profile looks like. So you have some latitude there. This is using default parameters that are built into SUPREM-IV. Let me go on, on slide 27, to another related example. We talked last time about some clever experiments that have been done at the IEDM, where we have the exact same anneals, but reversing the order in which we do them makes a difference, because that's going to dissolve a lot of 311 defects. We're going to show how SUPREM can actually simulate that. And first, I'm going to just remind you, from last time, what the usual order of making a MOSFET is. Usually, you start with a wafer. You do your isolation. This could be the shallow trench isolation we talked about, the oxide. Oop, that didn't come out very well.
I'm going to implant the boron profile. This is an n-MOSFET. I'm going to implant that super steep retrograde early on. Grow the gate oxide, so we're growing the gate oxide. Form the polysilicon gate by deposition and etching. The source drain extensions are now implanted. Now, the source drain extensions bring with them-- remember, we talked about the reverse short channel effect-- they're going to introduce a certain number of 311 defects that are going to cause TED of the boron that's underneath here. Put in the spacers. Now, the spacers, if they're nitride-- a lot of people today are using nitride and not oxide-- if they're nitride and they're LPCVD nitride, that's a pretty high-temperature process, about 800 degrees, one of the worst temperatures you can possibly use for TED. Why is that? Well, we saw that the time that TED lasts at 800 can be quite long. And CI over CI star can be quite large. So an hour at 800 can cause a lot of transient-enhanced diffusion. And then after that, typically after you make the spacers, you do this deep source drain implant here for the contact regions. And then you do a final rapid thermal anneal, usually 1,000 to 1,050, maybe 10 to 30 seconds, something in that range. So the important thermal steps are this nitride thermal budget at 800 and then a rapid thermal anneal at around 1,000. And now we're going to compare with SUPREM how SUPREM thinks, in a 1D sense, this would go. So we're going to just look at the effect of TED on the arsenic diffusion itself. We're not going to look at the effect on the boron diffusivity. That would require a two-dimensional simulation. So this is a simulation of an arsenic source drain extension implant, diffusing it using the usual order, which is the 800 degrees C step for the nitride deposition, followed by the deep source drain, and then followed by a 1,000 degrees C, 5-second RTA to activate everything. So here's the arsenic Monte Carlo implant.
And you can see on this scale-- this is the distance in microns-- they didn't use a very fine grid. Good enough to get this profile, but for the as-implanted, you can see it's kind of ugly looking. And that's partly because the grid was a little bit too coarse. But again, with Monte Carlo, we were trying to speed up the processing a little. So here's 2 keV, a very shallow peak, at 1e15. And then the blue line is an 800 degrees, 30 minutes anneal. And then the red line is 800 degrees, 30 minutes, followed by a rapid thermal anneal at 1,000 for five seconds-- so a 1D simulation. So almost all the diffusion really takes place in the 800 degree, 30 minutes step. And that's because of TED. That's not ordinary 800 degrees C diffusion. So in order to model this, this must be a PD.full. So it must be taking into account the transient-enhanced diffusion. Well, I'm sorry, this one is trans. You can either use trans or full; this particular one is PD.trans. OK, so that's what it looks like: very little contribution from the RTA. And your junction depth here is about 0.1 micron. That's what this predicts. So why is that? If you look on slide 29, this is a plot-- SUPREM, in addition to the dopants, can output the interstitials and vacancies. So here's a normalized concentration versus depth. And the blue line here is CI over CI star at 800 degrees. And so you can see, CI over CI star after this anneal-- at the 800-degree anneal, that is-- is pretty large. It's somewhere between 20 and 40 near the surface. It's pretty big. That's the blue line. The red line, which I apologize you can't see-- it's basically sitting right on top of your x-axis. The red line is just about one. You can't even see it on this scale. So CI over CI star at 1,000 degrees after five seconds is one, meaning there isn't much enhancement left. And we saw that. Remember, the enhancement time, the amount of time that it lasts, is relatively short. So this is after five seconds.
So TED is already over-- the 311s have all dissolved, and the interstitial concentration has gone back to CI star. So while at 1,000 degrees it's over after five seconds, we still have a lot of enhancement at 800 degrees, and that's why we get all that TED in the 800-degree C profile. So now, on slide 30, we're going to do something a little different. We're going to activate the source drain extension implant prior to the nitride spacer at 800. So instead of putting the nitride down right after implanting, we're going to do a 5-second rapid thermal anneal after implanting. So we send it out for implant. We bring it back. We do a 5-second RTA first, and then we do the 800 degrees plus 30 minutes. We do that second. So we're changing the order. And you can see the red line now: after the implant plus 5 seconds at 1,000, you diffuse this far. And then if you add the 800 degrees, 30 minutes, it only goes that far. It doesn't go very far, because five seconds at 1,000 is really enough to dissolve all the 311s, right? The enhancement-- the TED time at 1,000 degrees is only a couple of seconds. So I dissolve all the 311s. Then you put it back in the furnace at 800, and you don't get TED-- this is just normal 800 degrees C diffusion. CI over CI star has now been reduced because I got rid of all those 311s. So the junction depth now, instead of being out here at 0.1, is only 0.07. So it's the exact same amount of time that the wafer spent in the RTA and in the furnace; it's just that the order of the operations was changed. And the junction depth is reduced because CI over CI star is reduced by about 3x during the 1,000-degree C step. So these are all simulations. SUPREM can simulate the fact that 311s are generated by the damage and that they dissolve at different temperatures and at different rates. And so it can take into account these types of effects.
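You can put rough numbers on why the 800-degree step dominates in the usual order. Take the simulated CI over CI star of about 30 at 800 degrees and about 1 at 1,000 degrees, and scale the diffusivity between the two temperatures with an assumed activation energy of 4 eV (a typical magnitude for dopant diffusion in silicon; the exact value is dopant-specific). Comparing the effective Dt budgets:

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def arrhenius_ratio(t1_c, t2_c, ea_ev=4.0):
    """D(T2)/D(T1) for an assumed activation energy (4 eV is a typical
    magnitude for dopant diffusion in silicon; it is dopant-specific)."""
    t1, t2 = t1_c + 273.0, t2_c + 273.0
    return math.exp(-ea_ev / K_B * (1 / t2 - 1 / t1))

# Effective Dt budgets in units of D(800 C) * seconds:
# 800 C, 30 min, CI/CI* ~ 30 (the simulated supersaturation on slide 29)
dt_800 = 30 * 1.0 * (30 * 60)
# 1000 C, 5 s RTA, CI/CI* ~ 1 (311s already dissolved)
dt_1000 = 1 * arrhenius_ratio(800, 1000) * 5

print(f"D(1000)/D(800) with Ea = 4 eV: ~{arrhenius_ratio(800, 1000):.0f}x")
print(f"Dt(800 C step) / Dt(1000 C RTA): ~{dt_800 / dt_1000:.0f}x")
```

With these assumed numbers, the TED-boosted 800-degree, 30-minute step carries roughly an order of magnitude more Dt than the 5-second RTA, even though the RTA's intrinsic diffusivity is nearly a thousand times higher, which matches the simulated profiles where almost all the motion happens at 800.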
Again, it's using the trans model. The accuracy, of course, you always take with a grain of salt. Whether the junction depth is really 0.06 or 0.04, I wouldn't bet my life on it, to be honest, because there are a lot of parameters in SUPREM. But the point is, assuming you get some data, you can fit data that shows the effect of the order of the different anneals. There's enough physics built into it. You just have to get the right parameters. So the final process, just to show what we did: the extension implant, followed by a 5-second RTA at 1,000, the high-temperature nitride spacer at 800, and then another 5-second RTA. The reason I needed to add the last RTA is to activate the deep source drain. Remember, the deep source drain goes in last, after the nitride spacer. So this is five seconds at 1,000, the 800-degree step, and another five seconds at 1,000. This is what it would look like. The final five seconds at 1,000 really doesn't contribute much to further motion. It's already done most of its motion, because TED, at that point, is completely over. And if you'd done it in the usual order-- the 800 degrees, 30-minute spacer, plus 5 seconds to anneal everything-- you get this junction depth. So SUPREM predicts that reversing the order of the anneals makes a difference. Again, it's something you'd have to check experimentally just to be sure. OK, so that was a TED example. And then for the fourth example-- this will be the only example where we do a two-dimensional simulation. And this I took right from the SUPREM manual. So you can run this yourself. This particular example is not one that I made up. I took it out of the example file, and it's on the computer. This example is one of the canned examples that comes with SUPREM-IV. It's a 200-nanometer gate length MOSFET, so that's a 0.2 micron gate length. It's kind of old fashioned now.
But it uses something called self-aligned silicide, and we're going to talk about silicides in the next few lectures. So you'll get an idea. And I've put the input file in columns. So this is the first part of the file, and this is the second part. Again, if you want to see the actual file, it's in the SUPREM-IV directory. So this is TSUPREM-4. So, for example, here's the mesh, using a fairly coarse grid, grid.fac of 0.9. You can also define the mesh in different directions, in the y direction, in the x direction, so a little more sophisticated definition of the grid. We start out by growing the gate oxide. So here's 850, 25 minutes. And it's in dry O2 plus a little bit of HCl. So there's an HCl oxidation. And then you can plot the structure to see what it looks like after gate oxidation. And whenever you see the word source followed by a file name, what that says is: take all the commands from that file and run them now. So in TSUPREM-4-- let's say you do the same operation in SUPREM over and over again. You always do a plot. You want to do several different plots. Rather than putting all those plot commands in the main file over and over again, you create sub files. And this file called S4EX10P.input is just a series of commands that defines the colors and things like that. So it enables you to call this file whenever you want it to create a plot. So it's a way of cleaning up your SUPREM input file. So if you want to know what this particular file does, you have to go look at that command file. Then you deposit some materials-- we're depositing polysilicon. You tell it the thickness. You're etching the polysilicon to the right and to the left. This is to make a gate. And here you're calling that file, again, that plotting file, OK? Now, one of your homework problems that you just handed in-- or we just handed back to you, homework 3-- was on different methods.
This is using the compressed method as far as solving for the oxidation rate goes. Here's an 850 oxidation in dry O2 and HCl again. So this is what we call re-ox. You've formed the gate now. The initial gate oxidation is just to grow a very thin gate oxide. You formed the gate, and you etched it. And then, in the course of etching, sometimes you introduce damage right near the corners of where the channel is going to be. Remember? This is going to be my channel. So you often introduce some plasma damage down here. You thin the oxide there. You do things that are not necessarily good for the gate oxide. So people often, at this point, in this technology, say, 0.18-micron technology, would take the wafer, put it back in the oxidation furnace, and do what's called a gate re-ox. And it looks like we're doing a re-ox here at 850 for 25 minutes. So that re-ox is going to help-- if you thinned the oxide at all in the course of your etching, it's going to help boost that back up. And it's going to grow a little oxide along the edge here of the polysilicon. So it's a way of dealing-- it was introduced for reliability considerations. Re-ox, as it says: take the gate, put it back in, and do another oxidation. Then you form the sidewall spacer. Here, we're depositing oxide, using a deposition process. And then you do the deep source drain implant. This is 1e15 at an energy of 60 keV, fairly deep. Then you're going to deposit some titanium. I won't go through the details of this. But in the next few lectures, we'll talk about the fact that-- when we get to the silicide lecture-- you can take metals, put them on silicon, react them at a certain temperature, and form a metallic phase. It's called a silicide. This is titanium disilicide, which forms a good contact. And then we look at the final structure. So that's the file, and you can go ahead and run that. And let me show you some examples of what comes out of that.
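To first order, an as-implanted profile like that deep source drain can be sketched as a Gaussian set by the dose and the range statistics. The Rp and delta-Rp below are assumed round numbers for roughly 60 keV arsenic, purely for illustration; SUPREM uses tabulated moments and more sophisticated distribution fits instead.

```python
import math

def gaussian_implant(dose_cm2, Rp_nm, dRp_nm, x_nm):
    """First-order Gaussian implant profile; returns atoms/cm^3 at depth x."""
    dRp_cm = dRp_nm * 1e-7                       # nm -> cm
    peak = dose_cm2 / (math.sqrt(2 * math.pi) * dRp_cm)
    return peak * math.exp(-((x_nm - Rp_nm) ** 2) / (2 * dRp_nm ** 2))

# 1e15 cm^-2 dose with assumed Rp ~ 40 nm, dRp ~ 15 nm (illustrative only):
peak = gaussian_implant(1e15, 40.0, 15.0, 40.0)   # concentration at Rp
```

A 1e15 dose with these moments peaks at a few times 10 to the 20th per cubic centimeter, which is why the deep source drain needs that final activation anneal.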
Oh, actually, this slide, number 33-- remember, I was saying you wanted to repeat these commands over and over again? This was that file, S4EX10P.input. It's just a plotting sequence file that you use over and over again that tells it what colors to use and how to label things according to what material it is. So rather than type this in every time in your SUPREM-IV input, you can put it in a separate file. That's just for your reference. So let's go to slide 34. This is from running that example. After gate oxidation, what does it look like? Well, you have an x-y structure, so y is in depth. And there's a certain gate oxide grown everywhere across the structure. That's relatively simple. After gate patterning, you have deposited this green layer everywhere and then etched it off everywhere to the right of this one line. So now, that's what the polysilicon gate looks like with the gate oxide underneath it after gate patterning, OK? And notice, it doesn't model etching in any sophisticated way. You just tell it where to cut it off. So it's not modeling any shape effects of the etch or anything like that. That's not that type of a program. We'll talk more about that when we talk about how to model etching. After gate re-ox-- remember, I said we're going to put it back in the furnace and subject it to a step of 850 for 25 minutes to oxidize all around and to beef up this oxide. But interestingly, look what it's done to the gate oxide. You see what it's done. There's like a bird's-beak effect. Some of the oxidant has diffused underneath the polysilicon and oxidized there. And you've got a little thicker oxide now here on this part underneath the gate compared to the center of the gate. That's not necessarily a good thing if it's too thick here, because we know the gate oxide thickness determines the threshold voltage in the device. So if I have a thicker gate oxide here than here, I'm going to turn on my channel at different places differently.
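That sensitivity can be put in numbers with the textbook long-channel threshold-voltage expression, where the depletion-charge term scales directly with oxide thickness through Cox. The channel doping here is an assumed 1e17 per cubic centimeter, just to illustrate the size of the effect.

```python
import math

def depletion_vt_term(tox_nm, Na_cm3=1e17):
    """Depletion-charge contribution Qd/Cox to the long-channel VT, in volts."""
    q, ni = 1.602e-19, 1.0e10            # charge (C), Si intrinsic conc (cm^-3)
    eps_si = 11.7 * 8.854e-14            # silicon permittivity, F/cm
    eps_ox = 3.9 * 8.854e-14             # oxide permittivity, F/cm
    phi_f = 0.0259 * math.log(Na_cm3 / ni)                   # Fermi potential, V
    Qd = math.sqrt(2.0 * q * eps_si * Na_cm3 * 2.0 * phi_f)  # C/cm^2
    Cox = eps_ox / (tox_nm * 1e-7)                           # F/cm^2
    return Qd / Cox

# A 1-nm thickening of a 4-nm gate oxide at the edge:
dvt = depletion_vt_term(5.0) - depletion_vt_term(4.0)
```

So even a 1-nm bird's-beak thickening at the gate edge shifts the local turn-on voltage by tens of millivolts relative to the center of the channel, with this assumed doping.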
So that's not necessarily a good thing. We want a uniform gate oxide. Here, of that same structure now, after I've formed a spacer, here's the oxide spacer. And we did a source drain extension. I'm sorry-- I first did the 5e13 source drain extension. Then you formed the spacer, and then you did the deep source drain. These contours, by the way, correspond to arsenic in the substrate. Each one is a different arsenic concentration. You can see how it's shaped. This region here corresponds to the extension, this region to the deep source drain. This particular simulation uses PD.Fermi, so the Fermi model, very simple. And then after silicidation-- SUPREM doesn't necessarily have a very good model of siliciding, but you see it's reacted. We'll talk in subsequent lectures about what silicidation is all about. I mostly want you to get a feel for how a two-dimensional simulation of such a structure actually looks. There's the final structure here on slide 39, showing the arsenic concentration contours. So here, this region in between is your channel. Here's the polysilicon gate in green, and this is your gate oxide. And because of the reoxidation, we have a little bit of the smile effect. See the way the device seems to be smiling, has a little smile here? That's actually a very happy device. But it's not actually all that good. Smiling is not very good. In fact, you want a gate oxide that's perfectly flat, has the same thickness as much as possible all the way across. It became non-flat because we did the re-ox, and a little bit of oxidation took place from the corners. So these days, for the very shortest channel devices, reoxidation is not such a popular thing to do, because it does create this nonuniformity in the oxide thickness. In fact, you can go back.
What I did was I went back here on page 40 and said, oh, OK, is there some way we can deal with this and maybe make the re-ox have a little less of this extra oxidation right at the corner? So this is really what we're talking about, just to give you a zoom-in on slide 40. Here's a zoom-in of what it is. This nonuniform oxide thickness took place under the gate. And the real origin is the lateral oxidation under the poly during the gate re-ox step. This is when the gate re-ox was done at 850 for 25 minutes. So in fact, from the very first lecture of class, this is a real TEM of a real device to show you this effect is real. See the way this device is smiling also? So SUPREM didn't just make that up. The oxidant did get under here and oxidize in this region. So the VT is going to be nonuniform across this device. So you have to actually watch out for this. This does really happen, just to show you that SUPREM is based in reality with respect to some of these things. On slide 43, I actually did a little example of how we can change the amount of this effect, the amount of this nonuniformity. It may not be very clear if I'm looking at this. But on the left-hand side, what was done was, when we did the etch, we etched all the way down, OK, and then did a re-ox. So on the left-hand side, you're etching like this. And here's your gate oxide like this. And we're basically etching in the example where we told SUPREM to act as if we etched all the way down, so the oxide was gone prior to re-ox. So when we put it in the furnace for oxidation, this oxide was gone at that corner. And that's what you end up with on the left-hand side. So you can imagine, when that oxide is gone, then you can get a fair amount of attack in here laterally, of the oxidant getting underneath there.
So what was done here on the right-hand side instead: 50 of the 70 angstroms were etched. So we left a lot of the oxide on there and then just oxidized it-- the re-ox was done at a lower temperature, 825, for a shorter time, 12 minutes. So you get some of the benefits of the re-ox without some of that extra oxidation. So in this example, when we did the etching, it didn't remove it all. It looked like this when it went into the oxidation furnace. It looked like that and then got oxidized. So, by refining the etch process, we didn't etch all the way down. And if you look carefully-- I think on the next slide, it'll be maybe a little more obvious-- the lateral nonuniformity is quite a bit reduced. This oxide thickness is much more uniform going across here. There's still a little bit of it, but at 825 for 12 minutes, there's a lot less of that lateral nonuniformity. So it's an example of a process you can use SUPREM-IV to optimize reasonably efficiently and reasonably accurately in this two-dimensional model. OK, so let me go on to summarize. The simulators, like SUPREM-IV-- which, by the way, I didn't tell you, but SUPREM-IV was originally written at Stanford. Sometimes that's called SSuprem. That's where it came out of originally, in the 1980s. It was then commercialized in a small company called TMA. That company was bought out by another company called Avant!. And Avant! later sold the technology to a company called Synopsys. Synopsys is a big design house. They make a lot of CAD software for designing chips. But they also support TSUPREM-4. So if you have questions about the simulator, you need to contact people at this company called Synopsys. There are other simulators out there. SUPREM-IV is one of the most popular, but there's another company called Silvaco, which makes a competing product called ATHENA.
In any case, simulators like this have been developed over the years to enable what we call physically accurate, or robust, or correct simulations of complex processes. However, I've tried to tell you to take everything with a grain of salt. Don't believe it just because you simulated it in SUPREM and it looks like that. For one thing, you might have done it wrong. You might have used the wrong grid factor. SUPREM itself has parameters in it that are unknown, that somebody just stuck in there. Some graduate student writing the program said, oh, I don't know exactly what this parameter is. I'll put this number in. It's rough. Well, you need to find out what parameters are in there. And if you go to the appendix in the manual, it'll tell you in general what all the numbers are that it's using. And you can decide whether you like those numbers or not. The nice thing about these simulations from SUPREM-IV or ATHENA: you can feed them into a device simulator, such as Medici or whatever, and then predict I-V and C-V characteristics, the actual electrical characteristics of the device. So they are designed to be coupled, the process simulator and the device simulator. And that's a very nice feature. There's a lot of new understanding being developed on issues related to TED and OED and other anomalous effects: low-energy ion implants-- I showed you some low-energy phosphorus where SUPREM doesn't do a very good job of modeling the data-- highly tilted implants with shadowing, oxidation of trenches. SUPREM can handle most of these situations because it has the physics built in. Exactly how accurate it is, that's always the question mark. Whenever you're running these simulators, keep in mind the basic physical models. There are a lot of parameters that must be known accurately, and we don't know them all accurately. So keep your eyes open. A big caveat is to keep the mesh, or the grid, in mind.
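That mesh caveat can be tested mechanically with a refinement study. Here's a minimal sketch, nothing SUPREM-specific: diffuse a unit-dose spike with an explicit finite-difference scheme on successively finer grids and watch the answer stop changing.

```python
def diffuse(nx, D=1.0, L=1.0, t_end=0.01):
    """Explicit finite-difference diffusion of a unit-dose spike.
    Returns the peak concentration at t_end."""
    dx = L / (nx - 1)
    dt = 0.25 * dx * dx / D              # well inside the stability limit
    steps = int(round(t_end / dt))
    u = [0.0] * nx
    u[nx // 2] = 1.0 / dx                # discrete delta, total dose = 1
    for _ in range(steps):
        un = u[:]
        for i in range(1, nx - 1):
            u[i] = un[i] + D * dt / dx ** 2 * (un[i + 1] - 2 * un[i] + un[i - 1])
    return u[nx // 2]

# Same problem on three grids; the change between refinements should shrink,
# and the peak should settle near the analytic 1/sqrt(4*pi*D*t), about 2.82.
coarse, fine, finer = diffuse(41), diffuse(81), diffuse(161)
```

If fine and finer disagree about as much as coarse and fine did, the original grid was not fine enough, and you refine again until the solution is independent of the grid.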
When your grid is fine enough, you should be able to get the exact same solution with minor changes to the grid. So I run it for a certain grid. I cut the grid spacing in half. The solution should look identical. If all of a sudden the solution looks much smoother when I cut the grid spacing in half or make it much finer, then obviously the original solution didn't have a fine enough grid. So you've got to do these sanity checks. Run it once with a certain grid. Then use twice the grid points. Does it look a lot smoother? Well, that means you probably didn't do a good enough job the first time. And then do it again. And eventually, it'll converge. It has about the same smoothness regardless of your grid, in which case your grid is probably fine enough. Your solution should be independent of the grid. Otherwise, it's not physically realistic. There are a lot of different process integration schemes. I just showed you one, just changing the order of the anneals. The simulators help us to understand these interactions between the various steps and how to optimize the overall technology in ways that you could never do if you just did this by hand. So you've used SUPREM-IV now. You're going to use it again in homework number 5. So I think it gives you some idea of how powerful this tool is. But if you are going to really use it in your research, make sure you read the manual. It is very important because there are a lot of little things in there you need to know about. OK, that's it for today. Homework number 4, please bring up front. Homework number 3 is in the orange folder in the back by the TA. And I guess that's it. We'll meet on-- oh, does somebody have the sign-up sheet for your final project? Oh, great. Thanks. Make sure you sign up.
MIT_6774_Physics_of_Microfabrication_Front_End_Processing_Fall_2004
5_Wafer_Cleaning_and_Gettering_Contamination_Measurement_Techniques.txt

JUDY HOYT: OK, so there are two handouts for today that are in the back there: the lecture notes and handout 8. And also, problem set number 2, or homework number 2, is going out today. Your homework 1s are graded, and I'll have them back next time. So this handout is a fairly long discussion. There are two major areas I want to cover in today's lecture. I want to finish up chapter 4. There was a section in chapter 4 we didn't get to talk about yet, which is about characterization. We talked all about impurities and how they can be issues and problems. I'm going to talk about how we characterize those impurities. And then we're going to skip chapter 5. Remember, chapter 5 is on lithography. And we have a whole one-semester course here at MIT on lithography, Hank Smith's class. I'm going to skip that. You're welcome to read it, of course. But if you don't, you should go ahead and start reading chapter 6 on thermal oxidation. I'm going to start that. In fact, the next three lectures are going to be on thermal oxidation. And I've listed here-- I won't go through them all; you can read them later-- some of the major topics from chapter 6 that we're going to discuss. So let's go on to finish up the characterization on slide number 2. This slide is supposed to give you an idea of the different surface analysis techniques and their detection sensitivity: how low a level of an impurity can they detect? And I apologize right off the bat because not all of these acronyms are defined. And it's hard when you haven't seen these before to see all these ridiculous-looking acronyms: Auger, or AES. We're going to talk about Auger in this lecture. Raman, FTIR, we'll talk about a few of these. These are techniques you can see just based on where they lie on this vertical bar.
The vertical bar is atoms per cubic centimeter. So here, 100% would be 5 times 10 to the 22. That's 100% of the silicon lattice atomic density. If you want to measure an impurity on the level of 1 to 0.1 atomic percent, that level, you would use Auger, or ESCA, or one of these types of techniques. Most of the time that's not going to be nearly adequate. It depends on what you're making. But we talked about needing to know impurity levels much, much lower than that, in the 10 to the 10th or so atoms per square centimeter-- that's if you integrate over some depth-- or 10 to the 15th per cubic centimeter on the left here. So the types of techniques people use for that are listed here in this lower shaded bar. X-ray fluorescence, which we'll talk about, and surface SIMS, these are the two most common techniques for measuring impurities. And you can see they measure them down to something like the tens-of-parts-per-billion range. So those are some of the options available to us. The next slide, slide 3, gives you a rough estimate of the depth of analysis, the depth from the surface, of some of these techniques. And I've circled the ones that will be of interest for this lecture. So X-ray fluorescence, for which the acronym is TXRF, typically samples the near-surface region, depending on how it's done. But it's in the near-surface region, say, the top 100 angstroms or so, roughly. Maybe it's a little less than that. There's a special technique called surface SIMS, secondary ion mass spectrometry. We'll talk about that. That's designed to, again, sample just the top 100 angstroms, just the surface region. And then there's a standard SIMS, or depth-profiling SIMS. You can profile pretty much as deep as you need to. Typically, you profile here in the 1,000 to 10,000 angstroms, or 0.1 micron to 1 micron. And this is a depth-profiling type of SIMS, whereas surface SIMS, as the name suggests, measures just what's in the very near-surface region.
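The two ways of quoting a contamination level on that slide, atoms per square centimeter and atoms per cubic centimeter, are related through the depth you assume the impurity is spread over. The conversion is just unit arithmetic:

```python
def areal_to_volume(areal_cm2, depth_angstroms):
    """Average volume concentration (atoms/cm^3) for an areal density
    spread uniformly over the given depth (1 angstrom = 1e-8 cm)."""
    return areal_cm2 / (depth_angstroms * 1e-8)

# 1e10 atoms/cm^2 spread over the top 1,000 angstroms:
conc = areal_to_volume(1e10, 1000.0)   # 1e15 atoms/cm^3
```

That's why the 10-to-the-10th-per-square-centimeter and 10-to-the-15th-per-cubic-centimeter numbers go together, assuming roughly a 0.1-micron sampling depth.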
So those three are important techniques. Slide 4 is meant to be a generic view-- and I took this out of your text-- of how surface contaminants are measured in silicon wafers. A lot of the time, you're taking a silicon wafer, which is shown schematically on the bottom, with some surface layer, however deep it might be, with some contamination level in it. And we interact that surface with some kind of beam. It could be an incident electron beam, as shown on the left. And we look at what comes off. It could be an incident X-ray beam or an incident ion beam, as in Rutherford backscattering or SIMS. So a lot of these are beam techniques. We shoot a beam into the surface. It has some energetic interaction. Something comes off. Could be an X-ray, could be an electron, could be an ion. And we analyze what comes off. That's the basic idea, very schematically. Let's go on to slide 5, which is about what happens when we bombard the surface of silicon with electrons. And what it shows very schematically is a plot on the vertical axis of the number of electrons coming off as a function of their energy. And you can see various humps here. At very low energies, there are a large number of electrons coming off. These are called secondary electrons. Maybe they're in the 5 eV range. These are the electrons that form the images if you look at a scanning electron micrograph. I've shown a couple of SEM micrographs. In fact, last time we had that gettering picture with all those little dots in it. And you could see those dots throughout the wafer, which were the oxygen precipitates. That was a scanning electron micrograph. So that was looking at these secondary electrons. At the very highest energies are backscattered electrons. Basically, the incoming ones just bounce off the atoms and get reflected back with an energy similar to what you shot the electron in with.
Neither the lowest-energy nor the highest-energy electrons tell you much about what you bounced off of. But there is a region of intermediate energies here, called the Auger electrons, which is shown in here-- there aren't as many of them-- that give very specific information about the species that they interacted with, because they interact with the core electrons of the atoms in the substrate, the ones that are more tightly bound to the nucleus. So here, at these intermediate energies, is where we can get some information about what element it was that the electron interacted with. So we go to slide 6. Again, this is a very schematic illustration of the energy levels involved in this type of process. So let's just go through the schematic. What's shown up here in the upper center is a primary electron, which is this black dot. It's coming in, and in energy space, it's interacting with an atom, let's say, down here that has a certain energy level. These energy levels, this Ek, El1, El2, are meant to represent the core energy levels of the electrons that are bound closer to the nucleus in the core. We've been talking all about valence electrons up here in the valence band and electrons in the conduction band. It doesn't interact with those, but it goes into the atomic core and interacts with the electrons that are there. OK, and basically, it kicks out one of these core-level electrons from this Ek energy, and this becomes a secondary electron. Now you have an open energy level. And one of the higher-level electrons, this one at El1, can drop down into that level. And the spacing between the L1 and the K level is very much a characteristic spacing of the atom. It depends on the element, OK? So you can imagine that this energy here, this transition energy, is very specific to the atom.
That transition energy is then, in this particular case, given off as kinetic energy to an ejected electron, which is called an Auger electron. That ejected electron then has an energy that's characteristic of this level spacing. So by measuring the energy spectrum of those outgoing Auger electrons, it tells me something about the atoms that were in the substrate, what those spacings were. And as long as you have a fingerprint for what those spacings should be, you can tell what atom the electron interacted with. So that's Auger electron spectroscopy. There's another way to do it where, instead of releasing an Auger electron, you may actually release an X-ray. This energy spacing between El1 and Ek can be given off to eject an X-ray. So for heavier elements you may also look at the X-rays coming off. And that will give you some species-specific information. So you get the qualitative idea here. I'm putting in some energetic particle. I watch transitions that happen in the core levels. And then I look at what comes off, either another electron or an X-ray. Now, in that last slide, slide 6, the incident particle was an electron. You don't have to use electrons, as it turns out. You just need something that can bring some energy into the atomic core. So here on slide 7, you can imagine the incident energy might be provided by an X-ray. It doesn't have to be an electron. So you may bombard the surface with X-rays instead. And there are two different categories here: XPS, X-ray photoelectron spectroscopy, where you bring an X-ray in that provides the energy and an electron comes out, or X-ray fluorescence, where you bring an X-ray in, and another X-ray is emitted with a characteristic wavelength.
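The Auger energy bookkeeping above can be checked with real numbers. In the simplest picture, a KLL Auger electron carries off roughly E_K minus E_L1 minus E_L2,3; real spectra shift by relaxation and work-function corrections that this one-liner ignores. The silicon core-level values below are approximate handbook numbers, used only for illustration.

```python
def auger_energy_eV(E_k, E_l1, E_l23):
    """Crude KLL Auger kinetic energy from core-level binding energies,
    ignoring relaxation and work-function corrections."""
    return E_k - E_l1 - E_l23

# Approximate silicon core levels (eV): K ~ 1839, L1 ~ 149, L2,3 ~ 99.
e_si = auger_energy_eV(1839.0, 149.0, 99.0)   # ~1591 eV
```

The measured Si KLL line sits near 1610 to 1620 eV, so the simple difference lands close, and the leftover offset is the relaxation term. The key point for identification is that the number is set entirely by silicon's level spacings, which is exactly what makes it a fingerprint.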
And what's happening here in this XRF, this X-ray fluorescence, is pictured schematically here on the left for an element that's relatively light, with fewer electron shells in its core levels, and on the right for one that's relatively heavy, which has lots of core levels. So you can imagine an incident X-ray coming in. It's knocking an electron out. And you get a transition between two core levels. And what comes off is an X-ray with some energy h-nu, with a characteristic wavelength. The problem with doing this with a light element is there aren't enough electron shells. So this process is not that favored, and it becomes difficult to do. For heavier elements, it's much more practical, heavier elements being the transition metals or things that are heavy, like silicon. There are lots of these core levels. You bring an X-ray in. It has a high probability of interacting, producing an X-ray coming out that has a characteristic spectrum. So you see, if you plot the X-ray intensity coming out versus energy, you see these peaks. And this spectrum becomes a fingerprint of what elements the X-ray has interacted with. So if we go on to slide 8, this is a very common technique that's used in semiconductor analysis. I'm showing here a typical total X-ray fluorescence, TXRF, spectrum. And the plot shows on the y-axis the intensity of the X-rays coming off in counts per second, detected as a function of the X-ray energy, that h-nu, or whatever you want to call it, that's coming off. And again, these energies are going to be characteristic of the core levels of the atom. So next to each peak-- the person who did the spectrum actually is familiar enough that they were able to identify the peaks. For instance, this peak right here, a little above 2 keV, corresponds to chlorine.
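Identifying which element a peak belongs to is, to a first approximation, an application of Moseley's law: the K-alpha energy grows as (Z minus 1) squared. A rough estimate, good to a couple of percent for these elements:

```python
def kalpha_keV(Z):
    """Moseley's-law estimate of the K-alpha line energy for atomic number Z:
    E ~ (3/4) * 13.6 eV * (Z - 1)^2."""
    return 0.75 * 13.6 * (Z - 1) ** 2 / 1000.0   # keV

cl = kalpha_keV(17)   # chlorine: ~2.6 keV, the peak a little above 2 keV
cr = kalpha_keV(24)   # chromium: ~5.4 keV
fe = kalpha_keV(26)   # iron:     ~6.4 keV
```

Those estimates line up with the assignments in a spectrum like this one: the chlorine peak a little above 2 keV, and the chromium and iron peaks in the 5-to-6.5 keV range.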
And you can integrate the area under that peak and get a rough estimate of the number of chlorine atoms per square centimeter that exists on that surface. Silicon, of course, is going to be large because it's a silicon surface. There's a lot of chromium on this sample-- look at the peak labeled chromium-- and a fair amount of iron. So the nice thing is, it gives you a broad spectrum. You can just shoot the X-ray onto the surface in about a 1-centimeter area. And you can immediately survey the transition metals on that surface. It's a very useful and commonly used technique. So let's go on to slide 9. So this TXRF, what are some of the good things about it? Well, I just showed you, it's a survey technique to get a broad idea of what different elements are on the wafer. And it can be automated. It's reasonably quantitative. We can integrate those peaks and compare them to standards. It doesn't destroy the wafer. You notice the X-ray just comes in, interacts, and goes out. It's not like I have to sputter anything off; I don't even have to break the wafer. You can do it on an entire wafer. They have machines for 12-inch wafers you can put in there. And they'll tell you what impurities are on that wafer surface. So that's really nice. A weakness: it doesn't detect low-Z elements, because of the physics-- there aren't enough core-level shells. So aluminum, potassium, and sodium, which are very nasty, and silicon can't be detected. So that's a major limitation. It also doesn't tell you anything about the depth distribution. And it's a pretty big spot. About 10 millimeters is the smallest you can focus the beam down to. So you're looking at large spots. That's OK. But if you want to look, let's say, inside an individual device and see what impurities were in a bad device, if you had a failed device, TXRF is not the way to go, because the device is much smaller than 10 millimeters.
That's the whole die on that wafer being passed around. And the surface has to be specular and polished. If the surface is patterned-- like on this wafer here, you see some patterns; it's not perfectly specular-- you're going to have a problem because you're not going to get the right kind of reflectance. So it's typically done in an area that's relatively well polished. OK, there is an alternative technique, which actually is quite complementary. Where TXRF has weaknesses, surface SIMS has some strengths. So people often use both of them. This is an example of an ion beam technique. So instead of bringing in X-rays, I'm bringing in ions. And the ions that come in are called the primary ions or the primary particles. The two typical ones to use are either cesium ions or oxygen ions. So you have a primary beam of particles. The energy varies. This says 10 keV. In some cases, you can bring this energy all the way down to 1 keV. And there's an advantage in bringing the primary energy down lower. What you see is this primary beam comes in, and what does it do? Well, it interacts with the solid, the surface. And of course, the ions penetrate in, and they get implanted to some depth. The depth that they penetrate depends on their energy. So if I'm at 10 keV, I may go down to 100 angstroms. Now, of the atoms that are sputtered off, some of them are ionized. And that's what we analyze in the machine. We measure the atoms that come off. They only come off the top 10 angstroms or so, because that's the escape depth from which they have enough energy to get out. So that's good. We can sample the top 10 angstroms. But notice, in doing the measurement, like Heisenberg, I disturbed the crystal. I shot a bunch of ions in here, and now they're down here at 100 angstroms deep. And they knocked things around in the substrate.
So whatever atoms were down there, they disturbed them. They moved them around. They might have knocked them in deeper. So if I can lower this primary particle energy down to one keV, they won't be knocked in so deep. And they won't disturb as much of the crystal. So you get better depth resolution in general as you bring the primary ion beam energy down. Basic idea is you have a primary ion beam. It comes in. It sputters off a certain number of monolayers in a certain period. And you measure the ions that come off the substrate. And you count how many ions are of silicon, how many coming off are copper, how many are gold, whatever. And you get an estimate. You can get a spectrum of what's coming off. You can't count too many because it has to go through a spectrometer. And the spectrometer can only count so many at one time. But that's the basic idea of secondary ion mass spectrometry. If we go on to slide 11, I actually took this off the website. One of the commercial places that does a lot of this analysis commercially is called Charles Evans and Associates. And if you go on to their website-- you just search on Google. You can get to their website. And they have a lot of information on these techniques. I won't go into slide 11 in any great detail. I'll let you read it later. I just put it up there to show that companies have done a lot of work in order to quantitatively be able to measure in the top, say, 10 to 20 nanometers or 100 to 200 angstroms, quantitatively measure the contaminants in that surface region. It wasn't always possible to do that. But they have nice techniques to do that now. And if you're interested, you can do more detailed study on the web. Let's go on to slide 12, which lists some of the key features of surface SIMS. Well, it's been approved by ASTM for measuring some of the elements and exactly some of the elements that the TXRF can't measure. So that's nice. 
It can quantitatively measure contamination for sodium, aluminum, potassium, iron, on into [INAUDIBLE] silicon. And it can measure lots of the elements. It's actually been approved as a standardized technique. It's quite accurate. It can detect many elements and isotopes. And its detection limits are quite low. Look at these numbers: 10 to the eighth to 10 to the ninth atoms per square centimeter for most metals. Now, why do we care? Well, what was on the ITRS? When you read your ITRS in your homework, the number of atoms per square centimeter that people were interested in was what order of magnitude? 10 to the 10th, exactly. So you'd better be able to measure below that. Otherwise, you can't even tell what you have. So it's right in the range where we need to be measuring for the ITRS. It also can give you information about the profile in depth. And that's kind of nice. If we go back a couple of slides to slide 10, what happens is you collect these ions as they come off. And you measure them as a function of time. In the first couple of seconds, you sputter off the first few monolayers; in the next few seconds, you sputter deeper. So you're constantly sputtering the surface at a constant rate, and you're measuring what comes off as a function of time. So by collecting this as a function of time, you know pretty much exactly what depth those ions came from. So if you want to sputter a little deeper, you can get some information about the profile. And finally, going back to slide 12, it has a relatively small detection area, say, 50 by 50 microns. Still not that small compared to the size of a device. But it's better than 10 millimeters. So you can put special structures on the mask with this kind of size area that are dedicated for measuring the contamination on the chip, if you're doing research and you want to be able to do that characterization. Let's go on to slide 13.
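The depth-profiling arithmetic just described is simple: depth is the sputter rate times time, and raw count rates are converted to concentration with a relative sensitivity factor (RSF) calibrated against an implanted standard. All the numbers below, the count rates, the sputter rate, and the RSF, are made up purely for illustration.

```python
def sims_profile(times_s, counts_imp, counts_matrix, rate_nm_s, rsf_cm3):
    """Convert raw SIMS count rates vs. time into concentration vs. depth,
    using a constant sputter rate and the standard RSF relation
    C = RSF * (impurity counts / matrix counts)."""
    depths = [t * rate_nm_s for t in times_s]
    concs = [rsf_cm3 * ci / cm for ci, cm in zip(counts_imp, counts_matrix)]
    return depths, concs

# Hypothetical data: three time points, constant matrix signal.
d, c = sims_profile([0.0, 10.0, 20.0], [100.0, 50.0, 10.0],
                    [1e5, 1e5, 1e5], 0.5, 1e22)
# d is [0.0, 5.0, 10.0] nm; c[0] is 1e22 * 100/1e5 = 1e19 cm^-3
```

The RSF itself comes from profiling a standard with a known implanted dose, which is part of the quantification work that commercial labs have put into these techniques.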
What I'm showing here, again, I took off of the Charles Evans and Associates website. And it's a little bit dated. It's a few years old, but it gives you a rough idea of comparing total X-ray fluorescence, TXRF, in these columns on the left, to surface SIMS, in terms of the detection limit, which is the lowest concentration that either technique can measure. Just for comparison, let's look at potassium here. TXRF doesn't measure it very well. The lowest it can measure here is about 20 times 10 to the 10th, or 2 times 10 to the 11th, not all that sensitive. At least a few years ago, that was the limit. Maybe they've improved it. But look at the detection limit using surface SIMS. It's 0.01 times 10 to the 10th, or 10 to the eighth per square centimeter, much more sensitive. So it gives you an idea of what the detection limit is. So if I know that I'm concerned about potassium, I would shoot for SIMS, not TXRF. So it's a nice quantitative comparison for the different elements. So I went through that relatively quickly, I admit. But you can read about the characterization techniques in your textbook. In chapter 4, they're covered. And I would also encourage you, if you're doing research in this area and you need to know more, go to the Charles Evans website. They are a commercial vendor. They sell characterization services as a business. But they're the world's experts as a result. So it's a nice place to get information. So I want to move on now, starting with slide 14, and talk about oxides and the IC industry. And I think we hinted at this in the first lecture or two. But really, it's the silicon-silicon-dioxide interface and its nearly perfect properties that are the number one reason why silicon dominates integrated circuits today, compared to germanium, compared to gallium arsenide or any of the other semiconductors. And these are some properties of silicon dioxide, listed here in this bullet list, that are quite desirable.
It's easy to selectively etch silicon dioxide and to pattern it. It also masks the diffusion of a lot of common impurities, which is a really great thing. You can put down SiO2 and pattern it, and some impurities will not diffuse through it. It has great electrical properties. It's a very good insulator, at least when it's thick enough. It has a high breakdown field. It tends to passivate junctions very well. It has a very stable and reproducible interface. For any of you who have ever tried to make oxides of this quality on other semiconductors, like germanium or gallium arsenide, you immediately appreciate the properties of silicon dioxide, because on those other semiconductors it's 10 PhD theses to try to get a decent passivation. So we were very, very lucky coming upon the silicon-silicon-dioxide interface. It's really unique. On slide 15, this is a little bit dated, but it just shows some of the uses of silicon dioxide in IC technology. And there are two columns here. On the left, I'm showing thermally grown oxides, and we'll talk about that process in the next few lectures. On the right are deposited oxide layers. These are put down on the wafer by a process called chemical vapor deposition, rather than grown by consuming the wafer. And we'll talk about those a little bit later in the course. These deposited oxides are different. They're usually not used for layers below about 10 nanometers. They're harder to control. And the properties of the silicon-silicon-dioxide interface for a deposited, as opposed to a grown, oxide are not nearly as good. But you still need them. You need them in the back end between metal layers, and you need them to do masking. So they're very important. But we're going to focus now on thermal oxides. And this just shows the thickness and the type of oxides that are used. The thickest oxides are used in the field. The field surrounds the active device region.
So it isolates individual devices. Thinner oxides, like 100 angstroms, are used for gates or pad oxides. In fact, I put a little arrow here and showed it going down. Gate oxides today in the highest performance devices are quite a lot thinner than when this textbook was written. They're actually more like 20 angstroms. So they're actually down in this range of tunneling oxides. And then finally, the very thinnest oxides are those chemical oxides that result from the RCA clean, as we talked about last time, that are grown chemically rather than in a hot furnace. Let's go on to slide 16. The quality didn't come out all that well, but this is a high-resolution cross-section transmission electron micrograph. And what we're looking at here on the top is the polysilicon gate. This very thin layer right here, labeled gate oxide, is 1.2 nanometers thick, or 12 angstroms. That's the gate oxide that's been thermally grown. And at the bottom is the silicon. And in fact, it's the formation of these gate insulators that is probably the most critical application of the process of silicon thermal oxidation in ICs today. We'll talk about thermal oxidation to grow thicker oxides, field oxide, et cetera, which is, of course, of interest. But in terms of the most critical application of thermal oxidation, it's really to form the gate insulator. However, remember we said that silicon dioxide is a perfect insulator. If we go on to slide 17, there's some basic physics, though, that we can't get around, which has nothing to do with silicon dioxide. It's just the fact that there's something called quantum mechanical tunneling. If you have two regions, one on the left and one on the right, in the middle there's an insulator. There's a barrier, an energy barrier.
If you've taken quantum mechanics, you know that when that energy barrier becomes thin enough, depending on its height, there's a finite probability that electrons will find their way through that barrier. They will tunnel through. They don't go over. They go through. And this isn't a problem when the barrier is thicker. But when you get down to this regime, for example, below about 30 angstroms, you can have direct tunneling right through from the gate electrode into the channel. And this constitutes a gate current. The whole idea of CMOS is that there's no current through the gate insulator, and that's not true anymore. There is current flowing through these gate insulators because they're so thin. And this is just a diagram I took from an older paper now. The vertical axis is a log scale showing the gate current density, so that's how much current is going through that insulator, as a function of gate voltage. As I increase the gate voltage, of course, that goes up. And the different parameters shown here are the thicknesses of the SiO2 layer. So here at the bottom, it's showing 36 angstroms, you see. And you can see it goes up exponentially. Below about 30 angstroms, we're going up exponentially. Here, going just from 29 to 25 to 20 angstroms, we're going up five orders of magnitude in just five angstroms in the amount of current that flows through that oxide. So it's an exponential process. It's very, very highly dependent on [INAUDIBLE]. As we go thinner, say to 15 angstroms, we're really getting quite a bit of current tunneling through. And that's a major issue in modern technology. So as a result, if we go on to slide 18, what are people doing? Well, you have a couple of options. One is don't scale the oxide. Don't make it so thin. Stop scaling it. Well, it turns out that's an issue because we need to get higher current drives. And one way we do that is to make the oxide thinner.
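To get a feel for how steep this exponential is, you can turn the numbers just quoted, roughly five orders of magnitude over five angstroms, into a decades-per-angstrom slope and extrapolate. This is only a back-of-the-envelope sketch of the trend, not a real tunneling model.

```python
import math

# Rough empirical slope from the lecture's plot: ~5 decades of gate
# current as the oxide thins from 25 A to 20 A. Illustration only;
# the full direct-tunneling expression is much more complicated.
decades_per_angstrom = 5.0 / (25.0 - 20.0)   # ~1 decade per angstrom

def current_ratio(t1_angstrom, t2_angstrom):
    """Estimated factor by which gate current grows when thinning t1 -> t2."""
    return 10 ** (decades_per_angstrom * (t1_angstrom - t2_angstrom))

# Thinning from 20 A to 15 A gives roughly another five orders of magnitude,
# consistent with the "quite a bit of current" remark at 15 A.
print(current_ratio(20, 15))                      # ~1e5
print(math.log10(current_ratio(25, 15)))          # ~10 decades total
```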
So if we want higher performance, we want to make the oxide thinner. On the other hand, the gate insulator is then no longer a perfect insulator. So there's another idea. To achieve higher current drive, we need to increase the capacitance. Usually, we just decrease t-ox in the formula shown on the slide. But the other thing you can do is increase the dielectric constant of the insulator. So that's the name of the game people have been looking at and doing research on for the last five or 10 years. What other insulators can we make besides SiO2 that have all the great properties, but have a somewhat higher dielectric constant, k-new, which is shown in this formula here? One way to increase the dielectric constant is by adding a little nitrogen to the insulator, and people use oxynitrides. Or you might go to a whole new material. So what people do is, for the same gate capacitance, define an equivalent gate oxide thickness called t-ox equivalent, according to the formula on the bottom. So what is t-ox equivalent? t-ox equivalent is equal to k-ox, the dielectric constant of silicon dioxide, which is 3.9, divided by the dielectric constant of my new gate insulator, my high k, times the physical thickness of that new insulator. So let's say you went from a k of about 4 up to 8. That means, basically, you can double the physical thickness of the high-k layer and get the same capacitance. So I'm upping k-new, so I can increase the thickness, which means your quantum mechanical tunneling is going to go down. You'll still get the high capacitance and the high gate drive. So it sounds perfect. From a mathematical point of view, physicists will look at this and say, perfect. Increase k, increase t, everybody's happy. The problem is it's hard to come up with a material that has a higher dielectric constant and also has all the perfect and ideal properties of the silicon-silicon-dioxide interface.
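The t-ox equivalent bookkeeping is easy to sketch in a few lines of Python. The k value of 7.8 below is just an illustrative stand-in for "twice the k of SiO2," not a specific material.

```python
# Sketch of the equivalent-oxide-thickness (EOT) formula from the slide:
# t_ox_equivalent = (k_SiO2 / k_new) * t_physical, with k_SiO2 = 3.9.

K_SIO2 = 3.9

def eot(k_new, t_physical_nm):
    """Equivalent oxide thickness of a gate insulator of given physical thickness."""
    return (K_SIO2 / k_new) * t_physical_nm

def physical_thickness_for_eot(k_new, target_eot_nm):
    """Physical thickness of a high-k film that matches a target EOT,
    i.e., gives the same capacitance as SiO2 of that thickness."""
    return (k_new / K_SIO2) * target_eot_nm

# Doubling k (3.9 -> 7.8, an illustrative value) doubles the allowed
# physical thickness at the same capacitance, so tunneling drops sharply.
print(physical_thickness_for_eot(k_new=7.8, target_eot_nm=1.2))  # 2.4 nm physical
```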
In fact, such a material has not been discovered yet, despite the millions and millions of dollars that have been spent combing the periodic chart for one. So if we go on to slide 19, let's look at what some of the future projections are for the scaling of the gate insulator. And I should say gate insulator in this slide because we're not sure-- it won't be SiO2. It'll be an oxynitride with a higher k, or maybe a new material that is yet to be figured out. And what I'm showing here, each column corresponds to a different year, just like we usually do on the ITRS. And what I wanted to focus on is a couple of interesting rows. This row right here, if I highlight it, says equivalent physical oxide thickness for microprocessor units. So this would be for high performance. So this is t-ox equivalent. In 2004, for high-performance devices, it's supposed to be about 1.2 nanometers, or 12 angstroms. So the high-performance technology is supposed to be about 90 nanometer technology, 12 angstroms. That's how thin we are. And look at the current. The gate dielectric leakage, the maximum current tolerable in such a device, is about 170 nanoamps per micron. So it's pretty high current. So that's for the highest performance device. And if we go out in that same row to later years, look what happens. By 2006, we end up in yellow. Yellow means people have an idea how to do it, but it's maybe not totally manufacturable. But there are some ideas. And if we go beyond that, 2007, it's all red. So we want to achieve equivalent oxide thicknesses of 0.9, 0.8, and 0.8 by 2008, and people just don't really have the right material at this point. So there's a lot of research in this area. So what people do instead, since we haven't figured that out yet, is make different types of devices to reduce the total amount of power that's burned on the chip. You have a very high-performance device that's quite fast.
But it has a reasonably thin physical gate oxide thickness. Then you can also, on the same chip, go down here in this region that I've circled, where it says equivalent physical oxide thickness for low operating power. So this is a device that burns less power. Notice, the oxide is thicker. It's 1.5 nanometers instead of 1.2. And the maximum tolerable gate leakage is on the order of 1 nanoamp per micron instead of 170. So this device is supposed to be lower operating power. You make the oxide a little thicker, and you change the VT to achieve that. Finally, if you go down to the row for the low standby power device, it has an even thicker oxide. So on the same chip you might have a 2.1-nanometer, or 21-angstrom, oxide. And the leakage current through the gate is much lower for this device, about three picoamps per micron. So three orders of magnitude lower than for the other device and five orders of magnitude lower than for this one. So quite a bit different on the same chip. This is the solution people are coming up with. Only the fastest devices have to have the very thinnest oxides. Nevertheless, in a couple of years, we will be hitting the red brick wall. So there's a lot of work to be done on alternative gate insulators. Let's go to the next slide. I've also taken this from the 2003 ITRS. It's a little bit complicated and not the prettiest plot in the world. But it does show a couple of interesting things. It's a plot of the leakage current density, JG, through the gate. So this is the current going through the gate oxide as a function of year, on the left axis. And you can see that this red line represents the JG limit. This is what device designers would like. No higher than this amount of gate current is what they can tolerate for a high-performance device. This black curve is what people project for a dielectric called oxynitride. So this is not pure SiO2, but SiO2 where a certain amount of nitrogen has been added.
And you can see the gate current on the left that is being projected. It looks something like this black curve. It's increasing exponentially. And somewhere around 2006, 2007, these curves cross. So beyond that, with oxynitride, you cannot stay on the same scaling curve. It won't satisfy the gate leakage requirement. It will leak too much through the gate. So beyond 2007, it's not clear that people can use oxynitride. They have to come up with something else. This brown curve up here that's marked EOT, the equivalent oxide thickness, is referenced to the right-hand vertical axis, which shows the equivalent oxide thickness in angstroms. So this is exactly the data I showed you, going down. Starting in 2003, we have an EOT of 13 angstroms on the right. And it's going down and down, so that by 2007 or so, the EOT is nominally 9 angstroms. So this is the way people would like to scale. But clearly, in the next few years, oxynitride is not going to work. You're going to need some other material, like hafnium oxide or one of the silicates that people are studying. And one more thing-- go to the next slide, slide 21-- one more thing I just wanted to point out about alternative gate dielectrics. I took this from an IEDM, International Electron Devices Meeting, publication from about two years ago. There's been a lot of data. I don't mean that people have searched the entire periodic chart, but there's a lot of work that's been done by different people to study different materials. And this is just a way of summarizing them. One parameter people look at is the current density that leaks through the gate as a function of the EOT, the equivalent oxide thickness. And I won't go through it in detail. But the point is that a number of different materials are under consideration. I think right now the most popular are hafnium oxides and hafnium silicates, HfSiON.
But this is an example of a topic you might want to consider if you're interested in doing some of the library research for your final report: high-k dielectrics. What some of their properties are and how they're formed is an interesting thing to study. So given that background information, I want to go on to slide 22 here. And even though I've just finished telling you that in the future pure SiO2 probably won't be used, it's still the most important-- there's nothing that's ever beaten it, so it's the standard. So we're going to study the process of thermal oxidation in this course. And this shows, very schematically, the basic process. We have an oxide that's growing at a high temperature on a silicon wafer. And what happens is an oxidant, such as oxygen or water, diffuses through that oxide and reacts at the surface. New oxide is formed right at this interface right here. So we have a chemical reaction taking place at the interface. These are examples of the chemical reactions: silicon plus oxygen going to SiO2, or silicon plus moisture going to SiO2. These are the reactions that take place at that interface. And we'll spend, not this lecture, but probably the next lecture talking in detail about mathematical models for that process. But before we get into all the detailed mathematical models and the Deal-Grove model, there are a few basic properties of the process we should think about. In fact, when the silicon surface is oxidized, there is a volume expansion associated with that. And this schematic picture shown here in the upper right is something I took from chapter 6, figure 6-39. What it represents is a cross-section in depth of the crystal, where these open circles here are meant to represent the silicon atoms in a lattice. This interface right here is meant to represent the silicon-silicon-dioxide interface.
And this upper region here, which has both the black atoms, which are oxygen, and the white open circles, which are silicon, is the SiO2. So here's the interface. What it shows schematically is that, in order to oxidize this surface, the silicon bonds have to be broken, because the silicon atoms are bonded to each other. Oxygen atoms, these little black atoms, have to be inserted in between the silicon to form Si-O bonds, such as shown here. And so that whole area has to expand to a certain extent because of the room taken up by the oxygen atoms. So there is a volume expansion associated with it. And the lower cartoons show schematically what that expansion looks like. So let's say I take a unit volume of substrate that's a cube, 1 by 1 by 1 in all three dimensions, shown schematically here on the left. Just take that cube and expand it 30% in all three directions. So this blue cube in the center is bigger by 30% in each direction. This would be if it were unconstrained, no constraints, just in free space. But actually, the substrate restricts the expansion to one dimension, right? I can't just take the cube and expand it in the xy plane, because I've got the substrate holding on to it. So in fact, the expansion only takes place in the vertical direction. So for every one unit of silicon consumed in the vertical direction, we have basically a height of 2.2 units when it actually grows the SiO2. So the substrate restricts this to one dimension. So what does it look like? What does an oxide look like when it's grown, especially an oxide grown in a complicated two-dimensional fashion? We're going to spend a lecture talking about this. But this is a pretty picture. It's a scanning electron micrograph of something called LOCOS.
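The cube arithmetic above can be checked in a couple of lines: a 30% linear expansion, cubed, gives about a 2.2x volume increase, and the substrate forces all of it into the vertical direction.

```python
# Quick check of the expansion numbers in the cube cartoon.

linear_expansion = 1.3            # ~30% per axis if unconstrained
volume_ratio = linear_expansion ** 3
print(round(volume_ratio, 2))     # ~2.2

# Constrained to one dimension: for every unit of silicon consumed
# vertically, you get ~2.2 units of SiO2 height.
def oxide_thickness(silicon_consumed):
    return volume_ratio * silicon_consumed

# Equivalently, the silicon consumed is ~0.46x the final oxide thickness.
print(round(1 / volume_ratio, 2))  # ~0.46
```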
I think in the first lecture we talked about local oxidation of silicon, where we put a nitride mask down on the chip where we don't want the oxide to grow. And everywhere outside the nitride mask is where the field oxidation takes place. So this is an actual example of field oxidation taking place and what it looks like. This rectangular bar marks the location of the silicon nitride mask. That's the region on the chip where the nitride was there protecting the surface against oxidation. The gray region down here is the silicon substrate. In this region marked SiO2, you can see the volume expansion. It's expanded compared to the original silicon surface, which is shown by this white dashed line, because we know oxidation involves a volume expansion of about 2.2x. So this occupies about 2.2 times the volume of the silicon, from here to here, that was actually consumed. So you're consuming the substrate when you're doing this. Interestingly, though, in two dimensions and in these three-dimensional structures, stress plays an important, dominant role. We're going to talk about that in this course. So look at this little bird's beak. They call this a bird's beak because it's squeezing out like this and it looks like a bird's beak. And sometimes they call this the bird's head. You have to have a good imagination in order to see that. But the bird's beak shape is actually due to lateral oxidation, some of the oxidant diffusing in here under the nitride mask. And its shape is also controlled by the nitride mask pushing down on it. The oxide is trying to grow; it has to push up on the nitride, and that takes some force. So this whole shape and the amount of lateral encroachment are very much dependent on the stress effects. And we'll talk about that in the next couple of lectures. So complex shapes can be formed by local oxidation. We'll go on to slide 25. This is a very high-resolution transmission electron micrograph.
And again, on your handouts, it probably doesn't look very good. But all those little dots, which you see in rows, correspond to pairs of atoms in the silicon lattice. So these are planes here. And they're all very regular because silicon is a crystalline material, right, single crystal, perfect structure. SiO2, however, is amorphous. It has no long-range order. It has some short-range order but no long-range order, even though it's growing on a single crystal substrate. It's a glass. If we go on to slide 26, there are no crystalline forms of SiO2 that match the lattice size of silicon. There is a little bit of short-range order in thermally grown SiO2. In fact, if you look at this schematic on the left, it shows that there's a tetrahedral type of structure. You see a silicon atom in the center and four oxygen atoms bonded to it nearby, forming a tetrahedron. These tetrahedra, though, are not actually bound into any regular structure. They may form a ring structure a little bit by sharing oxygen atoms. But you notice, this lattice does not repeat itself throughout space to form a single crystal. So there is short-range order in the tetrahedra. But it is an open network, and it doesn't have long-range order. So let's go on to slide 27 and just talk for a few minutes about stress in oxide. We're going to spend a lot more time talking about it in the next lecture, but just to make the point: when oxide layers grow on silicon, they're under compressive stress. And it's for a couple of different reasons. But basically, because the growing oxide is constrained by the interface that it's growing on, constrained by the substrate at this interface right down here, it can only expand upward. And this gives you an idea, 5 times 10 to the ninth dynes per square centimeter, for the amount of compressive stress that this oxide is feeling when it's grown on the wafer.
Now, at high enough temperatures, say, if you put it in the furnace to oxidize above 1,000 degrees or to anneal it, the oxide can relieve some of the stress. The oxide can actually flow a little bit by viscous flow. But that's only at a high enough temperature, where the glass starts to soften a little bit. At lower temperatures, it's just too viscous to flow. So the stress stays built in. So there are these intrinsic stresses just because of the volume expansion. There are also differences in the thermal expansion coefficients of SiO2 and silicon. And that leads to a certain amount of intrinsic stress. Both of these effects are going to put the silicon in tension. So the silicon wafer at the surface is sort of being pulled by the oxide, which is being compressed. So if you were to grow an oxide on both sides of a wafer and strip it off the back, the wafer would actually be slightly bowed, ever so slightly, nothing that you could see by eye. But you can actually measure it in a special laser apparatus called a wafer curvature measurement. So there are ways of detecting the amount of stress just by measuring the curvature of the wafer when you remove the oxide from one side and leave it on the other side. That's how people determine some of the stress. Let's go on to slide 28. This is very schematic, and I apologize for this cartoon. If you've ever actually been in a fab clean room and you know what an oxidation furnace looks like, it doesn't look exactly like this. It's an artist's conception. The equipment is actually relatively simple in concept. It's essentially a quartz tube, which is shown here by this black line. So it's a cylinder. It's a long quartz tube into which one can push a boat, a quartz carrier, with wafers standing up in it. So all these wafers are put vertically in the boat in little slots, and you have a whole bunch of them sitting right next to each other, within a few millimeters of each other.
You have the whole stack of wafers. So you can oxidize 25 or 50 or 100 wafers in a furnace at a time. You essentially just flow in oxidants, like oxygen, or oxygen and hydrogen to create water vapor and moisture. You heat the whole thing with resistive heaters. So the quartz gets very hot, and the wafers get very hot. And then the oxidant goes out the back side. So it's relatively simple. In practice, though, some modern furnaces or oxidation systems are a little bit different. They also have vertical furnaces these days, where the wafers sit vertically like this, to help avoid warpage and things like that. There are rapid thermal oxidation systems and fast-ramp furnaces. In fact, on the next slide, slide 29, there is an example of, not an oxidation furnace, but a system in which people grow SiO2 and oxynitrides by a process called rapid thermal processing, or rapid thermal oxidation. This is a photograph I took off the Applied Materials website. It's a little bit fuzzy because I blew it up. But this is a particular system called the Centura rapid thermal processing system. And it's a single wafer system. You can see it's like a clamshell. The clamshell has been opened up. A wafer goes in and sits here. In fact, you can see where a wafer might sit. The clamshell would be down, and the infrared lamps-- you can see them glowing-- heat up the wafer very rapidly, within a few seconds. It's in a little quartz chamber with flowing oxygen or whatever. It heats up, the reaction takes place, and then it cools down. So this is so-called single wafer rapid thermal processing, a method of making oxide. So you don't have to necessarily do gate oxides in an old-fashioned furnace anymore. You can also do them in these new pieces of equipment. So I want to go on to slide 30 now. And I want to talk a little bit about some more theoretical things, which is the silicon-silicon-dioxide interface.
This is important from the point of view of understanding electrically the quality of what you've produced. And it does feed into how we actually do the processes to grow thermal oxides. So this is a cross section here in pink. On the bottom is supposed to be the silicon, and the white region is the SiO2. Back in the 1980s, Bruce Deal suggested this picture of the electrical defects that might exist, the evidence for which had been observed experimentally. And so there are different types of charges here that we're going to talk about: Qit, Qf, Qot, and Qm. On the next slide, I'll define those. One point to make, though, is that to first order the interface is perfect. We're going to focus on defects because we're all neurotic, and we're interested in defects. But just to stand back for a moment and think, it's perfect in that only about one in 10 to the fifth atoms has a defect. So if you were to go onto that surface and walk around at the interface and count the number of atoms where there was an unsatisfied or broken bond, you would count 1 for every 10 to the fifth. So that's pretty darn perfect. It's still not perfect. It's not as perfect as we'd like to make it. We always want better. But it's pretty darn good compared to any other semiconductor-insulator system. So as a result, the defect densities at the interface we measure are in the range of 10 to the ninth to 10 to the 11th defects on a per square centimeter basis. Remember, the surface atom density is about 10 to the 15th. So again, it's about 1 in 10 to the fifth, something like that. So let's go on to slide 31 and talk about these different charges. So first, there are basically these four types of charges associated with the insulator itself or the semiconductor-insulator interface. The first one you may have heard of, called Qf, or Q sub f, is the fixed oxide charge. It is represented here by these little positive plus signs.
It's a sheet of positive charge, and it's supposed to be within about 2 nanometers of the interface. So you might ask me, well, what happens when you make an oxide that's less than 2 nanometers thick? What happens to the fixed charge? That's a good question, and it's not exactly clear. But the fixed charge has a very unique characteristic: it's always positive. And we'll talk about why people think that is. And you can reduce its concentration by heat treating the oxide after it's grown, at a certain temperature. So after we grow an oxide, we very often put an anneal into our recipes to reduce the fixed charge to as low as it can go. That's Qf. It's always positive. And it's fixed in the sense that, after you've done the process and you have the chip, it doesn't change depending on how you bias the chip. That's not true of Qit. Interface trap charge, Qit, as pictured here by the little x-- x meaning we don't know its sign-- can be positive, neutral, or negative. And it may change during normal device operation as you change the bias, because electrons or holes can be captured. So it has a behavior similar to bulk deep levels, the way iron and nickel can trap electrons and holes, as discussed in chapters 1 and 4. So Qit is not fixed. It changes with time depending on how we bias, and we'll talk about how it changes with bias. Third type, Qm, mobile charge-- well, as the name suggests, these ions are mobile. That is, they can move in the oxide from the top surface to the bottom interface. They can move up and down upon application of an electric field, because they're ionized. So if you heat the oxide, say, to 100, 200, 300 degrees and you put an electric field on it, you can actually cause the sodium and potassium ions to move from here to here and back. Mobile ions are less important in modern manufacturing because clean rooms are so clean.
People are never allowed to touch anything. Unless you're in a research lab where people are sloppy or something like that, you don't really worry too much about mobile ions. We do worry about them here, of course, because at MIT we're in a research fab, and we have a lot of students. But in a real fab, in a manufacturing sense, people are very, very careful about mobile ions. So usually, the mobile ion charge is relatively small unless you've made a mistake. Fourth type, Qot, oxide trap charge, is represented right here. It's in the bulk of the oxide. Notice, it's drawn up here. It's not at the interface. It may be created by other processes that the oxide was subjected to after it was grown. The oxide might have been subjected to plasma etching, and the plasma could put traps in, because the plasma has energetic ions going into this thing. Or the oxide may capture electrons or holes that are injected into it during device operation. So the energetic electrons may be coming along the surface and, boop, get popped up by some scattering event and create oxide trap charge. So it's in the bulk. And it may depend, not only on how the oxide was grown, but on what happened to the oxide afterwards. So those are the four characteristic types, and we can see their signatures by doing certain types of electrical measurements. So let's take a look at slide 32. There are a lot of different measurement techniques-- and your book talks about them-- to understand the properties of an oxide. You can do physical measurements. A very simple physical measurement any of you can do in the lab: take an oxide, etch it with hydrofluoric acid, and measure its etch rate. It turns out the etch rate in HF, how quickly the oxide gets removed by HF, is a function of the density of the oxide; the index of refraction also tells you about the density. You can do scanning electron microscopy or AFM, atomic force microscopy, to see how rough the surface is.
A very common measurement is called ellipsometry. This is a technique where you put a laser beam into the oxide, and you look at its reflection. And you measure-- actually, it's a polarized beam. You measure the shift in the polarization. And it's a very nice method of measuring accurately the thickness of the oxide and its index of refraction, which tells you about the density. So almost every fab, and even the research labs, have ellipsometers, where you can take your wafer. You'd be able to take this wafer here from Intel, stick it on the ellipsometer. It would come back and say 5,000 angstroms or something, or 532 angstroms. It can tell you the oxide thickness. What we're going to talk about a little bit here is electrical measurements. And some of you may be familiar with these, but there's a measurement called the capacitance voltage technique. It's probably the most powerful measurement technique that you can subject an oxide to. If you look in the text, section 6.4.3, there are two or three pages on it. I suggest you read through that, especially if you're not familiar with CV, because it has a lot more detail than what we can do in the lecture. So let's just go through some of these CV measurements. If you've had basic courses in electronics, you'll be quite familiar with this. And it'll be boring and a review. If you haven't, it'll probably seem a little bit mysterious. But bear with us. And again, if you go back and you read through that section, it hopefully will help. The main point it will make, hopefully, is that by doing a simple measurement, which doesn't cost that much time or money, you can learn a tremendous amount about how that oxide was grown and what happened to it in its life. So I'm going to talk about a technique called high frequency capacitance voltage. And in order to do this measurement you do need to make a device, a very simple one. In fact, it's probably the simplest device that you can ever make on silicon.
What you have is you take a silicon wafer. In this case, it's n-type. It's doped with donors. And you grow an oxide. You take it into the clean room, put it in at 900 degrees in an oxygen ambient for half an hour. You grow a certain oxide thickness. You put metal down, just evaporate aluminum and pattern it into little dots. And then you make a contact to the back of the wafer, and you put a probe down on the front of the wafer. So it's a relatively simple measurement. We do this here at MIT in a class called 6.152J. Very simple. You make an MOS capacitor in the lab. This is physically what it looks like. It's got silicon, oxide, and metal. This little symbol in the middle is meant to represent, from the circuit engineer's or the physicist's point of view, what it looks like. It's a capacitor. It's two electrodes with contacts on either side. And what I'm showing in this plot here is the capacitance on the y-axis, measured as a function of the gate voltage. So the voltage is put on the gate-- the gate referring to that little aluminum dot on the surface of the wafer. And we've biased this particular wafer in such a way that it's got a positive bias on the aluminum dot. In that case, electrons from the substrate will be attracted to the surface. And we do what we call accumulating the substrate surface. The primary majority carrier in the silicon is electrons anyway. But I get more of them right at that silicon-silicon-dioxide interface. I accumulate them. And so if I were to measure the capacitance of this thing, it looks just like a parallel plate from basic physics. I have aluminum. I have a dielectric. I have another sheet of electrons. It acts sort of metallic. That's called a capacitor. The capacitance you measure is the dielectric capacitance related to the dielectric constant and the thickness, c-ox. So as long as I bias it with a sufficiently positive voltage, I get a constant. The capacitance is independent of the gate bias. So that's accumulation.
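The accumulation-region capacitance just described is the ordinary parallel-plate formula, which can be sketched in a couple of lines. This is a minimal illustration, not the lecture's data: the 10 nm thickness is an assumed value, and the constants are standard textbook numbers.

```python
# Oxide capacitance per unit area in accumulation: C_ox = eps_ox / t_ox.
# Constants are standard textbook values; the 10 nm thickness is an
# illustrative assumption, not a number from the lecture.

EPS_0 = 8.854e-14   # vacuum permittivity, F/cm
K_SIO2 = 3.9        # relative dielectric constant of SiO2

def oxide_capacitance(t_ox_cm):
    """Parallel-plate oxide capacitance per unit area, in F/cm^2."""
    return K_SIO2 * EPS_0 / t_ox_cm

c_ox = oxide_capacitance(10e-7)   # 10 nm = 10e-7 cm
print(c_ox)                       # about 3.45e-7 F/cm^2
```

Halving the thickness doubles the capacitance, which is why c-ox alone pins down the oxide thickness once the dot area is known.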
Now we start taking the gate bias and start making it negative. So this vg now goes to 0 and then to a negative number. Well, when that happens, we get something actually called depletion. And depletion, as the name suggests, means that the surface region is now, instead of accumulated, depleted of free electrons. The free carriers are now pushed away from the surface, shown sort of schematically by this arrow, the electrons going away. And we form a region here of a certain thickness, x sub d, where there are no electrons. There are no majority carriers. It's depleted of free carriers. And this x sub d region is called the depletion region. It has a certain thickness. And in fact, the thickness of that depleted region, the thickness of xd, depends on the bias. As I make it more negative, xd grows. In fact, what we have electrically, if you want to look at the circuit schematic, is two capacitors in series. We have an oxide capacitor from here to here. And we have a depletion capacitance in the semiconductor, which is a variable. And it depends on the voltage. So in fact, if you measure the capacitance, it starts to go down. And you're sweeping out a curve as you make it more negative. So if you go on to slide 34 and you continue on making it more negative, you finally hit a point called V threshold. And right at that point, you're at something called inversion. And in fact, what's happened is you've put so much negative charge on the gate that you actually start getting positive holes attracted to that interface, to that top interface. So you've now formed an inversion layer. If you're a circuit design person, right at VT, that's where the device turns on for a MOSFET. That's where you've just formed a conducting channel. This bright red region here is the holes in the inversion layer. So in an n-type semiconductor, in inversion, you have a p-type inversion layer of holes at the interface. This xd, which I said was expanding, it stops expanding.
It reaches its maximum. And all of a sudden, you get a constant capacitance. You notice the capacitance curve now is flattened out, which is just a series capacitance of this c-ox and this c depletion region, where the depletion region is maxed out. So you trace out a very simple looking curve, which we'll see how the properties of this oxide affect this curve. But the basic idea is that all regions, no matter where you're operating this device, the amount of charge that's on the gate here, on the aluminum, has to be balanced by the charges in the substrate. And the charges in the substrate will be either the depletion charge consisting of the charge associated with the donors in the depletion region, or the inversion charge, which is also positive, which are these holes in this region. So that's kind of a basic charge neutrality overall that has to hold. So let's go on to slide 36 briefly. This is a little more detailed solid state physics than most you need to get into. If you're familiar with it, again, it'll be a reminder. If not, you have to take it as a truth. The question is, once I've formed this inversion layer and I have this Q sub I and I add additional charge to the gate, it's balanced by more inversion charge. You might say, well, why doesn't xd just keep growing? People always wonder about that. Why do we always get more inversion charge instead? Well, it turns out it's just a lot easier at that point to create inversion charge, to create these holes at the interface. In fact, if you're taking a solid state physics class, you know that that hole density looks something like this. In fact, it has an exponential dependence on that voltage, on how we move that potential up and down. So exponential means we only have to move the potential just a little bit, and we can get exponentially more carriers. So it's a lot easier to create this inversion charge at this point than it is to create more depletion charge. So that's kind of a reason why that happens. 
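The two-capacitors-in-series picture can be put into a short numerical sketch. This is an illustrative model only: the permittivities are standard textbook values, and the oxide thickness and maximum depletion width are made-up numbers, not the lecture's.

```python
# Depletion/inversion at high frequency: the oxide capacitance in series
# with the depletion capacitance.  Once x_d reaches its maximum, the total
# flattens out at a minimum value.  All geometry numbers are assumptions.

EPS_0 = 8.854e-14            # vacuum permittivity, F/cm
K_SIO2, K_SI = 3.9, 11.7     # relative dielectric constants of SiO2 and Si

def series_capacitance(c1, c2):
    """Two capacitors in series (per unit area): 1/C = 1/C1 + 1/C2."""
    return 1.0 / (1.0 / c1 + 1.0 / c2)

c_ox = K_SIO2 * EPS_0 / 10e-7       # assumed 10 nm oxide
c_dep = K_SI * EPS_0 / 100e-7       # assumed 100 nm maximum depletion width
c_min = series_capacitance(c_ox, c_dep)
print(c_min)   # the flat minimum of the high-frequency CV curve
```

The series value is always below both c-ox and the depletion capacitance, which is why the curve drops in depletion and then flattens once xd stops growing.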
If you're familiar with it, that'll just remind you how that works. So let's go on to slide 36. And now, I've traced out-- again, this is a very schematic sort of view. But I've traced out a plot of capacitance on the vertical axis. And the x-axis is DC gate voltage. And we just traced out for you this black curve, where we started at c-ox, a constant number. As I sweep down through 0, I go into depletion. And it goes down, and then the capacitance reaches a minimum. And that's what an ideal high-frequency capacitance curve would look like. So basically, what happens when we go to a lower frequency? Well, I have another curve here where we swept through this region, through depletion. And then all of a sudden, the curve, right after threshold in inversion at low frequencies, the capacitance goes back up to c-ox. Well, it's because if I have a signal that's going at low enough frequencies, the inversion layer carriers can actually follow that signal. So instead of just getting this minimum capacitance of these two capacitors in series, it actually goes back up to look like a normal parallel plate capacitor with capacitance c-ox. In fact, if we go to the next slide, I think it explains it a little bit more physically on slide 37, the different regions. So in accumulation, remember, we said we just had a regular parallel plate capacitor, capacitance c-ox. I wiggle charge here on the metal gate, and you wiggle an equal amount of charge in the substrate. In depletion, what's happening? You're wiggling charge on the gate. And instead of accumulating, you're actually depleting. And you're wiggling charge at the back end of the space charge region, which is sweeping out. And that's why the capacitance goes down. Now, if I'm at low frequency, and I'm inverted, I've created this inversion layer here of positive charge. If I wiggle charge on the gate, it's low enough frequency that I can actually wiggle charge in the inversion layer.
However, if I'm at a bias where I'm in inversion but I'm doing it at a very high frequency, say, megahertz or 100 kilohertz, the inversion layer can't respond that fast. But the depletion region actually can. So you actually get the series capacitance of both the inversion layer and the depletion layer. So this is all reasonably well understood. If you want to read in [? Peres ?] book, it has a little more detailed explanation of these different regimes or regions of the curve. So there's another region of the CV curve that tells us something, which is going to end up telling us something about the quality of the oxide. And it's called deep depletion. And I've marked that region here on a high-frequency curve, where, instead of being flat going straight across, it actually starts to go down. And what we've done in deep depletion-- remember, in this regime we have two capacitors in series, the oxide and the depletion region-- is that we sweep the DC voltage very fast so that the inversion layer carriers can't follow it. So xd ends up expanding a little bit. And in fact, if you see deep depletion, it's a sign that you have a very high minority carrier lifetime. You have very few traps. You don't have much iron and things like that. So people will often take capacitors and try to intentionally sweep the CV quickly, and see if they can get this lowering, this deep depletion. The faster you sweep it, the further down it goes. If you don't see that lowering, you don't have very good minority carrier lifetime. You probably have a lot of traps or things somewhere at that interface. So this is a way of getting a rough idea of what kind of quality interface you have produced. Can you deep deplete the capacitor or not? So basically, from looking at these capacitance voltage measurements, we can extract a lot of quantitative information. In fact, we can get the oxide thickness. Remember, we had c-ox. C-ox just depends on the dielectric constant and the oxide thickness.
So once I get that number, and I know the area, I can calculate the oxide thickness. I can get the substrate doping from the depletion capacitance. And you can get all these different interface charges, Qf, Qit, the mobile, and the oxide trap charge, all of those just by doing CV measurements in their different configurations and frequencies. We can get this information, which tells you about the electrical quality of the device. So in fact, on slide 40, what I'm showing are some quote unquote "realistic" capacitance voltage curves, much more like you would measure in the lab. What we've shown so far on a CV plot, it looks like this. So we have this curve here. Here in accumulation we're at c-ox. It comes down and sweeps out like this. So that's the ideal high-frequency curve. In fact, that's not what you would measure in a real device. You would measure that curve but shifted over to the left. So if you look at this curve right here, you see it's shifted. And in fact, it's the high-frequency curve but shifted by two terms. And the terms look like this. There's one term that goes like the coulombic charge, little q, times Qf, the fixed charge, divided by c-ox. So this is the Qf term. This is just-- remember, Qf is positive. So it just literally takes the threshold voltage and shifts it by Qf over c-ox. So you see this shift because of this positive fixed charge. The bigger the fixed charge, the bigger that shift. And in fact, it's the shift from the ideal curve that people use. That's the way people measure Qf. Phi MS is the metal-semiconductor work function difference, which is just sort of a property of the metal that you use, whether you use aluminum or what you use. So there's also a shift associated with that. And that's a fixed number depending on the polysilicon gate that you used. So this positive fixed charge shifts the CV to the left, and you get this curve. That's a good sign.
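The q times Qf over c-ox shift described above can be sketched in a few lines. Every numeric input here is an assumed illustrative value (Qf given as charges per square centimeter), not measured data from the lecture.

```python
# Shift of the CV curve from the ideal: V_shift = phi_MS - q*Qf/C_ox.
# Positive fixed charge Qf (charges per cm^2) moves the curve toward more
# negative gate voltage, i.e. to the left.  Inputs below are assumptions.

Q_E = 1.602e-19     # elementary charge, C
EPS_0 = 8.854e-14   # vacuum permittivity, F/cm
K_SIO2 = 3.9

def cv_shift(qf_per_cm2, t_ox_cm, phi_ms=0.0):
    """Voltage shift of the measured CV curve relative to the ideal one."""
    c_ox = K_SIO2 * EPS_0 / t_ox_cm
    return phi_ms - Q_E * qf_per_cm2 / c_ox

shift = cv_shift(qf_per_cm2=1e11, t_ox_cm=10e-7)
print(shift)   # negative: the curve is shifted to the left
```

Running it with a larger Qf gives a proportionally larger leftward shift, which is exactly the signature people use to extract Qf from the measured curve.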
Now, most of you, if you were to measure your devices, would actually get a distorted curve. Instead of looking just like this, with this abrupt drop off, you'd see it looks a little bit distorted from what you calculated in the ideal case. And in fact, this distortion is due to the fact that there is variable interface trap charge. Remember, we talked about Qit? And we said the amount of charge in Qit depends on the bias voltage? So as I'm sweeping from right to left, I'm charging and uncharging Qit. And that ends up shifting the curve by variable amounts. So it gets distorted. So the amount of distortion is a way of telling how much Qit you have. And in fact, if you look on slide 41, this is pictured for you. If you've had a solid state physics class and you know about energy band diagrams and states, then this is a way of thinking about that. If you haven't, again, it's the type of thing you'll have to take with a grain of salt or study up on. So what we have pictorially here up in this upper band diagram is the conduction band of the silicon shown here. And here's the valence band. So here's the band gap between those two. This is the semiconductor, the silicon, and on the left is the oxide. And notice, right at the interface, there's a whole bunch of little levels. Each little bar here represents an energy level that an electron can occupy. And you notice that there are energy levels within the entire band gap. There's a certain density of states throughout the band gap. And we're going to assume they're donor type, and they're distributed somehow. They're donor type, meaning that they're positive when they're empty. So if there's no electron in there, this has a positive charge. If it captures an electron, it's neutral, OK? So now we look at this energy band diagram in the center set of curves for different bias conditions. So we can look at it under inversion.
So when the surface is inverted, it turns out that Qit is large, and it's positive. The traps are mostly empty. We have the bands bent. None of these have any electrons in them. So I have now a lot of positive charge at that interface under that bias condition, OK? So here I am at inversion, region C on this bottom curve. And lo and behold, I have a lot of positive charge. That means I am shifted from the ideal case quite a bit to the left. And you see why in region C of this realistic curve I'm shifted quite far over. So I'm distorted. So now let's go to region B of that curve. So that's here under depletion in the center. When I'm depleted, the bias voltage, again, is a little bit closer to 0. I don't have as much band bending. And in fact, some of these interface traps are now filled with electrons, maybe about half of them. So the amount of Qit that's uncovered at that interface now in this bias condition is less. So I have less interface trap charge. So indeed, if you look at the ideal curve and you look at the distorted curve, at point B, there's less of a shift than there was at point C because there's less charge at that interface, less positive charge. And now, let's finally go to accumulation. I've bent things-- the voltage now is such that all of these traps are occupied by electrons. So Qit has its minimum value. They're all occupied. So Qit is low. And so in region A, where we're in accumulation, in fact, the shift from the ideal is very, very small. It's a small shift. So as I sweep here from inversion, I get the curve: a lot of shift in inversion, not so much shift in depletion, and very little shift in accumulation. So the distortion of the curve has to do with these traps that depend on the bias level and the fact that it varies with bias. So how do people measure Qit? Well, there's one way of doing it.
There are a couple of different ones, but what people do is they measure a curve under high frequency, and they choose the frequency such that the traps, the Qit's, can't respond. So they get kind of an ideal-like curve, shown here. Or you can calculate the ideal curve. If you're a more theoretical person, you can calculate it. And then they do a low-frequency curve, and they choose it low enough so that the traps can respond. And in fact, they see a difference. In a certain bias regime, you can see a difference in the capacitance. And that difference in capacitance can be related to the density of the traps that are in a certain region of the bandgap because where you are in the bandgap corresponds to where you are in bias. So you could give someone this data who is familiar with it, and they could then plot for you the density of traps in the bandgap at that interface between silicon and silicon dioxide as a function of the energy in the bandgap. And you can see it has this familiar u-shaped profile. In the center, it has some number. Mid-gap it's typically 5 times 10 to the 10th traps per square centimeter. As you go closer to the edges, it goes up dramatically. But this is a way by CV of measuring Dit or Qit. And in fact, we want to keep it as low as possible so our threshold voltage isn't moving all over the place and we're getting ideal characteristics. And what we do is, after we do an oxidation, the last high temperature step is typically some kind of forming gas anneal. And after annealing, this Dit goes down by about two orders of magnitude. So you notice the last step in a MOSFET or an MOS flow, if you've done it, is to do an anneal in forming gas, which has hydrogen in it, at about 450 or 500 degrees. And that takes this Dit from up here all the way down to here to a level that's tolerable for the device to operate. So I know I went through a lot of that CV stuff fairly rapidly, and I apologize for that.
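For reference, the standard way to turn that high-frequency/low-frequency capacitance difference into a trap density is the high-low (Castagné-Vapaille) formula. The sketch below uses made-up capacitance values; the function name and inputs are illustrative assumptions, not the lecture's or the textbook's exact notation.

```python
# High-low frequency CV extraction of interface trap density.
# D_it (traps per cm^2 per eV) from the low- and high-frequency
# capacitance at the same bias (all per unit area, F/cm^2):
#   D_it = (1/q) * [ C_lf*C_ox/(C_ox - C_lf) - C_hf*C_ox/(C_ox - C_hf) ]
# The numeric inputs below are illustrative assumptions, not measured data.

Q_E = 1.602e-19  # elementary charge, C

def dit_high_low(c_lf, c_hf, c_ox):
    """Interface trap density from the high-low capacitance difference."""
    corrected_lf = c_lf * c_ox / (c_ox - c_lf)
    corrected_hf = c_hf * c_ox / (c_ox - c_hf)
    return (corrected_lf - corrected_hf) / Q_E

d_it = dit_high_low(c_lf=2.0e-7, c_hf=1.8e-7, c_ox=3.45e-7)
print(d_it)   # order 1e11-1e12 cm^-2 eV^-1 for these assumed inputs
```

When the two curves coincide (no trap response), the formula returns zero, which is the ideal-interface limit.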
But I wanted to give you, more than the details, a flavor for it. If you've had courses in electrical engineering or solid state physics, you'll recognize some of it. If you haven't, you can go through it in the text in more detail. But just to summarize on the oxidation, it's really probably one of the most critical processes for CMOS. Oxides that are thermally grown are used for a lot of different things: tunnel insulators, masking oxides, field oxides. If nothing else, you should remember that the electrical properties of the silicon-silicon dioxide interface are the best, and they're superior to all other cases. And that's why it's taking so many years for us to find another insulator, another high k. It really is tough to beat, or it's tough even to equal it. There are charges, though. It's not perfect, remember? Nobody's perfect. About 1 in 10 to the fifth is a problem at the interface. There are charges that exist, things like fixed charge, which is positive, Dit or Qit, mobile charge-- you don't have mobile charge if you're really clean-- and Qot. These charges do exist, but they can be characterized by relatively simple techniques, like MOS CV. So we'll go on next time and talk about the physics or the kinetics of how this process of thermal oxidation takes place. But please go ahead and start reading chapter 6. Also, your handout, homework number 2, was handed out. So you can go ahead and start working on that. That's due next Thursday. Thanks.
MIT 6.774 Physics of Microfabrication: Front End Processing, Fall 2004. Lecture 17: Thin Film Deposition and Epitaxy; Introduction to CVD Si Epitaxial Growth. JUDY HOYT: Silicon can be deposited in a polycrystalline state, or it can be grown epitaxially-- "epitaxial" is a Greek word, which means having the same crystal structure as the substrate. And we'll talk about that in the next couple of lectures. So we're going to talk about the methods, the equipment, and some basic concepts used to deposit or grow these thin films. So let's just go back to a picture that I think I showed earlier in the course. It's a cross-section view of, say, devices in a CMOS chip. And we've got our silicon substrate down here at the bottom. We've talked a lot about how to create n-wells and p-wells by ion implanting, and diffusing, and the source drains, and all that. But there are a lot of other films here besides the silicon substrate. So everything that's in a color, other than white here, is some other film. For example, here, we might have copper that's used in the interconnect wiring, or aluminum, or tungsten, shown here in this bright magenta. These hatched regions that are blue could be a low temperature oxide. Now, it's a deposited oxide, very different from a grown oxide. So we're going to talk about how we deposit oxides by a process called chemical vapor deposition. Or it might not be oxide. It could be generally referred to as an interlayer dielectric in between some of the metal layers, and that has a low dielectric constant. And we'll talk in a minute about why we want that. Other films are absolutely critical for the MOSFET itself. We haven't talked about them. We've been talking about polysilicon gates. So far, I've just drawn them in in PowerPoint by slapping them in there. But in the real process, they don't just get slapped in. They have to be deposited.
So we'll talk a little bit about the deposition of polycrystalline silicon, which is critical, and the deposition of the materials that are going to form the spacers, which is either this bright green silicon nitride, again deposited, or it could be low-temperature oxide. So there are a lot of different materials that we want to talk about, and how they go down. Besides how they go down, there are some requirements, or goals, if we go on to slide number three, for thin-film deposition. We need to be able to control the material composition. So either stoichiometric SiO2, for example, or Si3N4 for silicon nitride, or it could be an oxynitride, a mixture of silicon, oxygen, and nitrogen. But we need to be able to have a process that reliably controls the atomic composition. We would like to have low contamination in general in order to control the electrical and optical properties of this material. And good electrical, and mechanical, and optical properties, generally. A somewhat obvious thing, but very important, is we need to control the thickness uniformity across the wafer and from wafer to wafer, so that when we go to etch it, or whatever, we know what the uniformity is and what the thickness is at any given point on the wafer. There's something called step coverage, and I'll give you an example in the next couple of slides of what we mean by step coverage. We often want conformal deposition. And we'll give illustrations of that. We want to be able to fill spaces in between lines. We don't want to have voids, usually. And there is a need to planarize films. That is, we have a topography that often has a lot of hills and valleys. We often need to planarize it to give a smooth, flat top surface. It turns out that certain deposition techniques themselves tend to be self-planarizing. And we'll talk a little bit about that.
So these are some requirements that people worry about when they're talking about coming up with a deposition procedure, or process. Let's give some examples of these issues, some of the ones that may be not so obvious here on slide number four. What is step coverage? OK, well, step coverage is when we have a step that exists. So let's say, I already have this step here. You see this-- there's maybe a material here, this metal line, and then we have a step going over it. It could be an oxide. And now, I want to put a metal down. So I'm going to either evaporate it, or I'm going to sputter it, or somehow put the metal down. But you can see, here, I'm getting very uniform step coverage. Regardless of when I go over the step, I get the same thickness and conformality of the film. That's good. That's uniform step coverage. This is what's called poor step coverage. What does that mean? Well, right at the point of the step, you can see that the film thickness is, with respect to the surface here, is actually less. It's thinned down here. So that's poor step coverage. It's not conformal. And so the problem with this, then, is if your step coverage degrades at some point, this can be a weak spot in the metal line. And you could get an open. So step coverage is an important property of the type of deposition. So this would be considered poor step coverage. How about a filling issue? OK, well, what we mean by this is this is an example of metal that has been-- that shows good filling into a via. So here, I have a flat surface, I have a deposited oxide, and I've etched a hole. And I want to deposit the metal onto the surface and fill the hole. So that's good filling of the via. Here's an example where we were trying-- we had metal to start with, these two metal lines. They had a certain height, and width, and spacing. And we were trying to deposit oxide over them. And you notice, we did not fill the space completely in between the two metal lines. We end up with a void. 
So that would be kind of a poor fill. We have voids. So we not only have to fill vias, which is where we've etched, this is a via, where I've etched into a film, but we may have etched features that we're trying to cover up. And we want to fill the space in between them uniformly. And then here, the third example on the right, is showing trying to fill a via where we have poor bottom filling. Here, we get a nice metal film on top. The metal is not very uniformly deposited on the edges. And the bottom doesn't fill very well at all. So there's an example of poor bottom filling. So this particular methodology, we'll talk about how this happened. This probably happened in some kind of evaporation or some type of line-of-sight type of deposition. So depending on how we do the deposition, we can get different quality of filling. Those were cartoons. Here on slide number five, these are some actual micrographs. These are scanning electron micrographs taken in an SEM microscope. I took these directly out of your text. The one on the left labeled A here is an example of poor metal step coverage. This is a sandwich layer, where titanium-tungsten was put down, then aluminum, and then titanium-tungsten. So it's a tri-layer stack deposited by sputter deposition over an oxide step. So this is the oxide step here right at this point. And you can see the tri-layer stack here, it's kind of hard to identify the different layers, but it has this sort of thickness over here, and a certain thickness over here. But it did not cover the step. In the PowerPoint, unfortunately, this didn't scan all that well. But you can see that the layer, the metal layer, is thinned down where it goes over the step. On the right-hand side, these are a series of metal lines. Each one of these is a metal line going into the board. And oxide was deposited over these metal lines by chemical vapor deposition.
And there were very narrow spaces between the lines. The spacing is less than a micron, maybe a half micron. And you can see in the deposition of the oxide, when the spacing gets to be quite narrow, as in the case between these two metal lines, you actually get voids. So the oxide did not fill in here. When the spacing was wider, as in this case, you didn't get the voiding. So we get voids when we go below a certain spacing. And this is a typical test structure one could use to figure out, what's my critical spacing, below which I start to get voids. And so again, this voiding will be a strong function of the method by which the oxide is deposited. OK, so those are some practical examples of issues with film deposition. Let's go on to slide number six. There is an important concept people use when describing thin-film deposition. And that is called aspect ratio. We abbreviate that in this class AR. And the aspect ratio is just simply defined as the height of a feature, h, divided by the width of the feature, w. So for a metal line that looks like this, the aspect ratio is h over w. For a contact hole, they also speak of an aspect ratio. It's the height, or the depth, of the hole divided by its width. Aspect ratio tells you something about the topography. In general, a high aspect ratio structure is more difficult to fabricate than a low aspect ratio one. For example, it's hard to make a very, very deep-- imagine this being a very deep and narrow contact hole. It has a high aspect ratio, say 2 to 1. And it's harder to etch it for one thing, and it's going to be harder to fill it, as you can imagine. As you go deeper and deeper down, it's going to be harder to access the bottom of the hole. So one of the requirements people talk about is, what kind of aspect ratio can a given deposition technique fill? And how does it vary with aspect ratio?
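The aspect ratio definition translates directly into code. The can_fill helper and its limit value are hypothetical illustrations, since the real maximum fillable aspect ratio depends on the particular deposition technique.

```python
# Aspect ratio AR = h / w for a line or a contact hole (same length units).
# can_fill() is a hypothetical helper: it compares AR against an assumed
# maximum aspect ratio that a given deposition technique can fill.

def aspect_ratio(height, width):
    """Height-to-width ratio of a feature."""
    return height / width

def can_fill(height, width, max_fillable_ar):
    """True if the feature's AR is within the assumed process limit."""
    return aspect_ratio(height, width) <= max_fillable_ar

print(aspect_ratio(0.34, 0.2))                  # about 1.7
print(can_fill(2.0, 1.0, max_fillable_ar=1.5))  # a 2:1 hole exceeds the limit
```

A 0.34 um tall line that is 0.2 um wide has AR of about 1.7, the same order as the metal-1 copper lines quoted from the roadmap below.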
So that's just a simple definition, h over w. Slide number seven I took from your handout, from the 2003 ITRS, the International Technology Roadmap for Semiconductors. Remember, in the beginning of this course, as a homework set, you read a couple of chapters. Well, I took this from the chapter. They have a whole chapter on interconnect. So this is table 81. And these are predicted interconnect technology requirements in the near term, so to speak. Near term being up to the year 2009, for microprocessor units. If you're talking about DRAM or other types of devices, the requirements are a little bit different. But this is specifically for microprocessors. And we've seen these types of charts before. Let's just go through it here. The columns are the years. So 2004, where we are right now. Remember, this was written a little over a year ago or so. So this is a prediction going up to 2009. And each row tells us some kind of a characteristic. In this case, for interconnect, I have pointed out with the arrows a couple of key characteristics. Let's look at the number of metal levels. In 2004, what was expected on average for a microprocessor is 10 levels of metal circuit wiring. So that's quite a bit. And you notice that number is increasing over time: as we go through 2005 and 2007, it's 11 metal layers, and then going up to 12. And it's going to continue on beyond that. So we're getting higher and higher numbers of metal levels. That's one, which means we're going to have more deposition steps basically. Another characteristic that is quoted here is metal one. The metal layers are numbered, by the way, from 1 up to n. So in 2004, if you had 10 layers, they'd be numbered 1 to 10. So metal one is the first metal layer. And you're being told here what the aspect ratio is. They use an A slash R in the ITRS roadmap. We just define the aspect ratio as being the height to the width of the metal line, for copper.
And its aspect ratio is somewhere around 1.7. And you see the aspect ratio is increasing over time. And in fact, I didn't show it here, but if you go to the year 2010 and beyond, it goes up to 2, a little over 2. So the aspect ratio, we need to make that an increasing number. And then look at the interlevel metal insulator dielectric constant, the very bottom row of this chart. I actually truncated the chart. There were a whole bunch of rows in the middle. And you can see right here where this little discontinuity is, I took it out. If you want to see the full chart, you can go to the website. So look at the dielectric constant. The bulk dielectric constant of the insulator that goes in between each layer of metal is actually going down. Here it is supposedly in 2003. It was around three. And you notice, we get into the yellow region, which means solutions-- people have some idea of what the solutions are, but they're not ready for manufacturing. People want to go to less than 2.7 dielectric constant. And in the year 2007, they're looking at-- would like dielectric constants less than 2.4. And that's part of the red region, which means nobody knows exactly how to do that. So just the opposite of the gate insulator. Remember, in the gate insulator, we were trying to get higher capacitance between the gate and the substrate, so we can get better charge control. So in the gate insulator, that dielectric constant, we want to go up. For the interconnect, we want the dielectric constant to go down. We want it to be lower. And these are the so-called low k dielectrics that are used in the back end. And the reason we want that is just simply if I have a big stack here-- if I'm going back a couple of viewgraphs here to, say, page number two, I'm only showing a couple layers. But between this metal layer and this metal layer, I'd like to have minimum capacitance between the different lines, because the capacitance leads to RC time delay in the circuit.
And in fact, the RC delays, the interconnect delays, are very rapidly approaching the delays associated with the devices. So making the devices faster doesn't do a whole lot of good if you're interconnect limited. So there's a trend to get new dielectrics in here, all this shading here in blue-- to come up with new materials that have lower and lower dielectric constants. Just the opposite of what we want to do with the gate insulator here, where we want to make a higher and higher one. So we're going to find new materials coming all the time. So let me skip back to that on slide seven. And again, if you want to see more of this, you just go to the ITRS. And you've been there before. And you just pick the chapter on interconnect. So besides interconnect, though, I don't want to give you the idea that thin films are only used in the back end. They're actually used in the front end. And that's what I'm going to emphasize in this course. So I took the diagram I just showed you and I chopped off the back end, to emphasize what we're talking about, the front end. And there are maybe three different types of critical thin films that we're going to emphasize, because this is a course on front-end processing. So we're going to emphasize the deposition of silicon, either single crystal by epitaxy, or polycrystalline by LPCVD-- and here's the poly gate. Silicon dioxide, which is also called LPCVD oxide, Low-Pressure Chemical Vapor Deposition oxide, or low-temperature oxide. And silicon nitride, which I've shown here in green. The spacers are often made of nitride these days, or combinations of nitride and LTO. So that's what we'll emphasize in this class. All the rest above this, which I've erased-- if you take 6.773, which hasn't been offered recently because Professor Reif is department head of EECS, so he's been kind of busy-- that course deals with all the types of thin films that are kind of above here.
I'll talk a little bit about deposition of metals and things like that. But we won't emphasize it as much as that class does. OK, so let's go on to slide number nine. And we'll just talk a little bit, historically, about how some of these films are deposited. And the films we're going to emphasize initially today will be a deposition of silicon, either polycrystalline or epitaxially. So there's really two main types of methods that are used in CMOS. And you'll hear about them. And one thing I want to apologize for right off the bat is in thin films, one of the things I really dislike is the number of acronyms is really kind of ridiculous. And you'll see that as we go through, just this first bullet here, number one. So the first type of deposition method, and very common, is called chemical vapor deposition. And it is just like the name sounds. You have a vapor of different gases, and you deposit from the vapor phase a film on the wafer. And there are different types. People call atmospheric pressure, CVD, and that's abbreviated APCVD. Low-pressure CVD, which we'll talk about, abbreviated LPCVD. Now, we get a little crazy. There's also a plasma-enhanced types of CVD. So PECVD, where we use not only thermal energy, but we use plasma energy to break up the constituents and cause the deposition. And there's also high-density plasma CVD, or HDPCVD. So it goes on and on, the number of acronyms that are all associated with CVD. PVD, not-- maybe not so many in acronyms, but this stands for-- so the second major method is called physical vapor deposition. This is just as distinguished from chemical. It's primarily a physical process. So there's not as much chemistry involved. And the physical processes that people use, you may be familiar with, is thermal evaporation, which we'll talk about, and sputtering. These are primarily physical, have less chemical sort of-- less chemistry associated with them. So let's talk about CVD here first. 
This is a very old-fashioned sort of classic diagram of an example of an atmospheric cold wall, or cool wall, system that might have been used for a number of years to grow epitaxial silicon. And I put it here in quotations, "old fashioned," because it's pretty far from what modern epitaxial reactors look like today. And I have some examples of modern reactors in the next few slides. But just to give you an idea, the basic idea was this. You had a series of different gas lines. So here is, for example, hydrogen with diborane or phosphine. These could be used for dopants, for boron and phosphorus doping. You had carrier gases like argon and hydrogen. Hydrogen chloride could be used to etch deposits off the quartz. And here's a liquid source for silicon, silicon tetrachloride, SiCl4. Maybe hydrogen was bubbled through that and transported that vapor into the chamber. The silicon wafers typically used to sit on a big block of graphite. And the graphite was heated not by a nichrome heater, but by induction heating. So there were RF coils. And this is why it's called cold wall, because the RF energy is not absorbed by the quartz, just by the graphite. So just the graphite gets hot here. And the hot graphite then heats the wafers. The chemical vapor passes over. And there's decomposition on the surfaces, as we'll talk about. And you grow the epitaxial silicon. That's a relatively old-fashioned atmospheric reactor. Let's go on to slide number 10. Now, this is an example of something that you will find. That old epi reactor, you probably won't find in too many fabs today. Maybe a few of the old bipolar fabs. But this is a reactor that you will find in most fabs, something very much like it. It's called a low-pressure hot wall system, or LPCVD. And it's used pretty much every day now for the deposition of polycrystalline silicon that is used to form the gate, or for amorphous silicon, or even for silicon dioxide.
This type of setup. And what it is is a regular furnace that is resistively heated. So it's a hot wall system. So the entire quartz tube comes up to the temperature of deposition. Could be 600-- anywhere from 400 to 700, something in that range. So everything is hot. And you notice what's very different from the previous diagram. I have some kind of a quartz holder here. And the wafers are standing up. They're not sitting flat. And there's a lot of wafers in here. It could be 25. Could be a hundred. But they're stacked very, very close to each other. And so you can do batch processing. Lots of wafers at once. And the whole reason you can do this, and we'll talk about that, is because it turns out the mean-free path is quite long when the pressure is low. So the gas can get-- effectively, the reactants can be transported in between the wafers very efficiently. So there's no need to spread them out over a large susceptor, as we had to do in the case of an atmospheric process like this, where we can only get two or three wafers in at a time. In this low-pressure case, you can put in quite a few wafers. The pressure is kept low in this case by a vacuum pump. So the quartz tube has an opening in the front, which is sealed by an O ring seal and a stainless steel plate. So that closes the quartz, and you can suck on it with this pump, and pull a vacuum. And you have a couple of source gases that-- with mass flow controllers that control the flow of whatever it is you're using. Could be silane, could be dichloro, or whatever. Yeah. AUDIENCE: Why is it that when the mean-free path is long, do you get the atoms to go [INAUDIBLE]?? JUDY HOYT: We're going to talk about that. But basically, if the mean-free path is sufficiently long, what it means is before-- the atom can go, or the molecule can go, a long distance before it hits a wafer and would stick. The mean-free path is very short. 
And if I had put it in a reactor like this, you'd get deposition all on the tips of the wafers, right on these edges. But it wouldn't get down in between, because the mean-free path tells you how often you have a collision. If that mean-free path is very short, basically the chance of you having a collision with a wafer is quite high, and you'll tend to react. AUDIENCE: Is it a collision of particles? JUDY HOYT: It's a collision of particles. Right. But it also relates to the interaction of those particles with whatever you have in the reactor, with the geometry. AUDIENCE: It means [INAUDIBLE] to have more shielding [INAUDIBLE]? JUDY HOYT: Well, it's not-- this is a chemical process now. Again, when we kind of go through this process, it may become a little obvious. It's not evaporation. You still have the gas flowing and getting in between. So it's still a gas phase type of process. We're not evaporating onto it. So the gas can still flow here in between each of these wafers. And the pressure is low enough that the reactants can get in between. And you can get nice uniform deposition. That's not the case if you do it at atmospheric pressure, where you need to have the wafers well separated, because you develop boundary-layer effects and things like that. OK, hopefully that'll become a little more obvious as we go on for the rest of this lecture and the next. OK, what else about this reactor should you know? Remember, it's a hot wall. The problem with that is whatever you're depositing on the wafers also deposits on the walls. That can give you particulate problems. Because you can eventually develop so much silicon deposit on the walls that it'll start to flake. And then you get particles. So LPCVD is notorious for having particulate issues. So those are the basic characteristics of an LPCVD reactor. Let's give an example of two types of deposition.
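The mean-free path argument just made can be put in numbers with a rough kinetic-theory estimate. This is a sketch under my own assumptions-- an ideal gas and a round-number effective molecular diameter, neither of which comes from the lecture:

```python
import math

# Kinetic-theory mean-free path: lambda = kT / (sqrt(2) * pi * d^2 * P).
K_B = 1.381e-23      # Boltzmann constant, J/K
D_MOL = 3.0e-10      # assumed effective molecular diameter, m (illustrative)

def mean_free_path(T_kelvin, P_pascal):
    return K_B * T_kelvin / (math.sqrt(2) * math.pi * D_MOL**2 * P_pascal)

T = 900.0                                # roughly a 600 C deposition temperature
atm = mean_free_path(T, 101325.0)        # atmospheric pressure (760 torr)
lpcvd = mean_free_path(T, 0.3 * 133.32)  # ~300 mtorr, typical LPCVD pressure

# At 1 atm the mean-free path is a fraction of a micron; at a few hundred
# millitorr it is closer to a millimeter, so reactants can diffuse freely
# into the narrow gaps between closely stacked wafers.
print(f"atmospheric: {atm*1e6:.2f} um, LPCVD: {lpcvd*1e3:.2f} mm")
```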
Here's on slide number 11. The first example is epitaxial silicon, single-crystal growth. And I showed you a picture of a cold wall atmospheric pressure system. This particular equation, equation number one, is, again, a little bit old fashioned, using silicon tet, silicon tetrachloride. And it reacts at high temperature and can be decomposed into silicon solid plus HCl gas, which is evolved. More commonly today, rather than silicon tet, people typically use silane, SiH4. Again, in the gas phase at high temperature, it can react to form solid silicon and evolve off hydrogen. So this might be for epitaxial growth. The second example here, equation number three, shows the deposition of amorphous silicon dioxide. So this is that famous LTO that we've been talking about, low-temperature oxide. That is deposited very commonly in a hot-walled low-pressure system, just like what I just showed on page 10. This is exactly a low-pressure CVD oxide, or LTO, furnace. And the way the LTO is made is typically with silane, a gaseous silane, combined with oxygen, usually in the temperature range of 400 to 500, something like that. Both of these decompose on the wafer surface. And they react to form SiO2 in the solid phase and evolve hydrogen. And notice, because you are putting in silane, you don't have to consume any silicon on the wafer. So that's the advantage. If you need to put down a film and there's no silicon exposed, then you use this LPCVD process to put down an oxide. OK, there's a couple of examples. So let's look at this process now on slide number 12 a little bit more carefully. And here, maybe we'll see how the pressure comes to play in all this. Slide number 12 talks about atmospheric pressure, APCVD. And what it is is, schematically, this is the top wall of your reactor. Could be quartz. Here's the bottom wall. Here's a graphite susceptor, which is hot. And there's a wafer sitting on it.
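The three deposition reactions described on slide 11 above are commonly written out as balanced equations like the following (the hydrogen balance in the first reaction is the standard textbook form, assuming a hydrogen carrier gas):

```latex
\begin{align}
\mathrm{SiCl_4\,(g) + 2\,H_2\,(g)} &\rightarrow \mathrm{Si\,(s) + 4\,HCl\,(g)} && \text{(1)}\\
\mathrm{SiH_4\,(g)} &\rightarrow \mathrm{Si\,(s) + 2\,H_2\,(g)} && \text{(2)}\\
\mathrm{SiH_4\,(g) + O_2\,(g)} &\rightarrow \mathrm{SiO_2\,(s) + 2\,H_2\,(g)} && \text{(3)}
\end{align}
```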
And there's a gas stream up here above. Some distance is the gas is flowing at a reasonably high velocity. And steps one through seven here are the different steps that people have identified as being involved in chemical vapor deposition. So notice how it's not evaporation. So the first step, we take the reactants that are-- that have to be transported from the main gas stream into the deposition region. So they're coming in the reactor over here on the left. They have to get over near the wafer. OK, that's obvious. It's not usually rate limiting in any way. Step number two is the transport from the main gas stream through what's called a boundary layer to the wafer surface. Now, a boundary layer exists whenever you have a static surface and a gas or some fluid flowing above it. And the boundary layer is a layer in which the velocity of the particles actually goes from the velocities in the center of the tube down to 0. Because right at the wafer's surface, the velocity is 0. So you have to somehow get through the stagnant layer. And so transport through the boundary layer is one of the critical processes. And it will be rate limiting in certain types of reactors. Then there are three surface processes here. You could lump them all together if you want and just call them surface processes. But they are called out individually here. Number three is the adsorption of the reactant onto the wafer surface. Four is the surface reaction itself, including a chemical decomposition, like what I just showed you, a reaction, if we go back to slide 11. This is what I mean by that step. So it's the actual decomposition of the gas on the surface into to-- to crack the gas, or decompose it. Could also be a surface migration involved in the surface in step four, or attachment to kinks and ledges. So there are a number of surface processes that happen. And then step number five is desorption of the byproducts. And then we have to transport the byproducts through the boundary layer. 
And then finally, transport of that out of the reactor, out of the deposition zone. And I've highlighted here in red steps two through five, because they're most important in determining the growth rate. In fact, the two rate-limiting steps tend to be step two, transport of the reactants through the boundary layer, which is typically a diffusion type of process, and the surface processes themselves, usually the chemical reactions-- that's the second rate-limiting step. OK, so let's go on to slide 13 and come up with a very simple quantitative model. Now, I've turned everything by 90 degrees on you, because that's what was done in the text. I took this from your text. So now, be a little more careful here. The silicon wafer is now sitting vertically. It doesn't really matter if it's horizontal or vertical. You just have to tilt your head, if you want, from the last picture. And the gas here-- so the gas is flowing vertically. So the gas is going up here, the way the laser pointer is. It's flowing by. And it has some concentration here in the main gas stream. And this region that's labeled from here to here by these two arrows, from the silicon surface to this point here, is called the boundary layer. So that's the stagnant layer where the gas velocity in the stream direction is actually going down. Eventually, it reaches 0. So within the boundary layer, we typically have a gradient of the concentration of the silane, or whatever it is, the concentration of the reactant. And in the gas stream it's called c sub g here. When you reach the surface of the silicon, at the very surface, there's a concentration of that species called c sub s at the surface. And of these two f's here, this flux f1 is a diffusion flux. So that's the flux of the reactant species to the wafer. And it's called a mass transport, or mass transfer, flux. And on the previous page, it represented step number two.
So that's this flux, f1. Flux f2 is the actual reactant consumed by the surface reaction. So that's the surface reaction rate, or surface reaction flux. And that's what I meant by steps three to five on the previous page. So f2 refers to those surface reactions. So we have these two fluxes. We've got to get through the boundary layer by diffusion, and we've got to react at the surface by some chemical reaction. And so we're just going to write down simple equations. For flux one, transport through a boundary layer, we're going to say that flux, the number of particles going through per square centimeter per unit time, is going to be just proportional to the concentration difference, cg minus cs. And the proportionality constant we're going to call hg, which is the mass transfer coefficient, and it typically has units of length per unit time, or centimeters per second. So that's the first flux. That's just transport through the boundary layer. Flux number two is the reaction at the surface, and is a chemical reaction. So we typically write that with its chemical constant, k sub s, some surface reaction rate in centimeters per second, times the concentration of the species at the surface. So again, this is a little cartoon of what we just saw. So in steady state, we're going to let the fluxes through the boundary layer and at the surface be equal. f1 equals f2, so we just equate equations four and five, which we just had. That's pretty simple. We can solve, then, for c sub s, the concentration at the surface, in terms of c sub g, the concentration in the gas. And we get it's cg divided by 1 plus k sub s over hg. Now, we just need to define the growth rate of the film. So the growth rate of the film comes from this flux, f, which we now know. We can write f as k sub s times c sub s. And I just solve for c sub s in equation seven. So the growth rate of the film is a velocity v.
It's just the flux divided by n, where n is the number of atoms per cubic centimeter. You can just do this dimensionally. The flux has units of number of atoms per square centimeter per time. And this is number of atoms per cubic centimeter. So I end up with a velocity of centimeters per second, or centimeters per unit time. So we just divide this flux by the density. In silicon, for example, the density is 5 times 10 to the 22nd. Multiplying this out, what we get is this quantity, k sub s hg divided by k sub s plus hg, times the concentration in the gas phase of the reactants, divided by the density. This kind of makes sense if you look at it. Or you can look at it here in terms of this quantity y, where y is the mole fraction of the incorporating species. So it's the partial pressure of silane, for example, divided by the total pressure of the gas. And it kind of makes sense, as you might imagine-- the velocity of growth, or the growth rate, depends on the concentration of the species in the gas phase. That's not too surprising. And therefore, it depends on the mole fraction, the partial pressure of the silane divided by the total pressure. So that's a very simple model. And the important thing is now to look at the dependence on k sub s and hg and what their temperature dependences are. So let's go on to slide 15 and look at that. So this is equation eight-- I just repeated exactly that same equation we just saw for the deposition rate. And you notice, because of the way these add here, the deposition rate is going to be determined by the smaller of the two of k sub s or hg. So let's say k sub s is very small. Let's see-- k sub s is much, much less than hg.
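The two-flux model just derived can be sketched numerically. This is a minimal sketch of the model from the lecture; the particular ks, hg, and cg values are illustrative assumptions, not measured numbers:

```python
# Two-flux CVD growth model from the lecture:
#   F1 = hg * (cg - cs)   (diffusion through the boundary layer)
#   F2 = ks * cs          (surface reaction)
# Equating F1 = F2 in steady state gives cs and the growth velocity v.

N_SI = 5.0e22        # silicon density, atoms/cm^3 (from the lecture)

def surface_concentration(ks, hg, cg):
    """Equation 7: cs = cg / (1 + ks/hg)."""
    return cg / (1.0 + ks / hg)

def growth_velocity(ks, hg, cg, N=N_SI):
    """Equation 8: v = (ks*hg / (ks + hg)) * cg / N, in cm/s.

    ks, hg in cm/s; cg = reactant concentration in the gas, atoms/cm^3.
    """
    return (ks * hg / (ks + hg)) * cg / N

# When ks << hg the rate is surface-reaction limited (v ~ ks*cg/N);
# when hg << ks it is mass-transport limited (v ~ hg*cg/N).
cg = 1.0e16   # illustrative gas-phase reactant concentration, atoms/cm^3
print(growth_velocity(ks=1.0, hg=100.0, cg=cg))   # ks-limited case
print(growth_velocity(ks=100.0, hg=1.0, cg=cg))   # hg-limited case
```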
Then I can ignore it in the denominator. The hg's cancel out. And I end up with the rate-limiting step-- the velocity ends up being proportional to k sub s. So in that case, we have what's called the surface reaction controlled case. So this would be a case where the mass transfer through the boundary layer is very fast. Or, if you want to think about it another way, you could call it a case where the surface reaction is very, very slow. And so the growth rate in this case is directly proportional to the reaction rate. The important thing to notice about k sub s is that it's exponentially dependent on temperature. So in this regime, we expect to see the growth rate exponentially proportional, or exponentially dependent, on temperature. On the other hand, if the reaction goes very fast compared to the mass transport, or the mass transport here is slow-- hg is small-- then we have the mass transfer, or gas phase diffusion, controlled case. So here, the velocity depends on hg. And it turns out diffusion through the gas, unlike reaction at the surface, is not very temperature sensitive. So we expect in this regime to have a very weak temperature dependence. So we can immediately see these two different regimes happening, depending on the relative ratio of k sub s to hg, and therefore depending on the temperature. So if we go to slide 16, this is what people find experimentally these rate constants look like. k sub s actually goes like a constant, some number k naught, whatever it happens to be, times e to the minus ea over kt. So the surface reaction rate has a certain temperature dependence-- it's exponential, and it has an activation energy of ea. So if I were to plot the growth velocity, or the growth rate, on a log scale versus 1 over t, the k sub s term is just going to be a straight line, because I'm just taking the log of an exponential. And therefore, I just get a straight line in 1 over t.
So down in this region here, the slope is directly proportional to the activation energy. Now, what happens is, as I change the temperature here and I go to higher and higher temperature-- so higher temperature means going to the left, right? Going to lower 1 over t. At some point, what we see is we become rate limited by the mass transport rather than by the surface reaction rate. The surface reaction is happening so fast at this point that its rate overtakes the transport rate. And the growth rate becomes limited by transport through the boundary layer. And that transport through the boundary layer is pretty much a constant. It doesn't depend much on temperature. It depends on pressure and the design of the reactor. So in this regime, we expect this to be relatively temperature insensitive. And of course, the net growth rate is neither one of these dashed lines. It's the solid line, which is the combination of the two, where we combine them just like we did here according to this equation, equation eight, which tells you exactly how to add them up. For example, if you are growing single crystal silicon, typically the activation energy, just to give you a rough idea, is in the range of 1.6 to 2 eV. So that's what the slope down here will be. And again, hg is relatively constant as a function of temperature. So as an example, let's go to slide number 17, which illustrates some data I took out of your textbook on silicon epitaxial growth. And this is all at atmospheric pressure. So the data is a little bit older. But what you see here is growth rate. And the units are microns per minute. And again, this is a logarithmic scale versus 1 over t, or actually 1,000 over t. So this is a typical Arrhenius-type plot. If you want, you can conveniently read the temperature right off here, going from say 600 all the way up to 1,200. And there are a couple different curves here. So here's the curve for silicon tetrachloride.
Silicon tet doesn't react very well at low temperatures. So its growth rate is relatively slow compared to trichlorosilane, dichlorosilane-- which is SiH2Cl2, very commonly used today-- or silane. Silane's the most reactive of all of these. And so at any given temperature, it has a higher growth rate. You can see that. But look at the shapes of the curves. It's very similar to that previous model. Not exact. But you see for any given reactant, let's say silane, we have a region where the growth rate on an Arrhenius plot is a straight line. So it's exponentially activated. It's limited here by the surface reaction. Very sensitive to the temperature. And then you finally get up to a high enough temperature and it starts to roll off. It never is completely constant. But it reaches a point where the temperature dependence is very small at high temperatures. And we call this high temperature regime the mass transport limited regime. So you're talking about diffusion through the boundary layer. And down here, where it's clearly exponential, is the surface reaction rate limited regime. So in the old-fashioned silicon epi, people used to do deposition at very high temperatures to get high crystal quality. So in the old days, people typically grew at 1,100 or 1,150. So they were almost always growing with these reactants in the mass transport limited regime. So they were almost always hg controlled. So the old-fashioned reactors had to have the horizontal reactor configuration, because mass transport through the boundary layer depends on exactly how the reactor is designed. Modern epitaxy, however-- people growing silicon germanium and other materials that need to be grown at lower temperatures, like 800 or so, or 700-- often operates in this regime these days, where you're in the exponential regime. So controlling the temperature becomes extremely important in order to get good growth rate control. Yeah.
AUDIENCE: [INAUDIBLE] JUDY HOYT: The question on slide 17 is about, what happened to the silane in the nitrogen ambient? You see this dashed line. You notice the growth rate for silicon using a nitrogen ambient instead of a hydrogen one. These are all grown in hydrogen, which is typically used for higher purity. The growth rate pops way up at the same temperature. And the reason for that is, if you go back and look a couple of slides-- let's see if I can find the chemical decomposition. On slide number 11, if you look at equation number two for the chemical decomposition of silane-- silane in the gas phase forming solid silicon-- the reaction, when you grow epitaxial silicon with silane, evolves hydrogen. So hydrogen has to come off. Now, this is a chemical reaction. So you know you can push it to the left. You can tend to slow down this reaction if you put a lot of hydrogen in the ambient. So using the hydrogen carrier gas tends to push this to the left, and tends to cause this surface reaction to be less probable. And the reaction rate goes down. But if, instead of using hydrogen as the carrier gas, you use a more inert gas like nitrogen, you won't have that pushing-to-the-left effect. And so the growth rate goes up. AUDIENCE: [INAUDIBLE] JUDY HOYT: Well, let's see. In the transport limited regime, you're still going to have-- I mean, you still have-- well, there is a little bit. Yeah. Well, in the transport limited regime, it looks like we never really get there with this dashed data. The data sort of peters out a little bit. So it's almost getting to the transport limited regime. But the problem is the surface reaction rate is so fast now, it's hard to see the transport limited regime. But it looks like they're converging. It looks like the two are converging. Because at that point, the reaction rate really isn't what's controlling it. It's the transport through the boundary layer.
So the question is, well, what's the difference in transport through a boundary layer in nitrogen versus hydrogen? There's probably some small differences there. OK, so let's go on to slide number 18. We were just saying how transport through the boundary layer is important when you're in the high temperature regime, or the mass transport limited regime. And therefore, it turns out that the reactor geometry in the high temperature regime is very important-- how you build the reactor. And I've taken this picture on slide 18 here from your textbook, figure 9.9. And what it's showing is the velocities in the boundary layer. The velocities are represented by these little arrows along the susceptor, going from left to right. So the gas is coming in at the left, flowing over the wafers and the susceptor, and going out the right. And according to the gas flow laws, the boundary layer, the thickness of the stagnant layer, which is represented by the height of this layer above the wafer surface, this delta sub s-- you see this height, or the thickness of the stagnant layer, is actually increasing. And this dark region represents the thickness of the boundary layer. And you notice beyond the boundary layer, above it, the gas velocity has reached the same velocity as it is in the center of the tube. It's only within the boundary layer that the velocity is decreasing. So these arrows, their lengths are decreasing. So as you flow gas through a pipe along a surface, the boundary layer thickness is actually increasing. Now, this process of diffusion through the boundary layer-- this quantity hg, the mass transport coefficient-- has to do with diffusion through that thickness delta sub s. So people write, in a crude way, that hg is equal to some diffusion coefficient in the gas phase divided by the thickness delta s. But see, delta s is not constant as the gas flows along the surface.
So people try to manipulate things to try to make delta s more constant across the reactor surface. Otherwise, what will happen is the growth rate will be much higher on the first wafer than it would be on the last wafer in the reactor. So people try to compensate for that. A certain type of reactor geometry is usually required, therefore, to get uniform deposition across a number of different silicon wafers if you're working in this mass transport limited regime. A typical trick is to tilt the susceptor. So take the susceptor and tilt it. This increases the gas velocity, which keeps delta sub s constant. So imagine I'm flowing the same volume of gas, a certain number of liters per minute, through this cross section from here to here. The gas velocity has some value. Now, if I go here, the cross section has decreased. But again, I'm getting the same number of liters per minute. So the velocity of the gas in a smaller cross-section tube must therefore go up. So v is now increasing from left to right, which was not the case when the susceptor is flat. When the susceptor is flat, the velocity here in the center, called u, is the same from left to right. And so what we're doing, to make the boundary layer thickness perfectly flat, is tilting the susceptor to bring up that velocity in the center of the tube. But there's another issue, though, with non-uniformity. You can take care of the boundary layer effect by designing your reactor appropriately, but the source gases themselves can become depleted along the length of the susceptor. So you maybe have less silane back here than you have here, because a lot of it may have reacted. What people sometimes do, then, is introduce a temperature gradient along the flow direction.
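The reason tilting works can be sketched with the standard laminar flat-plate scaling, where the boundary layer grows like the square root of distance over velocity. The kinematic viscosity, diffusivity, and geometry numbers below are my own illustrative assumptions, not values from the lecture:

```python
import math

# Laminar flat-plate scaling: delta_s(x) ~ sqrt(nu * x / U), and the
# lecture's crude mass-transfer coefficient hg = Dg / delta_s.
NU = 1.0    # kinematic viscosity of the hot gas, cm^2/s (illustrative)
DG = 1.0    # gas-phase diffusivity, cm^2/s (illustrative)

def delta_s(x_cm, U_cm_s):
    """Boundary-layer thickness a distance x along the susceptor."""
    return math.sqrt(NU * x_cm / U_cm_s)

def hg(x_cm, U_cm_s):
    """Mass-transfer coefficient hg = Dg / delta_s."""
    return DG / delta_s(x_cm, U_cm_s)

# Flat susceptor: U fixed, so delta_s grows (and hg drops) downstream.
print(delta_s(5.0, 30.0), delta_s(25.0, 30.0))
# Tilted susceptor: if shrinking the cross section makes U scale up with x,
# delta_s stays constant from the first wafer to the last.
print(delta_s(5.0, 30.0), delta_s(25.0, 150.0))
```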
So you might make the temperature here just a little bit hotter than it is here to compensate for that, and try to get uniform thickness on the first wafer and the fifth wafer in such a reactor. So if we go on to slide 20, there are some implications for reactor design of what we now know about how CVD works, based on our simple model of surface reaction rate and mass transport. If we look at a horizontal reactor--and this is supposed to represent the tilted susceptor--if the deposition occurs in the range of, say, one torr to 760 torr, relatively high pressures and relatively high temperatures, then the mass transport to the wafer surface is the most important thing. It's more important than the surface reaction rate. And that places some severe restrictions on the geometry of the reactor, on the gas flows, and on how the wafers are stacked. So typically in that case--atmospheric pressure, or relatively high pressure, epi--you'll see reactors that only have a few wafers in them. And the geometry of the reactor and how you stack them is really critical. When the deposition occurs primarily at low temperatures and low pressures--so we're in the millitorr regime, or hundreds of millitorr--we're surface reaction rate controlled. So it's very sensitive to the temperature. But it's not so sensitive to transport through the gas boundary layer, and the mean free path is long enough that boundary layer effects just don't come into play. In this case, for this type of process, where we're working in the hundreds of millitorr range and the temperature is low enough that we're surface reaction rate limited, we can stack the wafers relatively close together. So you see in an LPCVD process, which usually works at around a few hundred millitorr, you can stack a lot of wafers close to each other. You still have to worry about gas depletion in LPCVD.
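The "simple model" this discussion leans on treats the surface reaction rate ks and the transport coefficient hg like two conductances in series, G = (ks*hg/(ks+hg)) * Cg/N; the slower one wins. A minimal sketch of the two limits (the specific numbers are assumptions, not from the lecture):

```python
def growth_rate(ks, hg, cg, n_film):
    """Series ('two conductances') CVD model:
        G = (ks * hg / (ks + hg)) * Cg / N
    The slower of the surface reaction rate ks and the gas-phase
    transport coefficient hg controls the growth rate."""
    return (ks * hg / (ks + hg)) * cg / n_film

cg, n = 1e22, 5e28   # assumed reactant concentration and film density, m^-3
hg = 0.05            # mass transport coefficient, m/s (weakly T-dependent)
for label, ks in [("low T, ks << hg (reaction limited) ", 1e-3),
                  ("high T, ks >> hg (transport limited)", 10.0)]:
    g = growth_rate(ks, hg, cg, n)
    print(f"{label}: G = {g * 1e9 * 60:7.1f} nm/min")
```

When ks << hg the formula collapses to ks*Cg/N (reaction limited, exponentially temperature sensitive); when ks >> hg it collapses to hg*Cg/N (transport limited, geometry sensitive), matching the two regimes on slide 20.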
So what people do--assuming the gases are coming in here and going out, going from left to right--is again tilt the profile of the temperature in the furnace. A typical LPCVD system will have three zones: a front zone, a middle zone, and a back zone. And the back zone is usually kept a little bit hotter than the middle or the front zone. Because again, the silane, or whatever you're depositing, tends to get depleted. So you make up for that by increasing the surface reaction rate a little bit by increasing the temperature. So you hope you can get the same thickness on the first wafer as you got on the 50th wafer, or whatever. And that needs to be done somewhat empirically. On slide 21, I'm showing some classical geometries for epitaxial reactors. I took this from Simon Sze's book. He has a VLSI Technology book that he edited. There's a whole chapter there on epitaxy if you want to see more details on it. These are some schematics of some of the common old-fashioned epi reactors. There was something called a radiant barrel. The barrel reactor was shaped like a barrel. The gas would come in at the top and exit out the bottom. And there would be a sort of cylindrically shaped susceptor, but with flat surfaces. And you see each one of these circles is supposed to represent a wafer sitting on the susceptor. So you could get a fair number of wafers in it at one time. And notice, they were trying to do the equivalent of susceptor tilting by decreasing the cross section through which the gas flows, from the wafer at the top down to the bottom, where the gas exits. There are also vertical reactor geometries where the gas would come in and go out like this. And the horizontal, we've already talked about. So all of these are typically multi-wafer tools where you grow anywhere from five to 20 wafers in a typical epi batch.
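The three-zone temperature tilt just described can be quantified: if the reactant is depleted to a fraction f of its inlet concentration by the time it reaches the back zone, and the reaction-limited rate goes as C*exp(-Ea/kT), you can solve for how much hotter the back zone must run. The numbers below (600 C, 20% depletion, Ea = 1.6 eV) are assumed for illustration:

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def back_zone_temp(t_front_k, depletion_frac, ea_ev):
    """Solve C_back * exp(-Ea/kT_back) = C_front * exp(-Ea/kT_front)
    for T_back when the reactant is depleted to C_back = f * C_front:
        1/T_back = 1/T_front - (k/Ea) * ln(1/f)"""
    inv_t = 1.0 / t_front_k - (K_B / ea_ev) * math.log(1.0 / depletion_frac)
    return 1.0 / inv_t

# Assumed numbers: 600 C front zone, 20% silane depletion, Ea ~ 1.6 eV
t_front = 600.0 + 273.15
t_back = back_zone_temp(t_front, 0.80, 1.6)
print(f"front {t_front - 273.15:.0f} C -> back {t_back - 273.15:.1f} C "
      f"(+{t_back - t_front:.1f} K offsets the depletion)")
```

With these assumptions only about 10 K of extra temperature is needed, consistent with the lecture's "a little bit hotter" and with why the compensation is tuned empirically.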
It's the more classical, or maybe old-fashioned, type of reactor. Slide 22 is a more modern epitaxial system that you will see in fabs today, particularly fabs that are growing silicon germanium for bipolar transistors, or even silicon germanium now used in the source/drain regions of CMOS--Intel has a commercial process. In both of those cases, bipolar or CMOS, people have moved away from the old-fashioned reactors to the more modern single-wafer epitaxial systems in order to get the kind of uniformity they would need over a single wafer. As the wafer diameter has increased--people are now at 12-inch wafers--the reactor would become gigantic if you had 12-inch wafers all along a system. It's much more efficient to do a single wafer in a chamber at a time, and get the throughput high enough that you can do that. So this is an example of a system that's manufactured by Applied Materials. We actually have one here at MIT. This is sort of an overall view of the system. What it has is two different load locks. The load locks are designed so you can open them up to air without putting air into the reactor itself. So you can keep the reactor very clean. You can open this door, put the wafer cassette in, close it, and pump it down to get most of the air out. So you don't just open up the whole reactor, get all the air in, and contaminate it. Besides the load locks, there's a chamber in the center called the transfer chamber, which has a robot in it. And here's a picture of the robot. It has two little arms. Looks like a frog arm. The robot will come out, grab a wafer from the cassette, put it into the transfer chamber, and close this door, again, so you don't allow any air to get in. And then it'll take the wafer and put it in one of these three chambers, depending on how you've programmed it.
You can have up to three chambers here growing different materials, silicon or silicon germanium. So they increase the throughput by having three chambers instead of having three wafers all along the susceptor in a single chamber. Typical growth pressure in this type of equipment is in the 1 to 100 torr range, so it turns out both surface reactions and mass transport are important. Growth temperatures can be anywhere from 400 degrees C, very low, if you're growing some special material. Typically, silicon germanium is grown between 600 and 800. And high-temperature silicon can be grown up to 1,100 in this type of system. Taking a little more zoomed-in look at this more modern reactor on slide 23, this is that same picture. The only difference is that here we're showing a single chamber, one of the three. And this green object is meant to be a wafer that is sitting on a susceptor that's spinning. So the wafer is spun, say at 30 revolutions per minute, to try to get good uniformity across the wafer. And the gas comes in here through a series of injectors and flows as a plane across the entire wafer. So the injector is over here, the gas flows laterally across, and the exit, or the exhaust, is over here. And the wafer is spinning. So the chamber is designed with both things in mind: the control of the boundary layer is important, as well as the control of the temperature. There's a lamp bank that sits on top and bottom that tries to uniformly control the temperature from center to edge to get good uniformity of the epi. So that's a more modern single wafer piece of equipment. Let's just go back one more time to think about slide 24, the basic example of silicon epi. As the slide before showed for temperature effects, depending on the relative speed of the surface reaction rate and the mass transport through the boundary layer, you'll be in one of these two regimes. This curve on the lower right is interesting.
It shows the reactant flow, or partial pressure, effects. Before, I was showing growth rate versus 1 over T. These curves are at fixed temperature--each curve is for a fixed temperature--and I'm showing growth rate versus the percent of the reactant. So this is the percent of dichlorosilane that's in the gas stream. Or you could think of it as the dichlorosilane flow. So interestingly, in this particular reactor--again, this will be reactor dependent--at 840 degrees, the growth rate is independent of the flow of the dichlorosilane. So it doesn't matter. You can increase the partial pressure by a factor of 2 or 3. You're not upping the growth rate anymore. You have enough there. And you're really limited not by the partial pressure but by the temperature, the rate at which the reaction can occur. So at low enough temperatures, it doesn't do you any good to flow more gas. As you get to higher temperatures, say at a thousand, you can see that the growth rate is pretty much linearly proportional to the flow up to a certain point. Basically, the surface reaction is going faster and faster as you give it more reactant, and then at some point it saturates out. So depending on what temperature you're at, increasing the flow may or may not increase your growth rate. If we go to slide 25, there's another process that we want to talk about. We've talked a little bit about design of the reactor for growth rate uniformity and things like that in epitaxial growth. But the uniformity of the layer thickness is really only one aspect. And these days, it's pretty well nailed. Another aspect that's just as important is called autodoping during epitaxial growth. And it's sort of unique to epitaxial growth. We don't usually think about it as being an issue in deposition of other types of films.
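Before moving on to autodoping: one illustrative way (not the lecture's model, and every parameter below is a tuned assumption) to caricature those flow curves is an Arrhenius surface rate times a Langmuir coverage term whose adsorption constant shrinks with temperature. At low T the surface sites are saturated, so growth is flat in partial pressure; at high T growth rises with pressure and only then saturates:

```python
import math

K_B = 8.617e-5  # eV/K

def growth_rate(p_atm, t_k, a=5e6, ea=1.9, k0=1e-10, e_ads=3.0):
    """Toy model: Arrhenius surface rate times a Langmuir coverage term,
        G = A * exp(-Ea/kT) * K*P / (1 + K*P),
    with an adsorption equilibrium constant K = K0 * exp(E_ads/kT) that
    shrinks as T rises.  All parameters are tuned for illustration only."""
    k_eq = k0 * math.exp(e_ads / (K_B * t_k))
    theta = k_eq * p_atm / (1.0 + k_eq * p_atm)
    return a * math.exp(-ea / (K_B * t_k)) * theta

for t_c in (840, 1000):
    t = t_c + 273.15
    rates = [growth_rate(p, t) for p in (0.005, 0.01, 0.02)]
    print(f"{t_c} C: G at P = 0.5/1/2 % -> "
          + ", ".join(f"{r:.3g}" for r in rates))
```

Running it shows the 840 C curve nearly flat in P and the 1000 C curve rising strongly with P before bending over, the qualitative shape of the lower-right plot on slide 24.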
But autodoping, as the name implies, is the automatic, or unintentional, introduction of dopants into the growing epitaxial layer. And there are a couple of different places it can come from. First, before we look at the reactor at the top, let's just look at this diagram at the bottom of slide 25. It's a plot of the concentration, on a log scale, of a dopant as a function of your distance into the epi layer. It's a little bit backwards, but going from 0 to here is your distance into the epi. So the epi is getting thicker from left to right. And everything to the left of 0 is the substrate. So here, I'm in the substrate. In autodoping, you can imagine I might have a substrate that's heavily doped, say, for a silicon bipolar process. So I have a substrate that has some concentration at the surface. And some of that dopant is going to make its way into the epi layer as the epi layer grows. Say I'm trying to grow a lightly-doped epi layer on a heavily-doped substrate. One process by which dopant will get in there, you already know. And that's diffusion. And in fact, we know it's going to be a complementary error function type of profile if it's simply up diffusion from the substrate into the epi layer. So that's relatively simple. We know how to solve that. And we can limit the temperature and all that. And that will help us limit how far this autodoping goes into the epi layer. But there's another process here, which I'm showing by this exponentially-decreasing function. It has a certain decay length. And that's called front side, or vertical, autodoping. Vertical autodoping refers not so much to up diffusion, but to the fact that as I grow each layer, if there is a phosphorus atom on the surface, it tends to want to stay up on the surface. It tends to want to exchange sites instead of being buried with the silicon.
And the phosphorus will ride up the surface. Same thing for arsenic. These dopants tend to like to be on a free surface. So as you're growing the crystal, they will ride up the epi layer not by diffusion, but just by site exchange at the surface. And it's called vertical autodoping because it happens directly from the buried layer. It just goes vertically straight up. So that surface riding effect is really a function of the nature of the dopant and, to a certain extent, the temperature. But as for the nature of the dopant, it turns out the n-type dopants, arsenic and phosphorus in particular, like to be on the free surface. They're very happy being on a free surface. So they float to the surface. And they make it very hard to grow a lightly-doped epi layer, because they tend to autodope. They tend to have a long tail. Boron, on the other hand, doesn't mind being buried in the crystal. So boron doesn't have much autodoping. It does not surface ride very much. So this tail is not a diffusion-limited process. It has to do with the affinity of the dopant for the free surface during the growth process. And then there's this final background that you'll see in an epi layer, which I'm showing here in green, sometimes called backside, or lateral, autodoping. What that refers to is, say, if the back of the wafer is heavily doped, when you heat this thing up, you can actually get dopants coming off the back and finding their way onto the front of the wafer. They could be coming off the susceptor. Let's say you just grew a doped epi layer. The run before this, you were growing doped epi. And you put in some phosphorus. So that phosphorus can still hang around on the susceptor. And then it can be transported onto the wafer when the susceptor is heated. So it could be from the back of the wafer, it could be from the susceptor, it could be from another wafer that has a heavily-doped region on it.
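The three contributions in that log-scale diagram--erfc up-diffusion, the exponential vertical-autodoping tail, and a flat lateral-autodoping background--can be summed in a sketch. Every number below (substrate doping, diffusion length, decay length, background level) is an assumed placeholder, chosen only to show the characteristic shape:

```python
import math

def autodoping_profile(x_um, c_sub=1e19, c_epi=1e15,
                       diff_len_um=0.05, decay_len_um=0.2, c_lateral=5e14):
    """Dopant concentration (cm^-3) at depth x_um into the epi, measured
    from the substrate interface, as the sum of:
      * erfc up-diffusion from the heavily doped substrate,
      * an exponential 'vertical autodoping' tail from surface riding,
      * a flat 'lateral autodoping' background (susceptor/backside),
      * the intentional epi doping."""
    up_diff = c_sub * math.erfc(x_um / (2.0 * diff_len_um))
    vertical = c_sub * math.exp(-x_um / decay_len_um)
    return c_epi + up_diff + vertical + c_lateral

for x in (0.0, 0.2, 0.5, 1.0, 2.0):
    print(f"x = {x:3.1f} um into epi: C = {autodoping_profile(x):.2e} cm^-3")
```

With these numbers the erfc term dies off within a couple tenths of a micron while the exponential tail dominates out to a micron or more, which is the point of the diagram: the surface-riding tail, not diffusion, sets how lightly you can dope the epi.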
So it's lateral because it's coming from somewhere else. And this is very often a function of what you just grew--a memory effect of what was grown in the reactor--and how it's designed, how well the susceptor is sealed, and things like that. So in the epitaxial growth process, we don't worry just about growth rate and its uniformity. We have to think carefully about temperature, and pressure, and the nature of the reactor design to avoid this type of autodoping effect. Slide 26. Let me say a little bit about how epitaxy has evolved in CMOS and bipolar technology, just so you have a view of where it comes from. Originally, in the old days, epi was primarily used just to provide a lightly-doped layer on top of a heavily-doped substrate for either bipolar or CMOS device processing. So epitaxy in the old days--and there's no chalk today; we had colored chalk last time--was simply a way, if you had a heavily-doped wafer and you needed a lightly-doped layer on top, maybe 10 microns, to produce that. There really wasn't any other way. And CMOS and bipolar foundries, or companies, didn't do epi. They bought epi wafers from one or two places in the world, half a dozen places where people grew it. So you would just buy an epi wafer. That's still a very large market, a very large component of epitaxial technology: simply producing wafers that have a heavily-doped substrate and a lightly-doped layer on top. But actually, in the last 10 or 15 years, the technique really has evolved. And now it's used to do much more than that. People grow very complex doping profiles. They grow layers of different materials. Silicon germanium is grown on silicon, for example. It can be grown selectively, meaning in only certain regions of the wafer where there are openings in the oxide; selective epi can be used on top of the source/drains to elevate them.
It can be used in the active regions of heterojunction bipolar transistors. So epi has gone from being just a type of wafer you buy to the point where a lot of manufacturers of silicon chips now have epi reactors in their fabs, and they use those reactors to actively grow structures. As this evolution took place, several things had to happen. And that's why these new single wafer reactors have come out, to a certain extent. The cleanliness of the system and the purity of the gases had to be improved. And that's because the epitaxial growth temperatures are going down, and there's a tendency for impurities, like oxygen, to be incorporated much more readily at low temperatures. So in order to deal with that, as we lower the temperature of growth, we need to have a cleaner reactor. Otherwise, you're going to put a lot of oxygen in your epitaxial layer. And pre-clean temperatures have also been lowered. So the reduction in the growth temperature and the reduction in the pre-clean temperature mean epi now has a lower thermal budget. It used to be 1,150 degrees for an hour. Now people grow silicon germanium layers at 700 for 10 minutes, something like that. So it can be a relatively low thermal budget process now. And that's important, because it means you can have other doped structures on the wafer and not have to worry too much about TED, or other things going on. But in order to have low-temperature growth, you really have to have a clean reactor. So the new paradigms I've already shown you are single-wafer tools. There are rapid thermal epitaxy systems--just like rapid thermal annealers, but you can grow epi in them. And there's also something that is not used in production very often, but is used in research, called molecular beam epitaxy. It is not a manufacturing process generally, because the throughput is horrible.
But it's very good for research. It's good for growing new types of materials for the first time. And then people figure out how to grow them by CVD, and they go from there. So that evolution that I just explained in words is shown here pictorially on slide 27. In the 1970s--probably before you guys were born; that's a long time ago--this is what epi was: the concentration versus depth in a bipolar transistor. The epi simply formed the collector. You would buy a wafer with 5 microns of epi on it. And on top of that, you would diffuse in or ion implant your base, and then you'd diffuse in your emitter. And you'd make an NPN bipolar transistor. And the epi layer was pretty thick, 5 microns. In the 1980s, again, it was silicon epi, and the epi layer was shrunk by a factor of 10. In bipolar transistors, the total thickness was about a half micron. So in those 10 years, the epi layer thickness had to drop by a factor of 10. People had to learn how to do autodoping reduction. They had to change the pressure and the temperature to try to reduce the out diffusion and the autodoping of this n plus buried layer into the active region. And finally, in the 1990s, instead of just growing the collector by epi, people started to grow the base as well. Instead of using an ion-implanted base--look at this very broad profile with a retrograde; that's not necessarily what the device designer wanted, that's just what they got when they ion implanted it--you can in fact grow a very thin, narrow profile with a very abrupt base doping by epitaxial growth. And so epi transformed from just growing the substrate to being able to grow the base layer. Either a silicon base, or people then inserted silicon germanium into it.
And IBM and others commercialized the silicon germanium heterojunction bipolar transistor. And that was all based on the development of silicon and silicon germanium epitaxial growth technology. So it's really come down from tens of microns. A typical base today in a heterojunction bipolar transistor is 400 angstroms. So people are growing 400 angstroms of silicon germanium commercially in production, with thousands of transistors on circuits, all the time, and selling them. So what are the most common gases for epi today, here on slide 28? Silane is quite common, and we saw the decomposition reaction before. Dichlorosilane is also popular; it has a lower growth rate, but it gives you better selectivity if you're trying to deposit the epi in windows. It's not too unusual to have a wafer where, in cross-section, you have oxide like this and you've patterned the oxide. And it turns out that by a process called selective epi, you can grow in such a way that the epi only grows where the silicon is exposed. So you get epi here and you get epi growth here, but you get nothing on top of the oxide. So it's automatically sort of self-aligned. And this is called selective epi growth. To do that, you need dichlorosilane, and this is the reaction: the dichlorosilane decomposes into another species, and that eventually decomposes on the surface to grow silicon, forming HCl. And the HCl is important, it turns out, in etching the silicon off of the oxide, keeping it so you only deposit in the holes in selective epi growth. So those are two very commonly-used gases. All right. Let me just summarize. So far on thin film dep and epi, we have two main types. We said there's chemical vapor deposition, which we talked about today, and physical vapor deposition, which we'll talk about in the next couple lectures. Traditional epi growth uses atmospheric pressure.
More modern systems use low pressure methods, typically in the 1 to 100 torr range these days. We have a very simple model we developed for atmospheric CVD. And it led to rate-limiting regimes: surface reaction rate, where the growth rate is exponentially dependent on temperature, with an activation energy of about 1.6 to 2 eV. And then at higher temperatures, you become mass transport limited, and you have very little dependence, a very slight dependence, on temperature. The mass transport limited regime is good in some ways. It gives a high growth rate. But the bad thing about it is that it's very sensitive to how you design your reactor. So it's a little bit tricky. The reactor geometry has to be well designed. LPCVD in the 100-millitorr range, in a hot-wall batch reactor, is used every day in fabs, particularly to deposit things such as polysilicon gates, low-temperature oxide, and silicon nitride. And we're going to talk about these three types of films and this type of process next time. All right. That's all I have for today's lecture. Just in case you came in late, the clipboard is going around. Make sure today you put down on that clipboard your topic. I really want to have them so I can approve them. Tuesday is the deadline on that. Also, if you didn't pick up any prior homeworks, they're in the back.
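To put numbers on the summary's two regimes: model ks as Arrhenius with an activation energy in the quoted 1.6 to 2 eV range and hg as nearly temperature-independent; the crossover sits where ks = hg. The prefactor and hg values below are illustrative assumptions:

```python
import math

K_B = 8.617e-5  # eV/K

def surface_rate(t_k, k0=1e6, ea=1.8):
    """Arrhenius surface reaction rate constant ks (m/s); k0 is an
    assumed prefactor, Ea is in the 1.6-2 eV range quoted in lecture."""
    return k0 * math.exp(-ea / (K_B * t_k))

def crossover_temp(hg, k0=1e6, ea=1.8):
    """Temperature where ks = hg: the boundary between the surface
    reaction limited regime (below) and mass transport limited (above)."""
    return ea / (K_B * math.log(k0 / hg))

hg = 0.05  # mass transport coefficient, m/s, nearly T-independent
print(f"crossover near {crossover_temp(hg) - 273.15:.0f} C")
for t_c in (800, 1100):
    ks = surface_rate(t_c + 273.15)
    regime = "reaction limited" if ks < hg else "transport limited"
    print(f"{t_c} C: ks = {ks:.3g} m/s vs hg = {hg} m/s -> {regime}")
```

Below the crossover the growth rate tracks ks exponentially (temperature-critical, geometry-forgiving, LPCVD territory); above it the rate tracks hg (geometry-critical, temperature-forgiving, atmospheric epi territory).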
[MIT 6.774 Physics of Microfabrication: Front End Processing, Fall 2004. Lecture 7: Oxidation and the Si/SiO2 Interface, 2D Effects, Doping Effects, Point Defects]

JUDY HOYT: The curvature of the surface--they wanted to look at the oxidation rate as a function of how curved the surface was. So this is a top view, or a bird's eye view, of his structure in part A. But look on slide 3. In part B you're looking at a side view. So you can see he had etched this cylindrical structure in the top. It's got a circular pattern. After preparing this surface, he then put it in the furnace and grew this thermal oxide on all those free surfaces, which is shown here in blue. That's the silicon dioxide. So that was a real experiment. And then, just like we saw in the previous photo from Marcus and Sheng, he deposited polycrystalline silicon, and that's simply used for contrast during the SEM analysis and to protect the oxide. So he's got this polysilicon over everything, so from a top view all you would see is the poly. And then to reveal this oxide and what it looks like looking down, he lapped, or polished off, the top of the poly. So he's partway down the pillar. And then a top view is shown here in the lower right. You can see what he would see. You'd see this polysilicon, which is, again, just a contrast medium. But this white ring here is the silicon that was initially oxidized, and the blue ring all around it is the oxide that grew on the inner part of the cylinder and the outer part of the cylinder. And then you can vary the cylinder size. He can create a whole lot of these cylinders on the same wafer, and look at how the oxide grows, what its oxidation rate is, as a function of the radius of curvature. So it's a very nice scientific experiment to look at these dependencies. If you go to slide number 4, the upper left is a scanning electron micrograph.
Not transmission, but an SEM, so we're looking at backscattered electrons, and there is some contrast associated with the different materials. This is sort of a typical result of a plan view SEM, and an artist's drawing of what is seen in the SEM is shown down here. Just looking first at the SEM, this wall right here--so now we're looking down on the chip, on the surface of the wafer, and he's etched a wall, looking down along certain directions. This is probably along the 110-equivalent planes. He's also etched these regions here. Look at the upper right. This is a cylinder of silicon. This darker region is the SiO2, and surrounding that is polysilicon. If you look down here, it's a little bit easier. Everything's labeled. The thing you see right away is that the oxide grown is thinner for both a concave and a convex corner. So here's an example right here where the silicon was rounded, but it's in a concave corner. And the thickness of the oxide, which is this dark region, is much thinner here than it is on a flat surface here or here. So the oxide thins when you're oxidizing a corner, and even on a convex corner the oxide grown is a little thinner. In fact, the effect seems to be a little more pronounced for this concave corner, where it really looks like there's not much oxide grown at all. So let's say we're just talking about the oxidation of the cylinders--just the silicon cylinder, this pillar right here--and there are differences in the oxidation rate along the different directions. What would you think? Does anybody have any suggestions? Any ideas? If I take a cylinder of silicon that was etched--the wafer was originally (100)--so I have a cylindrical surface, and let's say I'm growing in the thin oxide regime: why do you think you might get different thicknesses going around the perimeter of the cylinder? What's one reason? Any ideas?
How about thinking about the orientation of those planes as I make a cylinder. If I etch a cylinder, I'm exposing all different planar orientations. So locally, at each point on the cylinder, there's a different crystal plane that's exposed. So you might expect, if you're in the thin oxide regime at least, there might be some differences due to the fact that I'm looking at different orientations. So that's one effect. Any other ideas on other potential effects you might think about, etching a small cylinder compared to etching a flat wafer? Orientation is one. All right. Well, let's go ahead, and we'll give the answer away in the next couple slides. If you go on to slide 5, it doesn't completely give you the answer, but it shows you at least the results that Kao got. This is a plot from his work, and the y-axis is the normalized oxide thickness. So this is the thickness normalized to the thickness he grew on the flat surface of the wafer. He measured the thickness on the flat part, and then he measured the thickness on these cylinder sides, and he just divided. On the flat part he grew about 500 nanometers, or half a micron, of oxide on all surfaces. And by the way, he's plotting this normalized thickness as a function of 1 over r, where r is the radius of the cylinder. So if you're on a flat surface, the radius of curvature is infinity, so 1 over r is zero. Right at this point zero, where all the curves converge, you're oxidizing a flat surface. As you go out to higher and higher 1 over r, that means r itself is getting smaller, so you're getting a tighter and tighter, smaller diameter cylinder--just so you get yourself calibrated. And he's got two different types of curves here, convex radii and concave. If we just go back one slide to slide number 4, here's a concave cylinder.
So you're oxidizing on the inside of a surface, like you're in a cave. And here's a convex surface: you have silicon and you're oxidizing on the outside of the cylinder. So you can do it either way. You can imagine just geometrically, knowing that oxide has to expand, it would be different if you're oxidizing inside a cave, in a concave area, versus outside on the outer surface. And in fact, you see this. Look at the concave radii, all these dashed lines, compared to the solid lines. The normalized oxide thickness at a given temperature, say here at 900 degrees, is a lot thinner for the dashed compared to the solid. So for a concave cylinder he found a much lower oxidation rate, oxidizing the inside of the cylinder as opposed to the outer surface. And in addition to that difference, the solid versus the dashed, the other thing is to look at it as a function of temperature. At 1,200 degrees, it's pretty darn flat. At 1,100, almost flat. So at those temperatures, the oxidation rate doesn't seem to depend very much on the curvature of the cylinder at all. It's oxidizing equally rapidly. But at low temperatures, say between 1,000 and 900, you get a big retardation in the oxidation rate. So there's a much more pronounced effect when you're at low temperatures. That's consistent with what we saw in Marcus and Sheng: the 950 degrees C pillar oxide had much more of a corner effect. So this is scientifically in agreement with that qualitative result. OK. So this was the actual data that he got. And given this data, he had to come up with some kind of explanation. That's what we're going to talk about: what are the physical mechanisms to explain his results? The first one we already said, because we know it from last time--relatively simple--is the fact that the crystal orientation of the silicon is changing along the surface of a cylinder, or any surface that's not flat.
And we know in the thin oxide regime, that impacts the oxidation rate. So at sufficiently low temperatures, or when he was growing the thinner oxides, that's going to have an effect. Now, of course, I said that on the flat regions he was growing 5,000 angstroms. That's not particularly thin. So depending on the temperature at which he was growing that, you probably don't expect a huge effect in his results from crystal orientation. But nevertheless, you have to take it into account. If you're oxidizing a trench and you're growing a very thin oxide, the orientation will impact the rate. We saw that last time. These other effects, though, are more telling and have more to do with what he actually saw. For the two dimensional diffusion of the oxidant, you need some kind of numerical technique to solve the diffusion equation in multiple dimensions. So if I just go back a couple of slides--let's go back now to slide number 4 and look at oxidizing this concave corner. As the oxide grows, the oxygen or the water, whichever he's using as the oxidant, has to diffuse in two dimensions through this structure. So any changes, due to orientation or whatever, in the thickness through which it has to diffuse will eventually affect the diffusion. So we have a two dimensional problem now, where before we had a simple one dimensional one. In Deal-Grove, we just had to diffuse from the top surface to the interface. We have to do the same thing, but now in a two dimensional problem. So as the shapes change, we'll see, the diffusion of the oxidant is affected. Let's go back then to slide number 6. The third mechanism, and probably one of the most important in explaining Kao's results, is the stress due to the volume expansion. We know that oxide layers that are formed on silicon are under compressive stress. OK? So they're kind of being compressed to fit onto the wafer, even in the planar case, right?
Just on a planar silicon wafer surface without any curvature. These stresses can be increased quite a bit on a curved surface, because the volume expansion is going to be confined dimensionally. So here's a way of picturing this on a curved surface. So the central region that's colored in gray, let's say that's my silicon pillar. Let's say I've grown a certain amount of oxide already that's on this outer region. That's the oxide that I've grown, and now I'm just growing a little bit of new oxide. Remember, in oxidation the reaction takes place at the interface, not at the surface. So to grow this little band here of new oxide, let's say a few angstroms, 10 angstroms that I'm growing, the oxidant has to diffuse in to the silicon, has to react at the interface, and then the new oxide has to push out on the old oxide to make room for itself. It's got to push up and push down and consume some of the silicon. When I was on a flat silicon wafer, the oxide could just push up and push out a little bit at the edge of the wafer if it had to. But I'm not. I'm on a curved surface here where the oxide is completely attached to itself, so to speak. So it has to actually expand along the direction of the circumference in this direction. So it has to do work, more work on the oxide above it than it would have to do on a planar surface, because it's really confined all the way around that surface. Really, the oxide on the curved surface has got to flow in order to get this to happen. So you have the solid material, this glass, that has to essentially flow. So there's a certain amount of stress built up as a result of that. The deformation of the surface can be quite large. Forget about the cylinder for now. That was Gao's experiment. Let's do something a little more realistic, like LOCOS. What is LOCOS? Remember, LOCOS is you start with a silicon wafer. You pattern part of it with silicon nitride under which the oxidant-- through which the oxidant cannot diffuse. So you're locally oxidizing.
You're not oxidizing over here on the right in this lower right picture. You're oxidizing over here, only over here on the left. So you're locally growing a thick oxide. But look at the surface. The top surface of this oxide here. Remember, at one point when it first starts out, it's relatively flat. This top surface has to stretch, in order to maintain this shape, by as much as 15%, 20%. So the oxide is itself having to stretch and deform. All that involves doing work and stress buildup. And, for example, the nitride itself is pushing down on the oxide. In order to lift this nitride layer, the oxide has to do work. So there's definitely stress buildup in these regions near corners or on curved surfaces, and that slows down the oxidation rate. So how does it slow down the oxidation rate? Well, let's look, for a moment, at slide number 7. These stresses can impact a couple of things, both transport of the oxygen through the oxide, like a diffusion process, and the interface reaction rate. And here's an example again. This is showing a small portion, a quarter or a section of a silicon surface that's curved because it's been etched or whatever, and the oxide that's already been grown here is on the outside, this annulus shown in the medium gray. This little white annulus, this little white region, is the nascent oxide-- nascent meaning new. It's the new little piece of oxide that's grown. This little growing oxide has to do mechanical work against the oxide above it, because it has to push it out, and it's constrained. So this mechanical work actually can modify the energy, the activation energy of the process, the amount of energy that it takes to do that process at a given temperature. So what we write is EA, the activation energy for the surface reaction rate under the stress case. When we're oxidizing on a curved surface, it's going to be equal to EA on a flat surface, whatever that might be-- say two electron volts-- plus some extra terms.
And there's a term that depends on the normal stress. Sigma n is the normal stress. Normal, so it's in this direction, in the radial direction. It's sigma n times some fitting parameter VR, plus sigma t, the stress tangential to the growing surface, so along the circumference, times some other fitting parameter, a volume Vt. So then k sub s in the curved or stress case is just the ordinary k sub s times these two exponentials-- again, this is an activation energy-- minus sigma n VR over kT and a sigma t term. So basically the surface reaction rate is going to go down. The amount it goes down will depend exponentially on these two stress terms, and these are fitting parameters that we put in-- essentially you can put in the model. And so you can see then how you can introduce a temperature dependence to this effect. How strongly temperature dependent it is will depend on the numbers you put in for VR and Vt. OK. So let's go on to slide number 8, and that was the surface reaction rate. You can intuitively understand how that surface reaction might be affected by the fact that it takes more energy. You have to do work in order for that reaction to take place that you didn't have to do on a flat surface. How about the diffusivity and the solubility? Well, how could that be affected by stress? Well, this is meant to be a highly schematic picture of a ring structure that would exist in silicon dioxide where these black regions, the black circles, are the silicon. We have oxygen here as well taking up a certain amount of space, say, in the bonds. There is some sort of region in here where there is sort of free space or interstitial space where the oxidant can diffuse through. And let's say p is a hydrostatic pressure in the growing oxide, so there's a certain amount of pressure in the solid. Well, as you put the solid under pressure, it reduces that. You can imagine it reducing the interstitial space in the network. So you have this network.
Although it's somewhat random, it has this structure, and inside the network there's this open space. As you put the whole solid under pressure and the bonds compress, the amount of interstitial space in the network is going to be reduced to a certain extent. And if your oxidant, your water molecule or your oxygen molecule, has to diffuse through that interstitial space, you can imagine the diffusivity or the diffusion rate is going to go down. As I compress on the oxide, it makes it harder for the molecules to diffuse through. It might even affect the solubility, as you might imagine, hypothetically, that the amount of water vapor molecules or oxygen you can stuff in there maybe would be affected by how much pressure this solid is under. So again, mathematically we can do what we did before. We write the stressed diffusivity as the unstressed-- or not unstressed, but the normal planar diffusivity through the oxide, times something that goes exponentially with the pressure p, the hydrostatic pressure due to the oxide stress, times a constant, which is a fitting parameter VD, over kT. So again, D, the diffusivity, will go down as you go under stress. And you might hypothesize the same thing for the solubility, a similar dependence. OK. Let's go on to slide number 9. So just as shown in your text, I'm just summarizing then the impact of stress on our basic oxidation parameters that we know from our Deal-Grove model. The parameters from Deal-Grove-- k sub s, the reaction rate, D, and C star-- all get modified in a way that depends on the stress in an exponential fashion, with the stress term over kT. So it's very sensitive. So the stress effect will be different at different temperatures. So these parameters, these VR, Vt, VD, and Vs, they're reaction volumes. They are defined in SUPREM-IV. They are put in as fitting parameters.
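To get a rough feel for how these exponential corrections behave, here's a minimal sketch. The stress level and the reaction volumes are made-up illustrative numbers, not SUPREM-IV defaults, and the normal and tangential stresses are taken equal just for simplicity:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def stress_factor(stress_pa, volume_m3, T_kelvin):
    """Generic exp(-sigma*V/kT) suppression factor applied to the
    Deal-Grove parameters: k_s picks up one factor for sigma_n*VR
    and one for sigma_t*Vt; D picks up one for p*VD."""
    return math.exp(-stress_pa * volume_m3 / (K_B * T_kelvin))

# Illustrative (made-up) values: 0.5 GPa stress, ~15 cubic-angstrom volumes.
sigma, V = 5e8, 1.5e-29
for T_C in (900, 1100):
    T = T_C + 273.15
    ks_factor = stress_factor(sigma, V, T) * stress_factor(sigma, V, T)
    D_factor = stress_factor(sigma, V, T)
    print(f"{T_C} C: k_s scaled by {ks_factor:.2f}, D scaled by {D_factor:.2f}")
```

Because the stress sits over kT in the exponent, the same stress knocks the rate down more at 900 degrees than at 1,100 degrees, which is the kind of temperature dependence Gao saw in the cylinder data.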
So they have some default values, but people typically modify them as needed to fit the shape that's observed experimentally from LOCOS or from a pillar or from your structure. In practice, this number V sub s ends up being zero, which essentially means people believe that the solubility itself is not dramatically impacted by the stress. And so this extra term here, e to the pVs over kT, equals one. And so people generally believe it's the k sub s and the D that are being modified by the stress. OK. So let's go on to page or slide 10. There's one more parameter that we need because we said the oxide is a glass and it's flowing, and the glass flows differently or its ability to flow depends on temperature. And that is expressed through a viscosity parameter, eta. And in fact, the stresses that are in the glass or in the oxide are high enough that the viscosity itself, its rate of flow, actually has to be a function of the stress in order to model it accurately. So this is an equation that's been derived and that is used in SUPREM that gives reasonably good agreement with the experimental observations. So the viscosity as a function of stress is equal to the stress independent viscosity eta, which is a function of T, temperature, times a factor built from the shear stress in the oxide: a term that's linear in the stress, divided by something that depends on the hyperbolic sine of that same stress term. Since the hyperbolic sine grows faster than linearly, that factor falls below one as the shear stress goes up, so the highly stressed glass effectively flows more easily. So that's just an equation that people have derived more or less empirically. So we have these parameters that are changing with stress. So let's go on to slide 11. You can probably get the impression by now there's no way you can easily do these calculations by hand. Deal-Grove, piece of cake. You sit down. You just simply solve or integrate the equations. They're all analytic. These are non-analytic. The shape of the growing oxide changes with time.
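To see how that stress-dependent viscosity behaves, here's a small sketch of the x/sinh(x) factor that multiplies the stress-free viscosity in SUPREM-style models; the fitting volume Vc below is an arbitrary illustrative value, not a calibrated one:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def viscosity_factor(shear_stress_pa, Vc_m3, T_kelvin):
    """Factor multiplying the stress-free viscosity eta(T):
        eta(T, sigma) = eta(T) * x / sinh(x),  x = sigma_s*Vc/(2*k*T)
    x/sinh(x) goes to 1 as the stress goes to 0 and falls toward 0
    at high stress, so a highly stressed oxide flows more easily."""
    x = shear_stress_pa * Vc_m3 / (2 * K_B * T_kelvin)
    return 1.0 if x == 0.0 else x / math.sinh(x)

# Illustrative (made-up) fitting volume, 1,000 C oxidation:
Vc, T = 6e-28, 1000 + 273.15
for sigma in (1e7, 1e8, 5e8):
    print(f"{sigma:.0e} Pa: eta scaled by {viscosity_factor(sigma, Vc, T):.3f}")
```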
k sub s, D, and eta all change with stress, and the amount of stress is changing with time as the oxide has to push on a thicker and thicker thing above it. So with all this time dependence, you really need a numerical simulator. You need a computer to do these calculations for you and to integrate them in time. This is exactly what has been done in SUPREM-IV, which can model the shapes of oxide on curved surfaces. You'll have a chance to do this on one of your homework problems, hopefully. And this is a reasonably common thing in IC technology, that you don't typically always grow an oxide over the entire surface. Very often there are patterns on the surface. In fact, the very first step is to isolate the active area from the non-active region, and it's often done by the process called LOCOS we talked about in the first lecture. Poly buffered LOCOS, a slightly fancier version, and shallow trench isolation. So here's an example of LOCOS. Remember, you start with a silicon substrate. You put down a thin oxide, which is a pad oxide, which reduces the stress on the silicon due to the silicon nitride, and you put down silicon nitride and pattern it. Silicon nitride is a perfect or reasonably good mask, as oxygen cannot diffuse through it. So the oxidant diffuses in here and only oxidizes on the left side. So this is the LOCOS structure to have in your mind as a starting structure, before you do oxidation-- that's what it looks like. Let's go on to slide 12 and see what that LOCOS structure now looks like in a simulation, the SUPREM simulation, illustrating the stress effects. So there are two simulations that are shown here and let's start on the left. And what this is, the y-axis is in microns so that's vertical in depth from the silicon surface, and the x-axis is in microns as well. This point right here corresponds to where the mask edge is. So we're doing a two dimensional simulation.
And what you see is this upper region is the silicon nitride, as you can see, and the yellow region underneath is the silicon surface. Look how the silicon surface is shaped in this case. And the field oxide is out here. Now an important parameter that people care about, if you're scaling silicon circuits, is you want to be able to put the devices closer and closer together, as close as you can get them. So in order to be able to do that, you need to grow the field oxide where you want it, and you don't want field oxide in the active region. Any part of the field oxide that encroaches or pushes its way into the active region means that it's shrinking the active region in which you can make your device, so you don't have as much control. You don't get a nice, well defined active region. So in this model, you can see what's happened is the field oxide has encroached in. The mask edge starts here at x equals zero, and the oxide has encroached in from x equals zero by about-- oh, about, say, half a micron. The oxidant has diffused in here and has oxidized underneath the nitride. So you have this region where the silicon is sloped, and the encroachment distance I would quote here as being half a micron. So your active device really starts here in this region. So this is the model run, though, without stress. And people did these models and they found, oh, it doesn't really agree with what we get experimentally when we grow LOCOS. We do a cross-section and we look at it. It's not what it looks like. In fact, what they found looks more like what's on the right. And what's on the right is the SUPREM-IV model, but this time turning on the stress dependence. OK? So you have the diffusion rate through the oxide and the surface reaction rate now all going down as a function of stress. And where is there stress? Well, there's a lot of stress in this region here, in this corner region, because the oxide has to push up against this nitride film.
So look at the amount of oxide that was grown here in this bird's beak area-- quite a bit of it has grown, the thickness here. Compared to that, in the bird's beak area when you turn the stress model on, not so much oxide has grown. OK? And as a result, with stress, the amount of encroachment, which I'm marking between these two lines now here, is quite a bit less. It's about a factor of 2, maybe. Here it encroached-- I'm sorry. I said 0.5. It actually encroached by 0.6 microns. The encroachment here, according to my eye, is about 0.3. So it's about half. That's because stress was included in the model. This turns out to be a lot closer to what people actually measured when they do LOCOS oxidation. That's for LOCOS. You say, well, not too many people do LOCOS in manufacturing. Well, there are still some LOCOS or LOCOS-like structures being done. People nowadays do shallow trench isolation or STI. Well, we still get corner effects and stress effects at STI. And I've taken this from an article or a short course given at IBM back in 1998. And what is shallow trench? Remember what we have. We have an active region here shown in the center under the gate, and isolating the devices we have field oxide, which is this thick oxide on either side all the way around it. In fact, it goes all the way around. We've cut a cross-section here. And we do this by first etching a trench in the silicon and then doing some oxidation. We do a little thermal oxidation of that sidewall. And the engineers, the electrical folks, are very interested in the exact shape of this corner, because it turns out, depending on the exact shape of that corner and how thick the thermal oxide is, you'll get electric field spikes or build ups right there, or you'll get a region where the oxide is weak.
And it tends to break down. When the gate goes over that, the device can have breakdown effects, or you can actually cause unusual transistor effects just based on the shape of this corner and this oxidation. So there are two different simulations shown here. Up on the upper right is without any stress. And you see the thickness of the oxide as you go around the corner is reasonably uniform. It doesn't change too much. With stress effects, look at the corner. The oxide thickness is quite a bit thinner here. It's thinned down in the corner region, compared to on the flat surface or on the vertical surface. And this is a problem because if that oxide gets too thin, you're going to tend to have breakdown effects or it's going to affect the electrical properties of this device. So it seems maybe to your eye initially, oh, what's the big deal of that? But when you go to simulate how these devices electrically behave, it makes a big difference. So the stress effect here comes in because, again, you have this curved surface that needs to stretch, and there's stress buildup whenever you have a corner. That tends to lower the oxidation rate. And that's also been observed experimentally. So let's go on to slide number 14. Stress also has another interesting effect. Let's say you're not doing shapes. So you don't do LOCOS, you don't do shallow trench. You're a very simple process person. You only oxidize flat wafers. OK. Fine. You can still see stress effects in the form of the history the oxide itself has gone through, its thermal history. Because we know even planar growth has some intrinsic stress during the oxide growth. The oxide is under a certain amount of stress even when it's planar. It's just under more stress when it's curved. But this stress can relax upon annealing. At a high enough temperature, above 900 or 950 degrees, the glass can flow a little bit and the stress will relax.
So when people measure the Deal-Grove linear parabolic rate constants that describe oxidation in this intrinsic stress state, the 1D stresses are already accounted for. OK? But let's look at the experiment that's shown up on this slide. Let's say we do a two step oxidation. And you decide, OK, I take two wafers. I put one in the furnace at 1,100 degrees and I grow a certain thickness of oxide. Say, 1,000 angstroms or whatever. And I put another, identical wafer in a furnace at 800. I grow the same thickness, 1,000 angstroms. So I have two oxides, both the same thickness, grown at different temperatures. However, they look the same but they're not exactly the same. The intrinsic stress in this 1,100 degree C oxide is going to be less than the intrinsic stress in this 800 degree oxide because at the higher temperature, the oxide can flow more. There isn't as much built in stress. OK? So you say, well, this is under more stress than that. What's the difference? OK, maybe you'd see a little difference in wafer curvature. That's a way you could measure it. All right. That's fine. Now you go to the next step in your process and say, I want to grow another oxide underneath these two at 800 degrees. Now, if I hadn't said anything about stress, you'd say, well, Deal-Grove would say I have 1,000 Angstrom oxides here, 1,000 here. I put them both in the furnace. If you just use Deal-Grove, you would get the exact same thickness on both wafers. Deal-Grove doesn't tell you anything having to do with the history of how you grew the prior oxide layer. It just depends on-- remember, just the t sub i, the thickness, the initial thickness. It cares nothing about what temperature that oxide was grown at. In reality, though, if you go to look at the wafers that come out of the furnace after this 800 degree C step, these two wafers will have different thicknesses. And so how can that be? They're both in the same furnace. They're sitting right next to each other.
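Before resolving that, here's what the plain Deal-Grove model predicts for the second 800 degree step. The linear-parabolic coefficients A and B below are made-up illustrative numbers, not measured 800 degree C values; the point is that the only knob the model has for a pre-existing oxide is its thickness, which enters through tau:

```python
import math

def deal_grove_thickness(t_hours, x_initial, A, B):
    """Solve x^2 + A*x = B*(t + tau) for the total oxide thickness,
    where tau = (x_initial^2 + A*x_initial)/B accounts for the
    pre-existing oxide.  Thicknesses in microns, time in hours."""
    tau = (x_initial**2 + A * x_initial) / B
    return (-A + math.sqrt(A**2 + 4 * B * (t_hours + tau))) / 2

A, B = 1.0, 0.05   # assumed illustrative coefficients for the 800 C step
xi = 0.1           # both wafers carry 1,000 angstroms going in

# Wafer 1's oxide was grown at 1,100 C, wafer 2's at 800 C, but the
# model never sees that -- only xi enters, so it predicts the same
# final thickness for both wafers:
x_wafer1 = deal_grove_thickness(2.0, xi, A, B)
x_wafer2 = deal_grove_thickness(2.0, xi, A, B)
print(x_wafer1 == x_wafer2)   # True: Deal-Grove has no history effect
```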
They had the exact same thickness to start with. Why would it matter? Well, actually the wafer oxidized at high temperature is going to grow faster compared to the 800 degree wafer, because it has lower stress levels. So it has a little bit more open network. So you can diffuse through that oxide more rapidly, and its k sub s, its surface reaction rate, is going to be a little bit faster because it doesn't have the stress in it. So you can see these differences even if you're not doing shaped surfaces, just in planar surfaces. Now we chose a very extreme example. 1,100 degrees, which is quite hot, above the flow point of the glass, versus 800, which is quite a bit different. Maybe you typically wouldn't do such a wide range, but it really does tell you that the history effects can be important when you're trying to do accurate modeling. And in fact, most simulators don't take this into account, so it's something you'd have to account for, either empirically in your process or calculate it yourself. That's sort of an interesting effect that goes beyond shape effects. So let's go on to slide 15, and we're going to talk a little bit about the evolution of what's called recessed LOCOS. Before, earlier today I talked about LOCOS and we just said we put down a mask, we have a flat surface to begin with, and it becomes a little strangely shaped as we oxidize it. Remember, we had quite a bit of encroachment. Even 0.2 or 0.3 microns of encroachment is too much. And not only that. If you look back-- let's see. I'm going to go back a couple slides to, say, slide 12. When I take the nitride off and I go to use this in processing my chip, the surface is not very planar. So I have to do photolithography now on this surface. It's not very planar. I have this big oxide hump sticking up here in the field, and the field is much higher here. Look at the field. It's up here at 0.3 microns above the original surface. So I got this big mountain sticking up over in the field.
Photolithography doesn't like to be done on surfaces that are rough or surfaces that have big steps. The depth of focus of your camera or whatever, when you're trying to focus down-- the smaller the feature size you're trying to focus on, the more difficult the depth of focus becomes. And so the lithography tool might not be able to focus on this valley when it's well focused on the highest part of the mountain. So there's a big requirement with litho today, forcing us to make all our chips so we can keep the surfaces as flat as possible in every litho step. So a step like this, a 0.3 micron step, might have been acceptable years ago when people were making large devices and the lithography tools didn't use really high power lenses. Now that we're using high power lenses, we cannot tolerate such a big step. So there's always a move in the silicon IC industry to keep the surface as flat as you can. LOCOS doesn't do that. So before people invented shallow trench, STI-- which is, again, a very planar process because you use a polishing process to polish things down at the end-- they did something called recessed LOCOS. So this was an attempt-- well, you say, well, you have this big mountain growing up on the side. Let's give it a handicap. Let's etch the silicon into a recess to begin with so that as the oxide grows, when it's finished, it'll end up being pretty flat. And so you try to compensate for that. And this is an example of how that works, and I'll show you it doesn't come out exactly as flat as you want. But let's say this is a starting structure. So I have nitride over here, little pad ox underneath, and I've etched this recess over on the left. The recess is maybe 0.3 microns deep, and I'm doing that so that the field oxide will be recessed and I'll get a flatter surface. And then I go forward through a certain number of timesteps.
So this is after I've grown a little bit of oxide in the center here, and you can already see, even for a thin oxide, immediately some corner effects. The oxidation rate is a little slower here and here compared to on the flat. And on the far right, you can see I'm starting to-- the corner effects are still evident and I'm starting to develop this bird's beak thing. The stress of the nitride is causing the oxidation rate here to be also a little slower, along with the fact that the oxidant has to diffuse through to get there. So let's go to the next time step. This is sort of a time evolution, which is easy to do in the computer. Here's the oxide again after a certain period. Once more. And finally, after 90 minutes, the structure is almost flat. So I achieved what I wanted in that in the flat regions out far away, in the field region here, the surface of the oxide is at zero or close to. Not zero, but it's at a height that's just about equal to the surface of the silicon. So that's flat. Once I strip the nitride off and etch it, the surface is flat except for this little bump, the bird's head. And if you look at this, you can see it looks like a bird's head and a bird's beak, and the bird's eye would be right about there. So that we really can't get away from, and that's because the oxidant diffuses underneath that nitride, causes some oxidation, and the exact shape of the bird's head depends on, of course, what kind of stress model you use. But it's better than it was before. The lithography has a flat surface here, a flat here, but it's got this little bump here, which is not so easy to get rid of. So the next technology people developed is shown on slide 17. People thought of this other idea called the SWAMI process. This was done at Hewlett Packard, and this was even a little more sophisticated. They were trying to keep the oxidant from getting around. Remember, if we go back here-- let me go back for a second.
The starting structure here, the problem is that at this corner, the oxygen can diffuse straight in and, right off the bat, start oxidizing, and that's going to form where the bird's eye or the bird's head ends up. So they thought, all right, let me put this nitride mask all the way down the edge, and that'll keep the oxygen from going up there and reduce that amount of oxidation. So these were the SWAMI structures. They're very similar to recessed LOCOS. You create the recess, but they create an oxide mask here, and they also put a pad ox-- I'm sorry, a nitride mask along the edge of the recess, and even at the very bottom, at the foot. So trying to minimize the oxidation in the active device region by masking the etched sidewall. But they want to do this in a way that you don't build up too much stress, because after all, the oxidant will find its way over here and start growing an oxide. If the nitride is too thick here, it can build up too much stress. If you build up enough stress in the oxide, you can actually crack the silicon or introduce dislocations. They're really not cracks, they're dislocations, but that can be a problem. So you notice they did this in a way that the nitride is a little thinner along the sidewall and down here so that the flap can be lifted without too much stress being induced. So we're going to take this structure now, put it in the furnace at 1,000 degrees, in moisture or water vapor for quite a few minutes and see what happens as we oxidize it. So here on the left on slide 18 is the starting structure. And now look after 450 minutes of oxidation. This is the simulation. Unfortunately, I'm sorry about the colors, but I think you can see this top layer is the original nitride, silicon nitride. Look at the flap. The flap that was along this sidewall has been lifted up because the oxidant has moved its way under there and pushed it up, and then it's been-- it's quasi vertical over here.
The flap has been pushed out by the oxidant that was coming under here from the left and pushing on it. So the thin flap lifts up during oxidation, but it's still a reasonably flat surface. Not perfect, but they're trying to reduce the amount of bird's beaking and bird's head that formed. That's the simulation. And in fact, on slide 19, the upper diagrams are the same as what we just saw. But if we turn to the lower left, SUPREM-IV can actually be used to calculate stress contours, so it's actually calculating the stress in the oxide at each point, and at each point, assigning an oxidation rate. That's how it knows how to calculate the shape. There is some stress buildup or concentration here in this yellow region, so these different-- I don't have stress numbers associated with this, to be honest. But if you run SUPREM-IV, it'll tell you what stress level in the oxide each color corresponds to. On the lower right, which is kind of interesting, this is an actual-- from the HP process, this is the actual experimental data. So they grew this. They made this SWAMI process, and then they did a polished cross-section and a scanning electron micrograph. And this is the active silicon region afterwards. This region right here is the oxide and the nitride. And you can see if you compare the simulation upper right to the lower right, the actual data, it's not that far off. It reproduces some of the key features. You can see this little divot here, this little V shaped well. Well, that's also seen in the simulation. So the stress effects did a reasonably good job of getting the shape right. Here at the corner it's not perfect, but it's close. The actual device shown here, the actual data, seems to have a little bit more of a bird's head than the simulation. Maybe the stress effects weren't perfectly modeled, but it's pretty darn close.
People spend a lot of time calibrating their stress models and their stress coefficients to try to get close to what's measured experimentally. OK. Let's go on to slide 20. What I just talked about is specifically related to CMOS processing of devices, trying to make the active area. By definition, when you're doing that, you're doing a patterned oxidation, and you have stress effects. Here's a neat application of stress effects that has nothing to do with CMOS. For those of you who are interested in nanotech or nanostructures, on page 20, this is an application of stress oxidation effects to form nanowires. I took this from the PhD thesis of Harvey Lu at Stanford. He graduated in 1995. And what he was doing, and a lot of people since then have done the same thing, he was first etching a very small, tiny pillar in silicon. So a tall pillar. Say, a couple microns tall. But where the starting diameter of the pillar would maybe only be a couple hundred angstroms. Say 200 angstroms. So he starts with this little needle of silicon. And you can do a 200 Angstrom pillar or a 300 Angstrom diameter pillar with a very good electron beam lithography machine. At least in those days, that's about as small as people could get-- a couple hundred angstroms. You patterned the dot and then you etched the silicon into a nice pillar. And then he took those pillars, and he put them in the furnace and oxidized them. And the idea was he wanted to get the pillar diameter down. He could not lithographically make the pillar any smaller, but he wanted to look at quantum effects, current conduction, in this very thin wire. He wanted to get it thinner somehow. And an obvious way to make it thinner is to oxidize it, right? As you oxidize, you will consume silicon from the top and from all the way around the sides of the little cylinder-- the silicon is consumed by the SiO2 formation.
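As a back-of-the-envelope check on the geometry, here's a sketch using the standard planar rule of thumb that roughly 0.44 angstroms of silicon are consumed per angstrom of oxide grown. The oxide thickness is an arbitrary illustrative number, and the model deliberately ignores both the cylindrical geometry corrections and the self-limiting stress retardation:

```python
def core_diameter_after_oxidation(d0_angstroms, t_ox_angstroms):
    """Planar-geometry estimate of the silicon core remaining after
    growing t_ox of oxide on a pillar of starting diameter d0.
    Roughly 0.44 angstroms of silicon are consumed per angstrom of
    SiO2 grown (ratio of Si to SiO2 molecular densities); curvature
    and the self-limiting stress effect are ignored here."""
    remaining = d0_angstroms - 2 * 0.44 * t_ox_angstroms
    return max(remaining, 0.0)

# Pillars patterned at slightly different diameters, same oxidation:
for d0 in (200, 220, 250):
    print(d0, "->", round(core_diameter_after_oxidation(d0, 180), 1), "angstroms")
```

Notice that in this naive model, a 50 angstrom spread in starting diameters carries straight through to a 50 angstrom spread in final diameters; the observed convergence to a nearly common final diameter is exactly the stress retardation this planar estimate leaves out.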
And then you can strip the oxide at the end if you want and make electrical measurements-- which turns out not to be so easy, contacting a pillar that's only 20 angstroms across. A little minor technical problem, but he was able to do it. And in fact, this is a cross-section TEM, a transmission electron micrograph, of one of Harvey's pillars. And what you can see here is a pillar two nanometers in diameter. So it's 20 angstroms. So it's a really tiny wire of silicon. And this amorphous looking material all the way around it on the sides is the silicon dioxide that he grew. This particular one was oxidized in dry ox at 875 for about 10 hours. And you can actually see the lattice fringes in the single crystal silicon that remained. What was really neat is what Harvey found is, depending on the temperature that he did this, he could oxidize that pillar and it would stop. The oxidation rate would go way down when he got down to a certain diameter pillar. And so that's quite convenient because then you could use this to form reasonably-- let's say lithography is not perfect, right? When he patterned, some of the dots came out at 200 angstroms, some at 250, maybe 210. Something like that. Litho is not perfect. But he could get a reasonably wide range in the starting pillar diameter. The final pillar diameter, if he oxidized it long enough, was pretty darn uniform. Which was kind of nice, because you have a process now that's physically slowing down as it gets smaller. And so they all end up-- even though you have a large distribution to begin with, they all end up very close to a single diameter. If you're trying to make some device or some interesting quantum thing, that was a very nice side effect. It's a side effect of, he believed, the stress effect.
That there was stress built up in the silicon and in the oxide in such a way that the oxidation rate, as the pillar got thinner and thinner, the oxidation rate would go down, and it ended up being somewhat self limiting at a certain diameter pillar. It's not exactly the same kind of effect that's going on perfectly as LOCOS, but it's related. It gives you an idea that you can use these stress effects in processes to create structures that you wouldn't be able to otherwise create. If he didn't have this effect, he'd have a very good chance of really ending up with very few pillars that were of similar diameter. OK. So just as a point to make. And if you're working on quantum devices, you'll see a lot of work where people do shaped oxidation. They don't just make pillars. They make small structures with electron beam. They pattern a silicon on insulator layer, and they take advantage of corner effects to create funny looking shaped structures that are used for quantum devices. So shaped oxidation stress effects are used all the time today in research other than just people doing CMOS devices. So what I want to go on to beyond the stress is to talk about now a more atomistic or microscopic or point defect based model. So far we've been using macroscopic models. We write chemical equations. We talk about diffusion coefficients. We're not really looking at an atomic scale of what's going on at the interface or in the oxide. There's also a very atomistic picture of the process that turns out to be very important. It's important because it explains certain physical phenomena that are critical to fabrication such as oxidation enhanced diffusion, oxidation retarded diffusion, and these things that tell us that the oxidation process itself dramatically changes the point defect distribution in the near surface region. Most of the ideas are related to the fact that there is volume expansion going on and the need for free volume at that interface. 
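Before moving on, the self-limiting pillar behavior described above can be illustrated with a toy numerical model. This is not the model from the thesis; the exponential stress-retardation law and every constant below are assumptions chosen purely for illustration of how different starting diameters converge:

```python
import math

# Toy model of stress-retarded radial oxidation of a silicon pillar.
# Assumption (illustrative only, not the model from the thesis): the
# local oxidation rate falls off as exp(-r_c / r), mimicking stress
# that builds up as the pillar radius r shrinks.
def oxidize_pillar(r0_nm, rate_nm_per_min=0.05, r_c_nm=30.0,
                   minutes=600.0, dt=1.0):
    """Return the pillar radius (nm) after `minutes` of oxidation."""
    r, t = r0_nm, 0.0
    while t < minutes and r > 0.0:
        dr = rate_nm_per_min * math.exp(-r_c_nm / r) * dt
        r = max(r - dr, 0.0)
        t += dt
    return r

# Two pillars that start ~25% apart end up closer together, because
# the thinner pillar oxidizes more slowly (the self-limiting effect).
finals = [oxidize_pillar(r0) for r0 in (10.0, 12.5)]
spread_start = 12.5 - 10.0
spread_end = finals[1] - finals[0]
```

Because the larger pillar always oxidizes faster in this rate law, the spread in radii can only shrink with time, which is the uniformity argument made in the lecture.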
And so point defects play a role in that volume expansion. So let's go on to slide number 22. This is a kind of ugly looking equation. I don't really like to use it, so you don't have to look at every single term and understand it perfectly. But it sort of shows qualitatively what we're talking about. This is an equation we talked about in chapter 3 when we were discussing SiO2 precipitation and using gettering. But the same equation essentially applies here when we're oxidizing a silicon surface. And what it tells us is that we have a certain number of silicon lattice sites, the silicon sub SI, combining with a certain number of oxygen species and combining with a certain number of vacancies. Notice that vacancies are being consumed in this process, forming SiO2. Vacancies at the surface are being consumed, presumably, because we need space for that oxide to move, to push around a little bit. So consuming a vacancy will help you find extra space, essentially, at the interface. So I consume a vacancy on the left. I form SiO2, and I actually form-- on the right side I also create these interstitials, a certain number of interstitials. Plus stress, of course. We talked about stress. So this oxidation reaction happening right here at the borderline between the pink and the white region here at star, they can consume vacancies or they can generate interstitials. So I've got a vacancy going in, interstitial going out either way in order to provide the volume needed for the reaction. So we need room to put these oxygen atoms at that interface. And that room comes from vacancies and also picking out silicon atoms from the lattice, creating interstitials. So let's go on to slide 23. This generation of interstitials is extremely important because it is used to explain the non-local effects of oxidation, such as oxidation enhanced diffusion, which we'll show, or oxidation retarded diffusion. 
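The free-volume argument behind that reaction can be checked with a quick back-of-the-envelope calculation using the commonly quoted densities of silicon and thermal oxide:

```python
# Back-of-the-envelope check of the free-volume argument, using the
# commonly quoted densities for silicon and thermally grown SiO2.
N_SI = 5.0e22     # Si atoms per cm^3 in crystalline silicon
N_SIO2 = 2.2e22   # SiO2 molecules per cm^3 in amorphous oxide

# Each SiO2 molecule consumes one Si atom, so the thickness of
# silicon consumed per unit of oxide grown is the density ratio:
si_consumed_per_oxide = N_SIO2 / N_SI   # ~0.44

# Equivalently, each oxidized Si atom must expand into ~2.3x its
# original volume, which is why vacancies (free volume) help the
# reaction and why excess interstitials get squeezed out.
expansion = N_SI / N_SIO2               # ~2.27
```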
That's because the interstitial can diffuse very far from the interface and change the diffusivity of dopants. So this shows how these models can actually be used. At this interface, about 1 in 1,000 of the oxidized silicon atoms is injected into the bulk. So for every thousand atoms that I oxidize, one atom goes into the bulk that wouldn't be there ordinarily if I were not oxidizing. So it doesn't sound like a lot. Only one atom out of 1,000 of the ones oxidized at the surface goes into the bulk. OK, that doesn't sound like a lot. But their impact is huge in oxidation enhanced diffusion, oxidation retarded diffusion, and the growth of stacking faults. So it's a very important effect. And on slide 24, we have a picture from your text, a very schematic picture of what's going on when we do oxidation. So this is a LOCOS structure you recognize by now. Here's my nitride on the left. There's no oxidation taking place on the left. I am actively oxidizing on the right. You can see this thick layer of oxide that I've grown. At the surface here there are a number of processes taking place. This G is meant to represent the generation of interstitials or the injection. It's a certain rate of generation of interstitials that get injected and go into the bulk. Here they are. Here's an interstitial. There's also a flux of interstitials from the bulk to the surface, and they can recombine. So interstitials can be generated during oxidation. They can also find their way to the surface and they can recombine there. The interstitials can recombine in the bulk with a vacancy. They can find a vacancy and go away. They can diffuse over to this surface, which is a non-oxidizing or inert place, and recombine, or they can go into the bulk. And here, here's a buried dopant marker layer, which started out looking like this dark green. It was very narrow. And lo and behold, underneath where you're doing the oxidation, the boron diffuses a lot, so it becomes very wide.
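The 1-in-1,000 figure translates into a substantial flux. A minimal estimate, where the oxide growth rate is an assumed example value rather than a number from the slides:

```python
# Rough estimate of the interstitial injection flux, using the
# ~1-in-1000 figure from the lecture.  The oxide growth rate is an
# assumed example value, not a number from the slides.
N_SIO2 = 2.2e22                 # SiO2 molecules/cm^3 (one Si each)
growth_rate_nm_per_min = 1.0    # assumed oxide growth rate
growth_rate_cm_per_s = growth_rate_nm_per_min * 1e-7 / 60.0

# Si atoms oxidized per cm^2 per second at the interface:
si_oxidized_flux = N_SIO2 * growth_rate_cm_per_s

# ~1 in 1000 of those is injected into the bulk as an interstitial:
injection_flux = 1.0e-3 * si_oxidized_flux
```

Even at this modest assumed growth rate, the injection flux works out to tens of billions of interstitials per square centimeter per second, which is why the effect on dopant diffusion is so visible.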
So they come in here and they disturb the boron diffusion. They enhance it. And so the junction here, the width of this boron profile is now quite wide underneath the oxide where it was growing. Under the nitride where there was no oxidation taking place I have inert diffusion. There's very little boron motion. And these stacking faults, which are in the substrate. Let's say we have a way of introducing stacking faults. They are found to grow. They actually get longer under the region where the oxidation is. So these are all indirect means that people use to understand what's going on with the point defects in this model. So let's go on to page 25. I mentioned the stacking fault. I don't want to go through detailed crystallography, but stacking faults occur when you have an extra layer of atoms. So if I look here on the lower left, it has a certain stacking sequence. I insert this extra layer of atoms. It has a certain length in the crystal right at this point. So it's not a complete layer that goes all the way throughout the crystal, so it doesn't belong there. But I can grow this fault. I can make it longer within the crystal by adding extra silicon interstitial atoms, people believe, on either edge of it. So for stacking faults, people use their growth rate as a measure of the injection rate of silicon interstitials, depending on whether they grow or shrink. On slide 26 here are some actual photographs or optical micrographs. The surfaces were etched. Remember, we talked about defect etching. You can put it in a solution of acid that will preferentially etch where the crystal is imperfect. And in fact, these are stacking faults. These are pictures of them. They have a certain size here. The scale bar up here is 10 microns. After 10, 20, 30, 40, 50 all the way down to 60 minutes, these are all at the same magnification.
These stacking faults, these crystal defects have actually grown quite a bit at 1,200 degrees. This is during oxidation, during high temperature oxidation at the surface. These were down under in the bulk, so people believe that that oxidation was injecting interstitials and causing these defects to grow. In fact, if we go on to slide 27, this is some data from the literature on the stacking fault length here on the y-axis as a function of oxidation time at different temperatures. Here 1,200 is up at the top. This is 1,050. And so you can see the stacking fault length increases with time of oxidation, and the growth rate is faster for higher temperatures. So if we go here, 1,050, 1,100, 1,150, the growth rate is faster. Of course, the oxidation rate is going up as well. So this gave people maybe a hint that the faster you oxidize, the more interstitials you're injecting. The oxidation induced stacking fault growth rate is smaller for lower partial pressures of oxygen. So again, slower oxidation rates. This is an idea that the rate of oxidation determines the rate of injection of interstitials into the bulk. So let's go back. Let's go on to slide 28. This is the diagram I just went through. These are the different processes we talked about that contribute to this oxidation enhanced diffusion. So it's this generation at the interface where I'm oxidizing and the recombination, the balance of G and R, generation and recombination, that determine what your net flux of interstitials is into the bulk away from the surface. It's this generation rate G that's directly proportional to the oxidation rate. So if I oxidize faster at a given temperature by upping the partial pressure, I will generate a larger rate of injection of interstitials and I'll get more boron diffusion in the buried layer. And by the way, this buried layer can be 2, 3, 4 microns away. It doesn't have to be close. The interstitials diffuse very quickly and very rapidly, so this is the problem.
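The temperature dependence on slide-27-style data is the kind of thing one fits with an Arrhenius analysis. A minimal sketch of extracting an activation energy from growth rates at two temperatures, where the two rates below are hypothetical placeholders rather than values read off the actual plot:

```python
import math

# Arrhenius analysis: extract an activation energy from growth rates
# at two temperatures.  The two rates below are hypothetical
# placeholders, not values from the slide-27 data.
K_B = 8.617e-5                       # Boltzmann constant, eV/K
T1, rate1 = 1050.0 + 273.0, 0.05     # assumed growth rate at 1050 C
T2, rate2 = 1200.0 + 273.0, 0.50     # assumed growth rate at 1200 C

# rate = A * exp(-Ea / kT)  =>  Ea = k * ln(r2/r1) / (1/T1 - 1/T2)
ea_eV = K_B * math.log(rate2 / rate1) / (1.0 / T1 - 1.0 / T2)
```

With these placeholder rates the fit gives an activation energy of a couple of electron volts, the general scale typical of high-temperature solid-state processes.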
Oxidation can have effects at a distance. A micron or so away can greatly impact what's going on in the chip. So they diffuse away from the interface, and they enhance the boron diffusion rate. Here's an actual simulation of oxidation enhanced diffusion. What happened was the surface started out by being implanted uniformly with a certain dose of boron. So I have a very thin layer of boron up near the surface, and then we grow a pad oxide and we grow a nitride layer on the left. So on the left there is going to be no oxidation taking place. We put it in the furnace at 850 for an hour, and you see an oxide has grown on the right, not on the left, which makes sense. But look at the boron. The junction depth for the boron is much deeper here. You can get the junction depth, say, from the region between the green contour and the blue. The junction depth is a lot deeper underneath the place where the oxide grew. So this tells us right away that we are getting oxidation enhanced diffusion of that boron. This is something that's been simulated. Doping effects on oxidation. So besides the oxidation injecting interstitials, people also noticed that at low temperatures, depending on the wafer doping, they would get a different oxidation rate. So this is kind of interesting. If you look at A-- so I'm plotting oxide thickness versus time-- A is lightly doped silicon, around 10 to the 15. F is 10 to the 20 or 3 times 10 to the 20. Much more heavily doped with n type, with phosphorus. So especially at low temperature-- at high temperature, not so much of a dispersion, but at low temperature, a large dispersion. So this needs to be accounted for in our models, because you have lots of different doping concentrations on the surface of a wafer when you're making a chip, and you'll get different oxide thicknesses. You need to take this into account. People, in fact, not only found the thickness to be different. They extracted the B and the B over A parameters.
So as a function of boron doping-- so this is the oxidation rate, or these rate constants B over A and B as a function of boron concentration. Oh, I'm sorry. It's actually phosphorus doping in this reference. The B over A parameter is the one impacted. So above a certain concentration of dopants, here above about 10 to the 19, B over A takes off. So it's going up quite rapidly. The diffusion through the oxide doesn't seem to be affected. You can kind of imagine that I've got doping at the interface. You expect there's some effect of that doping on the reaction rate. And in fact, if we go to slide 32, people had an idea of what it might be. Maybe it's a chemical effect of the dopant, perhaps, but people liked this model better. People liked the idea that the vacancies available in the near surface region are important to the oxidation process. So anything that caused the vacancy concentration to go up might cause the oxidation rate to also go up. But what do we know about highly doped regions from your homework that you did? When you increase the doping in a certain region, the total number of vacancies goes up, right? Because you move the Fermi level and you create charged vacancies, and when you count up all the vacancies, they go up. So remember these equations you used on your homework that you're handing in today. If I'm intrinsic, I'm here. My Fermi level is at mid-gap. I'm dominated by neutral vacancies. They have a certain concentration. 10 to the 13 or whatever. As I move the Fermi level up, as I make it more and more heavily doped, I create a lot of V minus and V double minus. So I could have a hundred times the number of vacancies at the surface. Oxidation needs vacancies to take place. That's the hypothesis. So people explain the enhancement in the highly doped oxidation case as due to extra vacancies that are around at the surface. That's one potential explanation. And in fact, I just reminded you, you did this in your homework. You did some calculations on slide 33.
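The charged-vacancy bookkeeping from the homework can be sketched in a few lines. As n-type doping moves the Fermi level up, the V minus and V double minus populations grow as (n/ni) and (n/ni) squared; the intrinsic ratios k_m and k_mm below are assumed illustrative values, not measured constants:

```python
# Sketch of charged-vacancy statistics: as n-type doping moves the
# Fermi level up, the V- and V-- populations grow as (n/ni) and
# (n/ni)^2.  The intrinsic ratios k_m and k_mm are assumed
# illustrative values, not measured constants.
def vacancy_enhancement(n_over_ni, k_m=0.3, k_mm=0.05):
    """Total vacancy concentration normalized to intrinsic material."""
    total = 1.0 + k_m * n_over_ni + k_mm * n_over_ni ** 2
    intrinsic = 1.0 + k_m + k_mm
    return total / intrinsic

light = vacancy_enhancement(1.0)    # intrinsic: no enhancement
heavy = vacancy_enhancement(30.0)   # heavily doped, n ~ 30 ni
```

Because of the quadratic V double minus term, even a modest Fermi-level shift can multiply the total vacancy population by an order of magnitude or more, which is the "hundred times" effect mentioned in the lecture.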
The vacancy concentration that's been calculated for you here as a function of doping concentration, look where it takes off. Somewhere right around 10 to the 19 it really starts to take off. That's exactly where Ho found that the B over A coefficient took off-- right where the vacancy concentration is going up as a function of doping. So let's just say on slide 34 that these vacancies are available to provide sites, these extra vacancies, for the oxidation reaction. Then we can imagine writing B over A equal to something, some number R1, where R1 has to do with all the mechanisms other than vacancy driven processes. Obviously oxidation is not only controlled by vacancies. It's controlled by a lot of things. But there must be some term added in, this k times CV total. So the term on the right represents the vacancy driven process. It's believed to be directly proportional to the concentration of the total number of vacancies in all charge states. So doing that, you can then rewrite the B over A parameter to be B over A in intrinsic material-- lightly doped, when you're not extrinsic-- times 1 plus some exponential type of factor-- this is empirically determined here as a function of temperature-- times something that goes like CV over CVI. So that's the concentration of vacancies in this heavily doped material divided by that in intrinsic material, minus one. So if I'm intrinsic, this goes to zero and this term goes away. As this increases, this ratio goes up above one. This term starts to kick in. Interestingly, though, when we think about this now, this model for the vacancy dependence is only going to depend on the electrically active concentration. Because remember, we said we move the Fermi level up. So if we add phosphorus to the wafer, it's only the amount of phosphorus that contributes electrons that's going to move the Fermi level around. So people had an idea on how to test this. What if I compensate the wafer? OK.
Well, actually let me go on. I'll talk about that next. Let me just show slide 35 to compare boron and phosphorus. We know from our calculations, actually, that n type regions, just because of the point defect statistics, tend to have higher charged vacancy concentrations at a given doping than p type, than boron. And in fact, what we see is that the B over A parameter is more impacted in n type silicon, very heavily doped, than it is in p type. That's the difference between the right and the left hand side on page 35. That's some hint. What I started to talk about, which I think to me is a little more convincing that this really maybe is due to vacancies: if you look at the oxide thickness as a function of the electrically active phosphorus-- this solid line is for uncompensated. What is uncompensated? I just add phosphorus at a certain level. Here I add it at 10 to the 19, 5 times 10 to the 19, 10 to the 20, and 5 times 10 to the 20. And I don't add any other dopants. No boron. As the phosphorus goes up, the vacancy concentration goes up and so does the thickness. Now the dashed line is material that's been compensated. What does that mean? Well, what that means is when I add 10 to the 20 phosphorus, I add 10 to the 20 boron. There's still 10 to the 20 phosphorus there and 10 to the 20 boron, but electrically I haven't added any extra electrons or holes because they compensate each other in terms of dopants. So the electrically active concentration has not gone up when I add as much boron as I add phosphorus. If you add equal amounts of p and n type doping, I end up with material that's compensated. I haven't moved the Fermi level. I haven't created a lot of excess charged defects. And in fact, he found that there wasn't that much increase in the oxidation rate when he added a lot of phosphorus but compensated with boron. An equal amount of boron.
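The compensation argument follows directly from charge neutrality, since the electron concentration depends only on the net doping. A minimal sketch, where the intrinsic carrier concentration at oxidation temperature is an assumed round number:

```python
import math

# Compensation check: n depends only on the NET doping.  Adding
# equal boron and phosphorus leaves n ~ ni, so the Fermi level (and
# the charged-vacancy population) barely moves.
NI = 7.0e18  # assumed ni at a high oxidation temperature, cm^-3

def n_electrons(Nd, Na, ni=NI):
    """Charge neutrality: n = net/2 + sqrt((net/2)^2 + ni^2)."""
    net = Nd - Na
    return net / 2.0 + math.sqrt((net / 2.0) ** 2 + ni ** 2)

uncompensated = n_electrons(1e20, 0.0)   # strongly extrinsic
compensated = n_electrons(1e20, 1e20)    # n ~ ni despite 1e20 P
```

Even with 10 to the 20 phosphorus atoms physically present, the compensated sample sits at essentially the intrinsic electron concentration, so a purely chemical effect of the phosphorus would survive the compensation while a Fermi-level effect would not.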
So if it were a chemical effect, just the amount of phosphorus atoms-- let's say a phosphorus atom chemically causes oxide to grow faster-- it wouldn't matter whether you added an equal amount of boron. But if you add an equal amount of boron, you don't change the Fermi level. So this is an argument that says to me that it really is the electrical concentration that's changing. It's the movement of the Fermi level. It doesn't prove it's a vacancy effect, but it gives a little bit more weight. Here on slide 37 is just a simulation of this type of effect. What we have on the left hand side is local oxidation taking place here at 800 degrees for 30 minutes, and you're seeing a thicker oxide-- about five times thicker-- growing in this heavily doped region. So this region over here on the right is heavily phosphorus doped. So it was ion implanted prior to oxidation. So you need to take this into account. If you're making a device and you implant a region for a source drain and then you go to oxidize it, it's going to have a different oxidation rate compared to those parts of the chip that are being oxidized that don't have high doping in them. So in SUPREM you can easily take that into account. And the last thing I want to cover is here on slide 38. We talked about oxidation. It has all these point defect effects. We also talked last time or the time before about interface charges that come along with oxidation. And this model on slide 38-- while we're talking about atomic level things-- is an atomic level cartoon or picture of what is actually causing the fixed charge and the density of interface states. Here's my amorphous SiO2 up here and my crystal silicon. In the near interfacial region, there are believed to be a few interstitial silicon atoms. Remember, some interstitials get injected into the bulk. Some go up, out. There's a few that get stuck near the interface.
This little extra silicon interstitial has a positive charge on it. It's not bonded to oxygen atoms, so it has an excess positive charge. That unbonded silicon interstitial-- the extra dangling silicon interstitials that are there, that are positively charged-- is believed to be responsible for QF. Remember, fixed charge doesn't change with bias. The density of interface states, which does change, is believed to be associated with dangling silicon bonds right at the interface, and we can passivate these because they're not in the oxide. They're right at the silicon dioxide surface. We can passivate these by adding hydrogen. So when you build a CMOS or you build a MOSFET, the last step is 450 degrees in a hydrogen ambient. The hydrogen diffuses in. It satisfies these dangling bonds. It bonds here and it covers up that charge. The hydrogen can't do anything about this silicon. It's unoxidized. It needs oxygen atoms. So that's why when you do a forming gas anneal here, the last step, you can reduce the density of interface states. The fixed charge is a little different. To reduce it you don't anneal in hydrogen. This is the so-called Deal triangle. What you do is you anneal the oxide after oxidizing it in an inert ambient at high temperature. So the idea would be that you try to get some of those silicon interstitials that were stuck there-- they're unoxidized-- you give them time to diffuse away from that interface so you can get rid of that fixed charge. That's exactly what you will find. If you run an oxidation program in the furnace, what people typically do, they grow an oxide, 800 or 900 for an hour. The last step of the program is to let the wafer sit at 800 or 900 in the tube with no oxygen flowing. Just argon or nitrogen. And that last half hour anneal is the QF anneal. So if you grew an oxide in dry oxygen at 800 and you didn't anneal it, you might have on (111) silicon about 6 times 10 to the 11 per square centimeter for your QF. If you anneal it at 800, you can get it down.
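As an aside, the practical impact of these QF numbers can be estimated from the flat-band voltage shift, roughly q times Nf over Cox. The 20 nm oxide thickness below is an assumed example value:

```python
# Why QF matters electrically: a sheet of fixed charge at the
# interface shifts the flat-band voltage by roughly q*Nf/Cox.  The
# 20 nm oxide thickness is an assumed example value.
Q = 1.602e-19        # elementary charge, C
EPS_OX = 3.45e-13    # oxide permittivity, F/cm (3.9 * eps0)

def vfb_shift(nf_per_cm2, tox_cm):
    cox = EPS_OX / tox_cm            # oxide capacitance, F/cm^2
    return Q * nf_per_cm2 / cox      # magnitude of the shift, volts

# Unannealed vs inert-annealed (111) oxide, using lecture QF values:
shift_unannealed = vfb_shift(6e11, 20e-7)   # QF ~ 6e11 cm^-2
shift_annealed = vfb_shift(1e11, 20e-7)     # QF ~ 1e11 cm^-2
```

At this assumed thickness, the unannealed fixed charge shifts the flat-band voltage by roughly half a volt, which is why the inert QF anneal is a standard last step of the oxidation program.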
If you anneal it in nitrogen or argon, you can get it down to about 1 times 10 to the 11 on (111) silicon. On (100) silicon, all the numbers will be about three times lower. If you anneal it at a higher temperature, it'll also come down. So the whole idea is to anneal in an inert ambient at a relatively high temperature right after your oxidation. OK. This is the last slide. Just to summarize chapter 6 and what we've talked about today. We now have to control the oxide thickness to the atomic level for thin gate dielectrics. There's a basic Deal-Grove mechanism which involves diffusion through the oxide and reaction at the interface. This doesn't work perfectly, but in a lot of situations people can correct the Deal-Grove model. This non-planar oxidation is impacted by the orientation of the surface, two dimensional diffusion through a shaped oxide, and the stress effects. Stress retards or slows down the oxidation by slowing down the interface reaction k sub s and slowing down the diffusion through the oxide. And these silicon interstitials are injected into the bulk during oxidation, and we're going to come back and revisit them in the next few lectures or over the next several weeks because they're going to dramatically impact diffusion of boron and other dopants. So that's about what I have, I believe, for today. Your homework is due. You can bring that up right now and put it in this orange folder. If you didn't pick up your homework last time, it's in the back with the handouts. I'm going to be out next week. Your TA, Maggie-- Maggie, you want to raise your hand in the back?-- is going to give lectures as usual. So we'll start diffusion on Tuesday. And on Tuesday, then, your new homework goes out.
MIT 6.774 Physics of Microfabrication: Front End Processing, Fall 2004. Lecture 8: Dopant Diffusion, Need for Abrupt Profiles, Fick's Laws, Simple Analytic Solutions.

JUDY HOYT: We're going to begin this lecture on handout number 14. We'll be moving now to chapter 7. This will be our first lecture on chapter 7 on the topic of dopant diffusion and profile measurement. So far, we've discussed a number of major topics, including the fabrication of wafers themselves and cleaning, and point defects in silicon. And the last couple of lectures, we've been talking about the details of silicon thermal oxidation, including two-dimensional and stress effects. During the next few lectures, including this one, we're going to discuss the accurate control and placement of active dopant regions through a process called dopant diffusion. Today I'm going to give an overall introduction to diffusion in silicon from chapter 7. So let's go on to slide number 2. And here I'd like to give an introduction to the basic concepts of why we care about the details of diffusion in silicon. And what's being shown here is obviously a silicon MOSFET. We see the source, drain, and gate regions. And each one of these regions has a certain resistance associated with it. And that resistance turns out to be dramatically dependent on the placement of the atoms themselves and of the doped regions. And not only that, but the placement of those regions determines many of the so-called short channel characteristics of MOSFETs that we'll talk about. So as the device shrinks by some scale factor, say k-- we make some dimensions smaller-- the junction depths should also scale by k to maintain the same electric field patterns in the lateral and vertical dimensions. And so that's an important reason we need to control the doping profiles.
And finally, the doping of other materials, not just the silicon itself, but of the polysilicon gate, affects things like gate depletion and limits how well the gate voltage controls the channel potential. So we really need to understand the placement of these atoms. Let's go on to slide 3. Since we just talked about the idea of the resistance of these different regions, I just want to remind people of the general form for how one calculates the resistance of either a bar or a sheet. We know that the resistance in ohms can be calculated by the product of rho, the resistivity of the material, which has the units of ohm centimeters, times the length of the resistor bar divided by the cross-sectional area through which the current flows. So the resistance is rho L over A. Shown on the left is a picture of a cube. So let's say we have a cube, and we can calculate its resistance as just the resistivity times the length of the cube divided by its cross-sectional area, where the resistivity of the cube is given by, essentially, the electric field divided by the current density. So now, if we look on the right, instead of having a cube-- in semiconductors or in silicon, we typically don't have a cube or a chunk of material-- we're usually measuring the resistance of a thin sheet in the near surface region. And that sheet generally has length and width dimensions that are much larger than its thickness. So if we have a shallow junction, as pictured on the right, we can calculate its resistance as follows, again, using the same formula at the top of the page. It's the resistivity in ohm centimeters times the length, now divided by the area. But now we're assuming here that we have a square sheet, so that the length and the width of the square are equal-- in this case, equal to W. So in that case, the W's cancel out because the length is equal to W.
And when we divide by the cross-sectional area through which the current flows, it's equal to W times xj. The W's cancel out. So we end up with the sheet resistance, which is just given by rho, the resistivity, divided by xj. So that's a simple way and a convenient way of calculating the resistance of various structures in semiconductor devices. So let's go on to slide number 4. What we just talked about was for a sheet that was uniformly doped to a given doping concentration. If the doping concentration throughout the sheet is non-uniform in depth, then we can still calculate the sheet resistance, but we need to do an integral. The sheet resistance is still rho over xj, but at this point, we need to integrate. So we integrate 1 over the resistivity. So it's 1 over the integral of q times the carrier concentration N minus the background doping concentration NB-- so that's just the net doping concentration-- times the mobility, which is generally a function of the doping concentration, dx. We integrate that from 0 to xj. You can do that numerically, essentially, for an arbitrary doping profile. The equation has actually already been integrated numerically for certain special analytic profiles, and we'll talk about those results later. So that's basically how we get from the doping profile to the electrical properties such as the sheet resistance or the resistance of a region. Experimentally, we measure the sheet resistance using a four-point probe setup-- I think we discussed this a couple of lectures ago-- or a Van der Pauw structure, as discussed in your text. Let's go on to slide number 5. Here, again, I'm picturing that same MOSFET structure with the resistance of the different regions.
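That integral is easy to evaluate numerically for an arbitrary profile. A minimal sketch, where the Gaussian profile and the constant-mobility assumption are illustrative choices rather than values from the lecture:

```python
import math

# Numerical version of the sheet-resistance integral:
#   Rs = 1 / integral_0^xj [ q * (N(x) - NB) * mu(x) ] dx
# The Gaussian profile and the constant mobility are illustrative
# assumptions, not values from the lecture.
Q = 1.602e-19    # elementary charge, C
MU = 100.0       # cm^2/V-s, crude constant-mobility assumption
NB = 1e16        # background doping, cm^-3

def profile(x_cm, n0=1e20, sigma_cm=0.05e-4):
    """Gaussian dopant profile peaked at the surface."""
    return n0 * math.exp(-(x_cm / sigma_cm) ** 2 / 2.0)

def sheet_resistance(xj_cm, steps=2000):
    """Midpoint-rule integration of the sheet conductance."""
    dx = xj_cm / steps
    conductance = 0.0
    for i in range(steps):
        x = (i + 0.5) * dx
        net = max(profile(x) - NB, 0.0)
        conductance += Q * net * MU * dx   # siemens per square
    return 1.0 / conductance

rs = sheet_resistance(0.2e-4)   # ohms per square for xj = 0.2 um
```

A real calculation would use a doping-dependent mobility model instead of the constant MU, which is exactly why the integral generally has to be done numerically.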
And in general, as a rule of thumb, of course, we'd like the resistance of the regions that are extrinsic to the device, such as the contact resistance, the source and drain resistance, and the resistance of these extension regions, hopefully should be no more than about 10% of the channel resistance. That is, we'd like to have the intrinsic resistance of the channel dominate the overall resistance of the device because that's what the gate has control over. So if we apply this 10% criterion, you can write the equation that's shown on slide 5 as follows. That 2 times the contact resistance-- so that's the resistance of the metal contact-- plus the resistance of the source region, as shown here, plus the resistance of the drain, plus 2 times the resistance of the source drain extensions. Remember, these shallow extensions are the shallow xj regions that attach the source and drains, essentially, to the channel region. We'd like the sum of those terms to be less than or equal to 1/10 the channel resistance. So, in general, as we scale devices, the channel length becomes shorter and the channel resistance goes down. So we similarly need to scale down these extrinsic resistances of the source drain and the extension regions. So to reduce those resistances-- those parasitic regions-- we would like to increase, in general, the junction depth xj. However, there's a problem that if we make the junctions deeper, it will make it easier for voltages at the drain to affect the current flow in the channel because of the way the field patterns are established. So this two-dimensional spreading of the electric field from the drain can attract carriers from the source, even when the device is supposedly in the off state. So we end up with something called drain-induced barrier lowering if the junctions are too deep. 
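The 10% criterion above is simple enough to express directly. All the numbers in the example calls below are made-up illustrative values, in ohms:

```python
# The 10% criterion from slide 5: the extrinsic resistances should
# sum to no more than one tenth of the channel resistance.  All the
# numbers in the example calls are made-up illustrative values (ohms).
def passes_budget(r_contact, r_source, r_drain, r_ext, r_channel):
    extrinsic = 2.0 * r_contact + r_source + r_drain + 2.0 * r_ext
    return extrinsic <= 0.1 * r_channel

ok = passes_budget(r_contact=20, r_source=30, r_drain=30,
                   r_ext=40, r_channel=2000)    # 180 <= 200
too_small = passes_budget(r_contact=20, r_source=30, r_drain=30,
                          r_ext=40, r_channel=1000)  # 180 > 100
```

The second call illustrates the scaling squeeze: halving the channel resistance (shorter channel) makes the same fixed parasitics fail the budget, so the extrinsic regions must scale down too.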
So this results in, for device designers, kind of a fundamental design tradeoff for MOSFETs-- that is, a design tradeoff between the series resistance versus the DIBL, or the ability of the gate to control the current and turn the device on and off. So if we go on to slide number 6, essentially what we're saying here is that there is a major challenge: we need to keep the junctions shallow so that DIBL and short-channel effects are reduced as we scale. But at the same time, we need to keep the resistance of the source-drain region small so that we can maximize the current drive and get the maximum amount of current out of the device. And these are conflicting requirements. And you can see the effect of these conflicting requirements to a certain extent by examining this chart. This is a chart that I took out of the text. It's a little bit dated in the sense it's from the 1997 Technology Roadmap for Semiconductors. But since it's consistent with what's in your text, we'll go ahead and look at it. If you look at, say, the last three rows in the chart, you'll see the contact region junction depth, contact xj, from, say, the year 2000 being in the range of 50 to 100 nanometers. And as we go to smaller and smaller channel lengths out further in time, that's scaling down to shallower and shallower junction depths at the channel. So right near the channel, or in the source-drain extension regions, you can see that's even much thinner, and that also scales down with time to thicknesses by 2009 on the order of 15 to 30 nanometers-- quite shallow. And at the same time as we're scaling these junctions to maintain the good electrostatic control, the drain extension concentration-- so that's the doping concentration in the extension regions-- is going up dramatically, say, from 10 to the 19th up to 10 to the 20 or perhaps even higher. And that higher doping requirement is arising from the fact that we're making the sheet shallower.
As the junction depth gets smaller, to compensate for that and have the resistance go down, we really need to up the dopant concentration. And there's a fundamental physical limit on how much dopant we can put in the silicon and how much of it will be electrically active. So this is becoming a real problem. We need to find new ways to activate dopants to higher levels if we're going to be able to manage this design tradeoff. So let's go on to slide 7, again, which is that same picture, to remind you. So, basically, the ITRS requirements in the future really require or dictate that we know the dopant positions in the device with almost atomic-scale accuracy in both two-dimensional and three-dimensional profiles. So being able to scale the device really amounts, in the front-end processing, to a large extent to being able to control very precisely the shape of the doping profiles-- where the dopants end up. And what I'm going to spend some time on in the next few slides is giving you examples from the present literature on device scaling, which emphasize or give you some sense of how these doping profiles need to be controlled and what their impact is on device performance. So, again, perhaps you won't understand the detailed device physics, but it's just to give you a flavor for why studying dopant diffusion is such an important topic. So let's go on to slide number 8 and talk about a topic called the short-channel effect. And this basically takes place when the distance between the source and drain-- that is, the channel length L-- becomes comparable to the MOS depletion width in the vertical direction. And then the potentials on the source and drain regions themselves end up having a strong effect on the control of the current in the device. So in words, that's a way of expressing the short-channel effect. And what I'm showing in the top equation on slide number 8 is the equation we looked at briefly-- we didn't derive.
We just wrote it down in class several lectures ago-- for the threshold voltage. And this is for the threshold voltage in a MOSFET that has a constant doping in the channel-- a very simple profile. It's just constant. And it's a relatively long channel device, so you will not see this short-channel effect. The gate length L is much, much longer than the depletion widths. And we said we can write down these three terms, roughly, to calculate the threshold voltage. And remember, the threshold voltage is an important parameter as far as the device and circuit designers are concerned. Now, when we get to the short-channel case, which is shown below, that threshold voltage equation has to be modified to a certain extent. And, in fact, the third term, which is represented in the second equation by the bulk charge QB prime over W L Cox, ends up being smaller than it would be in the long-channel case. This QB prime is smaller, and that third term being smaller affects the Vt-- the threshold voltage actually goes down. And this diagram on the lower left is an explanation that people have proposed to explain this. And it's essentially because of charge sharing-- some of the charge under the gate is balanced by the charge from the source-drain regions. So that effectively gives you an effective channel length, L prime, that is actually shorter than the actual gate length. So there's this tendency as we scale devices for the threshold voltage itself to drop or to roll off. And if you want to learn more about this, I've got a reference on the bottom. There are a number of books, but this is one by Taur and Ning where you can look at this so-called Vt roll-off effect. If we just go on to slide number 9, you can actually see some data on threshold voltage roll-off. The upper plot is for nMOSFETs. And the vertical axis is the threshold voltage Vt as a function of channel length.
And these curves-- one is for the linear regime with a low drain-to-source bias. The triangle is for the saturation regime with a source-drain bias of 3 volts. And you can see, indeed, the threshold voltage is dropping or rolling off going towards 0 as we decrease the channel length-- same trend for the pMOSFET. And that's an effect that needs to be controlled as we scale the devices. And the way we partly control that is by controlling the dopant profiles underneath the gate. So if we go on to slide number 10, it's discussing something called channel doping profile engineering. Essentially, this refers to optimizing the final doping profiles of the p-type regions in the nMOS that is under the gate, or the n-type regions under the gate in the pMOS, in the channel region. So this is just a schematic cartoon picture of a cross-section of a MOSFET. And this channel doping profile engineering is referring to these red regions-- both these dark red so-called halo regions that are marked here, that go around the perimeter of the source-drain extensions, and these lighter red retrograde well regions. These two-dimensional dopant profiles are engineered or designed to minimize these short-channel effects. So let's go on to page 11 and give you an example, taking, in depth-- going right through the center of the channel-- the concentration of the channel doping as a function of depth. And there are two different doping profiles that are shown here. One is for a uniform well-- uniformly doped phosphorus. It's reasonably constant doping, around 2 times 10 to the 17th, at the surface. And the other-- the blue line-- is the so-called super steep retrograde well, where you have a certain well doping. And then near the surface, the profile falls off in doping very rapidly. These two profiles, as it turns out, in the channel region give very closely the same threshold voltage for a given device. But you get better leakage current control.
So you can scale the device to a smaller L effective. So, again, this is an example of how you need to control the doping profiles in order to optimize the device design. Let's go on now to slide number 12. And this is sort of an extreme case of scaling, where we're trying to scale the MOSFET gate length down to 25 nanometer dimensions. And in order to do this, if you look back in the literature, there's a paper by Taur, Wann and Frank in 1998 proposing the so-called super-halo profile. And what the super-halo profile is is a fairly sophisticated ion-implanted and then diffused profile of boron doping, say, for an nMOSFET, that creates a fairly complicated two-dimensional boron doping profile. And what you're looking at here is the MOSFET. And in the central region here underneath the channel, you see these p-type doping contours. It looks sort of butterfly shaped. There's one contour shown here at around a doping of 10 to the 19th and another contour shown at about 5 times 10 to the 18th. And you can see this is designed to put reasonably high boron doping against the source-drain regions to help stand off the field from the source-drain regions. And so that refers to this halo design-- very sophisticated. And there's also the doping of the source and drain regions themselves, designed to be quite abrupt in both the vertical and the lateral dimensions. So what exactly are the effects of these doping profiles on electrical performance? Let's go on to slide number 13, which shows the short-channel threshold voltage roll-off-- basically how the Vt varies with channel length. So on the y-axis is the threshold voltage of the device. The x-axis is the channel length in nanometers. And there are a couple of different designs here that are shown. The dashed line with the open squares refers to the retrograde. So that's not the super halo, but a more classical super steep retrograde profile.
You can see its threshold voltage rolls off quite rapidly as we shrink the channel length below 30 nanometers. So that's not going to work. But the super-halo profiles shown by the diamonds and the stars have much flatter, nearly flat, short-channel Vt roll-off characteristics. And furthermore, the Vt roll-off is not that sensitive to the vertical junction depth, as you can see by comparing the diamonds to the stars. So this lower variation of the Vt with L effective, or with channel length, allows a larger design window, which we need because there are always going to be some process variations in the channel length across the wafer. And this enables the technologists to push the channel length down to smaller dimensions. So it's not so much a fundamental improvement in device performance, but it really enables you to manufacture circuits with these shorter channel devices. And again, a lot of it boils down to controlling the detailed doping profiles in the source-drain and in the channel regions. Let's take a look at slide 14, which, again, is another example of the fact that not only do we care about the doping in the vertical direction, of course, but the dopant profiles for the source-drain dopants themselves in the lateral direction are very important-- because it's not just the junction depth, but also the lateral gradient of the source-drain doping. So this is a plot of the threshold voltage versus channel length again. But this time, the different curves refer to different lateral source-drain gradients. So the top curve refers to a lateral gradient-- that is, how quickly the arsenic doping profile rolls off-- of about 2 nanometers per decade. We've got curves at 4, 8, and 16. And you can see that for lateral gradients larger than about 4 nanometers per decade, the Vt roll-off is just too large. The threshold voltage is approaching 0.
You wouldn't be able to make a 25 nanometer MOSFET-- so, again, illustrates the importance of controlling the lateral doping profile and of controlling diffusion processes themselves. So given that brief introduction to the electrical effects, let me go on now on slide number 15 and talk about dopant diffusion fundamentals. So I've tried to make the point that understanding these profiles in detail is important. And that's what we're going to do in the next several lectures. So what is diffusion? Diffusion is really the redistribution of atoms from regions where they exist in high concentration to regions of low concentration. Diffusion occurs essentially at all temperatures. But the diffusivity-- or the diffusion coefficient-- has an exponential dependence on temperature. So above a certain temperature is when the diffusion rates really become large. In silicon IC processing, there are two different steps that we refer to in diffusion historically. The first step was so-called predeposition. And what this refers to is that you had an initial step in which the dopants were introduced into the silicon wafer with a required integrated dose into the substrate. Originally, in the early days, this predeposition step to introduce the dopants was done by diffusion of the dopants into the wafer from a doped glass or by introducing into the silicon by heating in a doped gas ambient. The pre-dep is rarely done these days in that particular way. In the more modern technology, it's usually done by ion implantation, which is a process that we're going to discuss later and is covered in detail in chapter 8. Let's go on to slide number 16, which very pictorially illustrates the predeposition and the drive-in process. The second process in creating a region of the wafer with a certain doping is what is typically referred to as the drive-in. 
This is a subsequent anneal after the pre-dep that then diffuses and redistributes the dopant, giving the required junction depth that you need to get the right resistance and giving you the right profile or surface concentration, hopefully. So, again, schematically we have two processes going on here. The first one would be the ion implant step or the pre-dep, which would result in the bright orange or red region. And that has introduced a controlled integrated dose of atoms per square centimeter into the silicon. And then, without introducing any additional atoms but keeping the dose constant, we then diffuse in-- or drive in-- that profile. And that would then give us the final junction depth, represented by the lighter orange region. So let's go on now to slide number 17. And here I'm comparing, somewhat qualitatively, these two different methods of doing predeposition. In the left hand column, we're talking about some of the characteristics of doing ion implantation of the atoms. And in the right hand column, we're talking about doing the more old-fashioned solid- or gas-phase in-diffusion in order to do the pre-dep. Now, ion implantation-- what are some of its advantages? We'll see these when we talk about it in chapter 8. It's done at room temperature, essentially, so you can mask it with simple materials like photoresist. It gives you, probably most importantly, very precise control of the number of atoms that are introduced per square centimeter into the substrate. It also gives you very accurate depth control. So those are key advantages. The problem with it, of course, is that, as the implant process occurs at high energies, it actually damages the crystal to a certain extent. And we have to heal this by an annealing process. But unfortunately, the damage itself can enhance the diffusion rate. And we're going to spend some time in this course talking about this transient-enhanced diffusion.
The dislocations or extended defects associated with the damage can lead to junction leakage, which is not desirable. And you may have some channeling of the implant. We'll talk about how that affects the profiles in chapter 8. The advantages of the pre-dep by gas phase are that there's no damage created by this process, and it can be done in batches. But it has some serious limitations. Usually, you are limited to introducing the dopant at the surface at a high concentration-- the solid solubility. It's very hard to achieve low surface concentrations without a long drive-in step. And so it's hard to control the shape of the profile to certain types of shapes. And low-dose predepositions are very difficult, and that's a major limiter. So, as I said, except in very special cases, people typically use ion implantation for the predeposition step. Let's go on to slide number 18. Again, if we are talking about predeposition, in that case the dopants are typically introduced at their solid solubility limit. So you have some atmosphere of gas, say, and you introduce the wafers at high temperature into this gas atmosphere. And at the surface, you would end up with the solid solubility of that dopant. And just to give you an example here of what some of the solubilities are, this plot shows solid solubility as a function of temperature. And, say, if you are doping something with boron, for example, and you're heating a wafer up to 1,000 degrees, the solid solubility is somewhere in the range of 2 to 3 times 10 to the 20th. So the boron atoms are soluble in bulk silicon up to that value. And above that, presumably, they start to precipitate out into another phase. So it gives you an idea of the surface concentration you might get of these dopants if you did a predeposition at that temperature. Now, if you go on to slide 19, it turns out there's a subtle difference.
And a point we want to make in this course is that dopants also have what's called an electrical solubility that is different from the solid solubility that we defined according to precipitation. The electrical solubility refers to the maximum doping concentration, in terms of electron density per cubic centimeter, that you can achieve with that dopant. And that generally varies with the temperature, as we see here. This is a plot of the maximum electron concentration you can achieve using arsenic in silicon as a function of temperature. And there are a lot of different points on this curve from different measurements in the literature. But they generally follow roughly this straight line. And so, again, what this is saying is that at 1,000 degrees, if you look at the curve, you can get something like 3 to 4 times 10 to the 20th electrons per cubic centimeter by introducing arsenic into the lattice. If you introduce more arsenic than that, it may still be below the solid solubility, but you won't get any more electrons. It's not electrically active. It may not precipitate until you get up into the 10 to the 21st range. So there's this intermediate range, say, at 1,000 degrees, between 3 times 10 to the 20th and 10 to the 21st, in which you may not see silicon-arsenic precipitates, but you do not get any higher electron concentration. That turned out to be a bit of a mystery to people for a number of years. They couldn't see precipitates, but they knew they weren't getting the electron concentration above a certain number. And it was subject to a lot of discussion in the literature. In fact, if you go on to slide number 20-- I took this from your text-- people eventually came up with a number of models to try to explain how it is you can get more arsenic in the lattice if it does not precipitate, and yet it doesn't contribute electrons to the doping in this range.
And here on the left, I'm imagining arsenic in the lattice-- the pink atom in the silicon lattice-- in a relatively lightly doped sample, say, in the 10 to the 20th range or so-- heavily doped, but not too high. And what you see is arsenic generally surrounded by four silicon atoms. And it donates its extra fifth electron, which is not covalently bonded, to the conduction band, and you can get a free electron. Now, on the right side is shown a hypothetical case where you have, say, a lot of arsenic on the lattice. So you might have 8 times 10 to the 20th or 10 to the 21st or something, in which people still didn't see a second phase. They didn't see precipitation happening. What you might have is, say, four arsenic atoms near each other-- these four arsenic atoms surrounding, say, a vacancy-- a silicon vacancy. So this As4V is a complex that people have hypothesized, which would enable you to introduce four arsenic atoms in the vicinity of the vacancy. And yet no electrons will be donated-- no free electrons. So these four arsenic atoms would be essentially electrically inactive, and yet sit essentially on substitutional lattice sites at high arsenic concentration. So this might be one way of accounting for the fact that the electrical solubility is a little lower than the solid solubility. So, again, we just point this out for a couple of reasons, mainly because it points to the fact that as we increase doping, we don't always get an increase in the electron concentration, and therefore a decrease in resistance. OK, so let's go on to slide number 21. And we're going to consider macroscopic models first-- macroscopic models for diffusion. Later on, we'll talk about the more atomistic diffusion mechanisms and effects. And hopefully, you've read part of chapter 7, and you know that macroscopic dopant diffusion is described by Fick's first law, which describes how the flux, or the flow of dopant, depends upon the doping gradient.
And I'm showing here a schematic sketch of concentration as a function of distance. And one of these curves-- the one with the higher peak concentration-- occurs at some time called t1, an earlier time. And then the second curve, which has moved out a further distance, corresponds to time t2. And essentially, what we see is that the flux F-- moving in this case to the right-- is equal to minus a constant, called the diffusion constant, times the gradient dC by dx. So as you take the slope along that curve, wherever the slope is very steep you get a large flux, and therefore a large movement in the profile. As you get near the top of the profile, the concentration gradient is getting small, and then the flux is a little bit lower. But the flux at any given point can be given by this equation. When the concentration gradient goes to 0, essentially, the dopant atoms are uniformly distributed, say, in the solid, and the flow would stop according to Fick's first law. So let's go on to slide number 22. Again, this is F equals minus D times partial C by partial x. This is a general sort of flux law which hopefully may be familiar to you. It's analogous to Fourier's law of heat conduction or to Ohm's law for current flow. In this case, the proportionality constant is called the diffusivity D. It has units of length squared per time, or centimeters squared per second. And as it turns out, we'll see that D is related to the atomic hop rate or the jump frequency over some kind of energy barrier. And this energy barrier is associated with the formation and migration energies of mobile species. D is generally exponentially activated. So it depends on temperature in exponential fashion. And in the silicon lattice, by symmetry, D is isotropic. So it doesn't depend on which direction you're diffusing in. And of course, the negative sign in Fick's first law indicates that the flow is down the concentration gradient.
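Fick's first law as just described can be sketched numerically. This is a minimal illustration; the diffusivity and the Gaussian-shaped profile are assumed values, not numbers from the lecture:

```python
import numpy as np

# F = -D * dC/dx evaluated on a discrete grid.
D = 1e-14                                 # diffusivity, cm^2/s (illustrative)
x = np.linspace(-1e-4, 1e-4, 401)         # position, cm
C = 1e20 * np.exp(-(x / 2e-5) ** 2)       # an assumed Gaussian-ish profile, cm^-3

flux = -D * np.gradient(C, x)             # atoms per cm^2 per second

# Flow is down the gradient: rightward (positive) where C falls with x,
# leftward (negative) on the other side of the peak, near zero at the peak.
print(flux[300] > 0, flux[100] < 0)
```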
So just by drawing yourself a profile, you see that dC/dx is negative. So in order to get flux to the right, you need to have the negative sign. Let's go on to slide number 23. This illustrates a derivation of Fick's second law, which describes how the change in concentration in a small volume element is determined by the change in the fluxes into and out of that volume. So take a look at this volume element, which has a certain length to it-- this delta x. And there's a certain concentration change in a certain time period, delta t, in that volume element, which we'll call delta C. There's a flux in coming through the left face, and there's a flux out going out the right face. And just by bookkeeping on the upper left, we can write a simple equation that says the change in the concentration delta C in the time period delta t is just equal to the difference in these two fluxes-- the incoming flux minus the outgoing flux-- divided by the distance, delta x. Instead of writing it in terms of these differences, we can write it mathematically as partial derivatives. We can write delta C by delta t as partial C by partial t. So the time rate of change of the concentration is equal to minus the partial of the flux with respect to x. So it's the gradient of the flux. That's what we're saying with Fick's second law. And now for the flux F we can substitute in Fick's first law. It's just F equals minus D partial C by partial x. So we can substitute that in, and we get this equation in the middle of the slide, which essentially is the differential form of Fick's second law. Now, what we do is we make the assumption at this point that the diffusivity is constant. So it doesn't depend on x. In that case, and only in that case, we can pull the D out of the derivative. The extra term with the partial of D by dx, when we do the product rule, is 0.
And then we can get the equation at the bottom of the page, which says that the time rate of change of the concentration at any given point is a constant D times the second derivative of the concentration with respect to x. And that is Fick's second law. And again, this particular formulation only applies if the diffusivity is a constant. It doesn't depend on x. So let's go on to slide 24 now. And there are a handful of cases, maybe three or four, in which it's possible to write down or to derive relatively simple analytic solutions to the diffusion equation. In all the other cases-- and most of the cases we'll end up using in this course-- we'll have to do numerical solutions. And we'll talk next time about how numerical solutions work. But for now, let's look at a couple of special cases where we can solve this equation by hand. The first case is pictured on slide 24, which is called the steady state. And what that refers to is that, in fact, we have a profile that is not varying in time, in which case we write partial C by partial t equals 0. And so then we have a relatively simple equation: D times the second derivative of C with respect to x is equal to 0. So we can just simply integrate this equation twice, and we end up with a linear profile over distance. So if you just take the equation that the concentration C equals a plus bx, and we differentiate that twice, indeed we get 0. So this says that the steady state solution to the diffusion equation is a linear profile. In fact, when we did the solution of the diffusion of the oxidant through the oxide during thermal oxidation, this is the equation that we actually used. This is implicitly the solution that we were assuming.
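Since numerical solutions come up next time, a minimal explicit finite-difference sketch of Fick's second law with constant D may be useful here. The grid, diffusivity, and time step below are illustrative choices, with the time step picked to satisfy the usual explicit-scheme stability bound:

```python
import numpy as np

# Explicit (FTCS) update for dC/dt = D * d2C/dx2 with constant D.
def diffuse(C, D, dx, dt, steps):
    C = C.astype(float).copy()
    for _ in range(steps):
        # central second difference approximates d2C/dx2; endpoints held at 0
        C[1:-1] += D * dt / dx**2 * (C[2:] - 2.0 * C[1:-1] + C[:-2])
    return C

x = np.linspace(0.0, 1.0, 101)
dx = x[1] - x[0]
C0 = np.zeros_like(x)
C0[50] = 1.0 / dx                  # a unit-dose spike in the middle
D = 1.0
dt = 0.4 * dx**2 / D               # stable: D*dt/dx^2 <= 0.5
C = diffuse(C0, D, dx, dt, 200)

# The dose (integral of C) is conserved as the spike spreads out.
print(np.sum(C) * dx)
```

The spike spreads into a Gaussian-like bump while its integral stays fixed, which is the limited-source behavior the next slides derive analytically.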
And I think you'll recognize from the last few lectures, in the lower pictures, this steady state solution: either on the left side, in the thin oxide regime, we get a straight line-- essentially a constant concentration of the oxidant through the oxide-- or, on the right hand side, we get, again, a linear profile of the oxygen through the oxide during thermal oxidation. And again, in that case it's a concentration profile that's not changing with time. So let's go on to slide number 25 and do the first solution of a case that's a little more complicated than that, which is called the limited source case. So we consider that we have the dopant in this region, and it has a fixed dose Q-- a fixed number of atoms per square centimeter. And we're going to introduce it as a delta function at the origin. And then we're going to let it diffuse out. And as it turns out, if the diffusivity is a constant, it diffuses into the shape of a Gaussian profile. And so, basically, the boundary conditions are that we have essentially a delta function at time t equals 0 and that the dose-- the integral of this delta function-- is a constant. Let's go on to slide number 26. And we find that the solution that satisfies Fick's second law is written down in this equation. And in fact, it's a Gaussian distribution. The concentration as a function of x and time can be given by that constant dose Q, divided by 2 times the square root of pi Dt, times the exponential of minus x squared over 4 Dt. So that's what's known as a Gaussian profile. And the important consequences of this are that, one, of course, the dose Q remains constant. That means then that the peak concentration-- the concentration at the origin-- is going to decrease according to 1 over the square root of Dt over time. So the peak concentration goes down. And the width of the profile, or the diffusion distance from the origin, is going to increase according to 2 times the square root of Dt.
You see that just by looking at the argument of the exponential there. So at this distance-- this distance equal to 2 times the square root of Dt-- the doping concentration will have fallen off by 1 over e. And in fact, we often give this distance a special name. We call it the diffusion length L. We typically write it as twice the square root of Dt, or sometimes just the square root of Dt. It gives us an idea of the width or the broadening of the profile. So let's go on to slide number 27, which shows pictorially the time evolution of a Gaussian profile. The left hand plot is on a linear y-axis, and the right hand plot is on a logarithmic scale, to give you a little better view. So, first, let's look at the left. There are three curves shown here. The red is for time t equals some time t0. And the y-axis is plotting the concentration in a normalized fashion. And the x-axis is in units of diffusion distance. So its units are units of 2 times the square root of Dt0. And you can see, in going from the red to the blue curve, indeed the concentration has dropped by 1 over the square root of t-- because, basically, the blue curve is at 4 times t0, it's dropped by a factor of 2. And it's broadened. And the same thing by looking at the green curve. You see the same type of phenomenon. And on the right, on the semi-log plot, you just get to see a little bit more detail. You get your eye calibrated for what a Gaussian looks like on a logarithmic scale. So you can see, many orders of magnitude down below the peak, what the broadening actually looks like. Because in semiconductor processing, linear scales for dopants are not all that useful-- in fact, we often care about how the dopant falls off over many, many orders of magnitude of concentration. So we typically like to use a semi-log type plot. OK, so we have one solution for one case. Let's go on to slide number 28 and talk about the second case, which is a fixed dose Q, just like we talked about, constant in time.
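The two Gaussian properties just described-- the peak falling as 1 over the square root of t and the dose staying fixed-- can be checked directly from the slide 26 formula. The values of Q and D below are illustrative assumptions:

```python
import numpy as np

# C(x, t) = Q / (2*sqrt(pi*D*t)) * exp(-x^2 / (4*D*t)), from slide 26.
def gaussian_profile(x, t, Q, D):
    return Q / (2.0 * np.sqrt(np.pi * D * t)) * np.exp(-x**2 / (4.0 * D * t))

Q, D = 1e14, 1e-14                      # dose in cm^-2, diffusivity in cm^2/s
x = np.linspace(-2e-4, 2e-4, 2001)      # cm

# Quadrupling the time halves the peak (1/sqrt(t) dependence).
ratio = gaussian_profile(0.0, 1000.0, Q, D) / gaussian_profile(0.0, 4000.0, Q, D)
print(ratio)

# The area under the profile stays at the dose Q.
dose = np.sum(gaussian_profile(x, 1000.0, Q, D)) * (x[1] - x[0])
print(dose / Q)
```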
But now we're diffusing near a surface. Before, we had the origin, and we assumed the silicon was infinite in both directions. Let's say we had this delta function of dose Q as the initial profile, but now we're right near the surface of the silicon wafer. Well, there's a relatively simple trick for solving this. We assume that there's no dopant loss through evaporation or segregation at the surface-- that the dopant is contained in the silicon. If there's evaporation, all bets are off, and we have to solve it differently. So we assume that there's no segregation or evaporation. And we also assume that the annealing takes place over a long time, so that the initial profile can be reasonably approximated by a delta function compared to the final profile. If those two assumptions hold, then we can essentially solve it by assuming that we have virtual diffusion-- that we have a symmetric diffusion with an imaginary delta function of equal dose Q on the left-hand side. So we can solve it by using, essentially, the prior solution, pretending that the medium is infinite. So, in fact, if we go on to slide number 29, that same graph is shown at the top. Effectively, what this means is that we have a dose of 2Q introduced into a virtual infinite medium by symmetry, so that the concentration profile is given, again, just by the same Gaussian we had last time. But in that formula, wherever we had Q, we replace it with 2Q from the prior solution. So the surface concentration now goes as Q over the square root of pi Dt. But it's very similar to what we had last time. And so this is what the equation looks like for diffusing with a fixed dose into a surface where we have no loss from the surface. Again, it's a Gaussian profile. So let's go on to slide number 30. And the third case, essentially, that we can solve analytically is called the case of an infinite source.
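The image-source trick can be sketched numerically as well: an imaginary mirror dose doubles Q in the infinite-medium Gaussian, and all of the real dose then sits at x greater than or equal to 0. Q, D, and t here are illustrative assumptions:

```python
import numpy as np

# Near-surface limited-source profile via the method of images:
# replace Q with 2Q in the infinite-medium Gaussian, keep only x >= 0.
def near_surface_profile(x, t, Q, D):
    return 2.0 * Q / (2.0 * np.sqrt(np.pi * D * t)) * np.exp(-x**2 / (4.0 * D * t))

Q, D, t = 1e14, 1e-14, 1000.0          # illustrative dose, diffusivity, time
x = np.linspace(0.0, 2e-4, 20001)      # only x >= 0 is real silicon

# The full dose Q now lives entirely at x >= 0.
dose = np.sum(near_surface_profile(x, t, Q, D)) * (x[1] - x[0])
print(dose / Q)

# Surface concentration is Q / sqrt(pi*D*t), as stated on the slide.
print(near_surface_profile(0.0, t, Q, D) / (Q / np.sqrt(np.pi * D * t)))
```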
And what this is is essentially an infinite source of dopant, which we can think of as made up of small slices, each diffusing as a Gaussian. So in this plot, looking at the black line, we have a concentration C that is constant everywhere x is less than the origin. And then there's a step function at the origin. And then it's 0 everywhere greater than the origin. So we have the step function at x equals 0. And that's the initial profile in black. And what we're going to find is that the diffused profile looks like the red. The step function gets rounded to the left of 0. And some of that dopant then has diffused into the right-hand side at x greater than 0. And it gives us that diffused profile. So how are we going to solve for this? Well, actually, we do it by using the solution we had obtained previously, essentially by a linear superposition of solutions for each of these thin slices. So we break up this infinite source on the left-hand half plane into a series of very, very small thin slices, each of which has a certain dose. And its dose, by definition, is just the concentration C times delta x-- the width of that little thin slice. And in fact, after some time t, we know how to write down for that little slice what the profile looks like. In fact, it's a Gaussian. So, now, if the Gaussians associated with the diffusion of all these slices are added up, we can then find the diffused profile. And that's exactly what we're doing here. So the equation at the bottom of slide 30 shows that the concentration in this infinite source case can be given by the sum of all those Gaussians. So we have the sum from i equals 1 to n, where n is some large number. The C delta xi comes from the dose-- remember, there was a Q in front of the Gaussian-- and it multiplies the exponential of minus the quantity x minus xi squared over 4 Dt, because we're sliding the position of this particular thin slice along the x-axis.
So we're summing up all these Gaussians at the bottom of slide 30. So, in fact, analytically, or in an exact sense, the solution which satisfies Fick's second law is written down at the top of slide 31. The concentration is actually equal to the concentration C prime over 2 times the quantity in square brackets, 1 minus the error function of the argument x over 2 times the square root of Dt. And we can write this as C sub s times the complementary error function of x over 2 times the square root of Dt, where the second and third equations give you the definition of what we mean by the error function. The error function of z is just equal to 2 over the square root of pi times the integral from 0 to z of this exponential-- it's the integral of a Gaussian, basically. So the error function is the integral of the Gaussian. The complementary error function is 1 minus that. And then these error functions and complementary error functions have been tabulated. So in that sense, it can be calculated analytically. So we know that the shape of this profile can be calculated according to the complementary error function. So let's look at slide number 32, where, again, the error function solutions are made up of a sum of Gaussian delta function solutions. And what you see in this plot is that the initial profile is shown in the dashed line in green, and that you have the subsequent profiles at time t equals t0 in black, 4t0 in blue, and 9t0 in red. And the dose beyond x equals 0 continues to increase with annealing time in this infinite source sort of solution. So let's go on to slide 33. We can take this as another special case, in fact, by just looking at the plot in the upper part of slide 33. What we see is that at x equals 0, the concentration is actually held fixed. So if we have a situation where we have a constant surface concentration, then, in fact, the solution to the diffusion equation is just the right-hand side of the above figure.
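A quick way to convince yourself that the erfc expression really solves Fick's second law is to plug it into the PDE with finite differences, using the `erfc` in Python's standard `math` module. Cs, D, and the probe point below are illustrative assumptions.

```python
import math

Cs, D = 1e20, 1e-14     # assumed surface concentration (cm^-3) and diffusivity (cm^2/s)

def C(x, t):
    """Infinite-source (constant-surface) solution of Fick's second law:
    C(x, t) = Cs * erfc(x / (2*sqrt(D*t))), with Cs = C'/2."""
    return Cs * math.erfc(x / (2.0 * math.sqrt(D * t)))

# Finite-difference check that the profile satisfies dC/dt = D * d2C/dx2
x, t, h, k = 1e-5, 3600.0, 1e-7, 1.0
dCdt = (C(x, t + k) - C(x, t - k)) / (2.0 * k)
d2Cdx2 = (C(x + h, t) - 2.0 * C(x, t) + C(x - h, t)) / (h * h)
print(dCdt, D * d2Cdx2)   # the two sides of the diffusion equation should agree closely
```

At x = 0 the expression gives exactly Cs for all times, which is the held-fixed surface concentration seen on slide 33.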
An example of this might be the case where we're doing diffusion from a gas ambient into the solid, where the gas concentration is above the solid solubility of the dopant. Then, in that case, at the surface of the silicon wafer the concentration of the dopant is fixed at the solid solubility. So it's constant. So if we take just that right-hand solution in the above figure, we can write it down from the previous solution. The concentration is just C sub s, which is the surface concentration, which is a constant, times the complementary error function. And in fact, you can integrate this equation to find the dose on the right-hand side. And the dose is given by this integral, which can be done integrating from 0 to infinity. And you can do that. And it turns out that the dose Q is equal to 2 C sub s over the square root of pi times the square root of Dt. So, again, now we see that this dose, or the integral of these curves on the right-hand side, is increasing with time according to the square root of Dt. So we're getting a higher and higher dose into the sample. So let's go on to slide number 34. And here we are graphically comparing, on the left and the right-hand side, the two different types of classical processes that we talked about in terms of their diffusion profile shapes. On the left is the predeposition case where we have, say, a constant surface concentration, assuming the pre-dep was being done by a gas phase in-diffusion. And there are two plots shown on the left. The upper left is a plot on a linear scale. The lower left is a plot on a semi-log scale. But in either case, what you're looking at is a complementary error function. And you can see that the surface concentration is a constant, normalized at C over C s equals 1. And at the different times, you can see that twice the square root of Dt is 0.1, 0.5, and 1 micron. The area under these curves is increasing. And in fact, it's increasing by this factor square root of Dt.
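The dose integral quoted above, Q = 2*Cs*sqrt(Dt/pi), can be checked by integrating the erfc profile numerically. The Cs, D, and t values are illustrative assumptions.

```python
import math

# Illustrative (assumed) numbers: surface concentration in cm^-3, D in cm^2/s, one hour
Cs, D, t = 1e20, 1e-14, 3600.0
L = 2.0 * math.sqrt(D * t)                     # characteristic depth, cm

# Midpoint-rule integral of Cs * erfc(x / L) from 0 out to 20 characteristic depths
n, xmax = 20000, 20.0 * L
dx = xmax / n
Q_num = sum(Cs * math.erfc((i + 0.5) * dx / L) for i in range(n)) * dx

# Closed form quoted in the lecture: Q = 2 * Cs * sqrt(D*t / pi)
Q_formula = 2.0 * Cs * math.sqrt(D * t / math.pi)
print(Q_num, Q_formula)
```

Since the closed form scales as sqrt(Dt), doubling the pre-dep time raises the incorporated dose by a factor of sqrt(2), which is the behavior described on slide 34.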
So the longer you would do the pre-dep, the more dose you would deposit into this silicon surface. On the right-hand side, we see instead the Gaussian profiles. This would be the case for a drive-in, which has a constant dose. And so you see what's happening over time: at the shorter time, we have a certain peak concentration. That peak concentration is then falling or dropping for the second profile. And the profile is broadening, and then it falls again, and the profile broadens further. So the right-hand side is for Q equals a constant-- the integral is constant-- and the left-hand side is for the surface concentration being a constant. That's just to get your eyes calibrated for a complementary error function versus a Gaussian type of solution. Let's go on to slide number 35 and talk a little bit about dopant diffusion coefficients themselves. And we're just going to talk first about what we call intrinsic dopant diffusion, which happens in the case when the dopant concentration is less than n sub i. So the semiconductor is considered to be intrinsic. And generally, we can write these intrinsic diffusion coefficients in an Arrhenius-type relationship: the diffusion coefficient is just some constant D 0, written here, times the exponential of minus EA over kT. And this chart, which is taken from your textbook, shows some rough numbers for what D 0 looks like in units of centimeters squared per second and what the activation energy looks like for a couple of different species that are common dopants in silicon. So, for example, if you look at boron, it has an activation energy here of 3.5 electron volts. And the prefactor is about 1. The thing to note is that n sub i-- the intrinsic carrier concentration-- is very large at process temperatures. So, actually, intrinsic diffusion conditions apply under many different conditions or cases. So, for example, at 1,000 degrees C, n sub i is roughly equal to about 7 times 10 to the 18th.
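The Arrhenius form is easy to evaluate directly; here is a minimal sketch using the rough boron numbers quoted in the lecture (D0 about 1 cm^2/s, EA about 3.5 eV).

```python
import math

K_B = 8.617e-5                 # Boltzmann constant, eV/K

def D_intrinsic(d0_cm2_s, ea_ev, temp_c):
    """Intrinsic dopant diffusivity in Arrhenius form: D = D0 * exp(-EA / kT)."""
    return d0_cm2_s * math.exp(-ea_ev / (K_B * (temp_c + 273.15)))

# Rough boron-like numbers from the chart: D0 ~ 1 cm^2/s, EA ~ 3.5 eV
print(D_intrinsic(1.0, 3.5, 1000.0))   # diffusivity at 1,000 degrees C, cm^2/s
```

With a 3.5 eV activation energy, a 100-degree temperature change moves the diffusivity by roughly an order of magnitude, which is why furnace temperature control matters so much.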
So this diffusion coefficient written here in this simple chart would apply anytime the concentration of that dopant at 1,000 degrees is less than about 7 times 10 to the 18th. You can use this constant diffusion coefficient. When we get above that-- when the doping concentration is larger than n sub i-- we'll talk next time about some interesting Fermi level effects that come into play as the point defect concentrations become modulated by the carrier concentrations of the diffusing species themselves. So we go on to slide number 36. This is just a graphical representation. It's a plot of the diffusivity in centimeters squared per second as a function of 1,000 over T. So it's an Arrhenius-type plot. On the upper x-axis, you can read the temperature if you prefer that. And what you can see for these dopants right off the bat, looking at them, is that there's a pretty large difference or discrepancy between the so-called fast diffusers in silicon and the slower ones-- say, the fast diffusers being boron and phosphorus among the common dopants. Boron, again, the only really available p-type dopant, is relatively fast. It can have up to a factor of 10 or 20 or 30 faster diffusion coefficient than the slower diffusers such as arsenic or antimony. So this gives you a rough idea that, when we talk about dopant diffusion, what we're going to have to worry about a little bit more would be the fast diffusers-- say, boron. The other thing I want to point out with respect to this plot on slide number 36 is that earlier versions of the text had an error in this corresponding figure. And so on the website, we have posted the errata. And you'll be able to see that, to make sure that if you're reading curves off the plot-- reading values off the plot-- you have the right values. One way to check that, of course, is to go back to slide 35 and actually compute the diffusion coefficients directly with a calculator. So let's go on to slide 37.
And I'd like to summarize this introduction to dopant diffusion. We've talked about how the placement of doped regions is critical because it determines many of the characteristics of short-channel MOSFETs. That's why we spend so much time calculating dopant diffusion in great detail, as we'll do over the next three or four lectures. It turns out there's a design tradeoff between the series resistance of a MOSFET-- which means you would like to have deeper source-drains to minimize the series resistance-- and the short-channel effects, such as the control of Vt, which would dictate that you have a shallower source-drain. So this is a fundamental tradeoff. And therefore, channel doping profile engineering is a way of compromising in that design tradeoff. And so channel doping profiles need to be controlled very accurately. Beyond that motivation, we talked about some simple analytic solutions: the time evolution of a doping profile, if the case is simple, is governed by Fick's laws-- the so-called diffusion equation. There are a couple of cases where there are analytic solutions. We talked about the diffusion of a Gaussian profile with a fixed dose, or diffusion of a complementary error function, which applies for a constant surface concentration. And finally--
MIT 6.774 Physics of Microfabrication: Front End Processing, Fall 2004. Lecture 6: Oxidation and the Si/SiO2 Interface; Deal-Grove Model; Thin Oxide Models.

JUDY HOYT: What we're doing right now is-- hopefully, you're all reading chapter 6, which covers the process of thermal oxidation. Last time, we talked about a general introduction to thermal oxides in silicon CMOS. And we also talked about the metal oxide silicon, or semiconductor, capacitor and how simple electrical C-V measurements can be used to understand the quality of the oxide and the charges that exist in that structure. This time, we're going to cover a basic model called the Deal-Grove. This is after the two people, Bruce Deal and Andy Grove, who came up with this linear parabolic oxidation model. It really remains the basic model used today. And then we'll talk about examples of how to use it. And we'll finish up with other models of planar oxidation that are very useful in the very thin oxide regime. So I'm just curious-- how many people here have heard about the Deal-Grove oxidation model? So about maybe a little less than half or so. OK, so that works reasonably well. We go through it because it's the basic model. And then a lot of things after that tend to be corrections to the Deal-Grove model. So let's go on to slide number 2. And that shows the basic schematic setup for how Deal and Grove derived their model. What it shows you, turned sideways, is that you have a gas stream here on the left that's on top of the wafer. There is a thermal oxide being grown here in this white region, the silicon dioxide. And the pink on the right is meant to represent the silicon wafer. And just to give you an order of magnitude scale: the silicon wafer is thick, of course-- it's hundreds of microns. This oxide that we're talking about growing is on the order of 0.01 micron-- 0.01 micron being about 100 angstroms or 10 nanometers-- up to maybe a micron.
That would be the range of reasonable thicknesses which you would grow. Once you start growing more than a micron, as you'll see today-- if you want to grow one or two or three microns by a thermal process, it gets pretty darn slow. The kinetics are such that it becomes really not very practical. So you don't see too many people growing oxides much thicker than a couple of microns. If they need oxides that thick, they do a different process called chemical vapor deposition. So the basic idea of this model, though, is that we have a gas stream here. And there's a concentration C sub G of the oxidant. Let's say it's oxygen. Could be oxygen, could be water vapor. But there's some concentration in the main gas flow, in the main gas stream. As you get closer to the surface of the wafer, which has oxide on it, it has some concentration C sub S that's right at the surface of the wafer. There may be some boundary layer or whatever that causes a gradient across here. But right at the surface of the wafer, it's at C sub S in the gas phase. In the solid, right inside at the very surface in the solid oxide, this oxidant has some concentration C sub O right at the surface, or C 0 if you would like. And then there is some gradient of the oxidant, say of the oxygen molecule. There's a gradient throughout the oxide. And at the interface, it has C sub I, the I standing for the interface. It has that concentration. And the basic chemical reactions that are happening now at the interface-- not at the oxide surface-- are shown here. Silicon plus oxygen going to SiO2, or silicon plus moisture, H2O, going to SiO2. And in the equations that we're going to derive, the simple model, there are three fluxes involved. There's a flux F1 of oxidant that has to be transported in the gas phase from the major portion of the gas phase in the center of the tube to the surface of the wafer. That's one flux. There's a flux F2 in the oxide.
This is the diffusion of the oxidant through the oxide. And there's finally a flux F3, which is what's happening right at the interface as the oxygen is incorporated into the growing oxide. So let's go on to slide 3. Again, on the upper diagram is exactly what I've just shown you. For those of you who have forgotten, on the upper right, I'm reminding you what the definition of flux is. What is a flux? It's the number of particles-- in this case, I'm talking about oxygen molecules or water vapor molecules-- crossing a unit area, so a unit area in this direction here, per unit time. So there's a flux, in this case, going from the left to the right. In fact, there are three different ones. So what Deal and Grove did was write down three very simple equations, first order flux equations, which describe these three series parts of the process. Again, they're all happening in series. So there's a flux, F1, that they wrote down using Henry's law. So Henry's law just says, in the gas phase transport, if you have a concentration difference or a gradient here, from this point here in the center of where the gas flow is to the surface, C sub S, then you can write the flux as just some constant-- some Henry's constant, a constant number-- times the difference in the concentration between the surface and where the concentration is CG. So that's a simple equation from Henry's law. A second equation they wrote down was diffusion through the oxide. And this comes from Fick's law, Fick's first law. Hopefully, some of you have seen this, maybe not in this particular form. When we talk about diffusion, it'll become very obvious. But this is Fick's first law for diffusion through the oxide. The flux F2 is just a constant number, which we call the diffusivity, D, times the gradient of the concentration, dN by dx, where N is generally just the concentration of whatever is diffusing. And you can write that.
If this is a straight line, writing the gradient of a straight line is very simple. It's just C at the surface, C 0, minus C at the interface, divided by the thickness of the oxide. So it's a simple equation to write diffusion through the oxide. And finally, F3 is the flux at the interface. And this is just a surface reaction rate. So we write this final flux as K sub S-- a constant, which has to do with how rapidly the reaction is taking place at the interface-- times the concentration of oxygen or water vapor right at the interface. So three equations for each of these fluxes. So we go on to slide 4. And what they're assuming is a steady-state condition. And what we mean by steady state is that things-- these profiles-- are not changing with time. So if I just go back one second to slide 3: this profile is-- we're not in the transient phase. This profile has now been established as a linear profile in a steady state. It's not changing with time. You've established a constant profile. So in those conditions, then, we can write down that the three fluxes in series are all equal to each other. We don't have any buildup or any sink. So F1, the flux through the gas, has to be equal to F2, the flux diffusing through, has to be equal to the reactive flux. So in that case, then, it's relatively simple. If you sit down and you equate those three equations, you can derive this formula here, equation number 4, for the concentration C sub I of the oxygen molecules or the water vapor at the interface. You can write it as some constant, C star, which we'll talk about, where C star is the solubility of the oxidant in the oxide, divided by a three-term expression: 1, plus the ratio of K sub S to H-- so that tells us something about the ratio of the surface reaction coefficient to Henry's coefficient H-- plus K sub S X naught over D. This is involving a diffusivity term.
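Equation 4 is simple enough to sketch in code. The numbers in the sanity checks below are made-up, dimensionally consistent values, just to show the limiting behavior.

```python
def interface_conc(c_star, ks, h, d_ox, x0):
    """Equation 4: steady-state oxidant concentration at the Si/SiO2 interface,
    Ci = C* / (1 + ks/h + ks*x0/D),
    obtained by equating the three series fluxes F1 = F2 = F3."""
    return c_star / (1.0 + ks / h + ks * x0 / d_ox)

# Sanity checks with made-up numbers:
# fast gas transport (large h) and zero oxide thickness -> Ci approaches C*
print(interface_conc(1.0, 1.0, 1e6, 1.0, 0.0))
# a very thick oxide (large x0) starves the interface -> Ci approaches 0
print(interface_conc(1.0, 1.0, 1e6, 1.0, 1e3))
```

The two limits are exactly the reaction-controlled and diffusion-controlled regimes that come up later in the lecture.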
And so what we are generally going to do in this-- we can actually simplify this equation by recognizing that we can neglect F1, the gas phase transport, which is usually a very good approximation. It's very fast. So if I go back to this slide number 3 briefly: this flux is happening much, much faster than these two processes, than the diffusion through the solid or than the reaction, as it happens to turn out in this case. That won't always be the case for other processes. When we talk about epitaxial growth or chemical vapor deposition, it won't always be the case that you can ignore the gas phase transport. But as it turns out, the surface reaction is slow enough, and the diffusion through the oxide is slow enough, that this is always very fast by comparison. And so the beautiful thing about oxidation is you can ignore gas phase transport. And that's why you can design oxidation tubes to be very simple. You just stand all the wafers up and flow the oxygen through. And you don't have to care about exactly how the gas flows. When we talk about other processes like epitaxial growth, there's no such luxury by any means. In that case, this flux is extremely important. And in fact, it controls the process many times. But we're very lucky in oxide growth that these two processes F2 and F3 are slow. So we can ignore F1. And that just ends up simplifying the equations to a certain extent. And so if I rearrange this a little bit, we get equation number 5 here for-- we're really interested in C sub O here, the concentration of the oxidant at the surface. So let me combine now this equation number 4, which tells me what the concentration of water or oxygen is at the interface, with equation 3, just going back again, which just tells me the surface reaction rate flux.
And we get something like this that says that DX by DT-- so that's the rate of change of the thickness of the oxide-- DX by DT, we know we can write as a flux divided by a concentration. So N1 here-- does anybody have an idea of what N1 is, just from dimensional arguments? I want to get DX by DT. Flux tells you the number of atoms per square centimeter per unit time. So N1 has to have the units of a concentration per cubic centimeter, basically, so that it works out, even just dimensionally. Of course, I don't have any chalk. But that's what happens. If you work it out dimensionally, flux has a number of atoms per unit time per square centimeter. N1 is the number of oxidant molecules incorporated per cubic centimeter of grown oxide. Dividing the one by the other, you end up with something that looks like centimeters per unit time, or length per unit time. So this is how quickly the oxide is growing-- the growth rate or the velocity, which is what we're interested in. It can be written like this. It's the product of K sub S, the surface reaction rate, times C star. C star is the solubility of the oxidant in the oxide. So that's as much as you can put in at that temperature, OK? You might have more in the gas phase than that. But that's how much will go in at that temperature. C star divided by this denominator, 1 plus K sub S over H plus K sub S X naught over D. So that's telling us the rate of growth of the oxide. Now, we have it in terms of certain fundamental physical parameters. So let's go on to slide 5. That was a simple, if you want, differential equation, DX by DT-- it just says how rapidly it's growing, what the velocity of the growth is. And we integrate that. If you look in the text, it has a little bit more detail on how you do the integration. But you integrate it between 0 and some time, and between some initial thickness X sub I and your final thickness, or the thickness at any point in time, X0. So here, X0 is the thickness of the oxide right at the time T.
And XI is whatever thickness you started with-- could be 0, whatever thickness you started with at the start of the oxidation process. So by doing that integration, you end up with-- if we solve for time-- a two-term expression: something that involves X squared, thickness squared, and something that involves a linear term, thickness to the first power. So this is the origin, the origin of this parabolic/linear model. Here's a parabolic term. It goes like the thickness squared. Here's a linear term. And the multiplier B here, the coefficient associated with the parabolic term, looks like this. It looks like 2 times the diffusion coefficient through the oxide times the solubility in the oxide, C star, divided by N1. So the parabolic process has to do with diffusion through the oxide. The linear term, the term that multiplies X0, can be simplified here to something that looks like C star K sub S over N1. And again, I'm ignoring this 1 over H because this is going very rapidly. So H is very large compared to K sub S. So I can ignore this term. So this is the linear rate constant, which tells you something about the surface reaction rate. So the linear rate constant has to do with the reaction. The parabolic has to do with the diffusion. Notice that they both depend, however, on C star, which is the solubility of either the oxygen or the water in the oxide. So let's go on to slide number 6. These rate constants, B and B over A, have some physical meaning. We just talked about that: diffusion of the oxidant and interface reaction. And we find experimentally that we can write them as Arrhenius relationships. They're exponentially dependent-- E to some activation energy, E to the minus E1 over kT or E to the minus E2 over kT, times a prefactor. The prefactor here is C1 for B, and for B over A it's C2. And these rate constants have been studied for many years. People have measured them or tried to extract them experimentally.
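The integration step from the growth rate to the linear-parabolic law can be checked numerically: the rate law implied by x^2 + A*x = B*(t + tau) is dx/dt = B/(2x + A), and stepping it forward should land back on the implicit law. The B, A values below are illustrative, not from the tables in the text.

```python
# Forward-Euler check that dx/dt = B / (2x + A) integrates to the
# linear-parabolic law x^2 + A*x = B*(t + tau).
# Illustrative rate constants: B in um^2/hr, A in um, starting oxide xi in um.
B, A, xi = 0.4, 0.3, 0.0
tau = (xi ** 2 + A * xi) / B       # effective time to grow the starting oxide

x, t, dt = xi, 0.0, 1e-4           # step size in hours
while t < 2.0:
    x += dt * B / (2.0 * x + A)    # growth rate from differentiating the law
    t += dt

lhs, rhs = x ** 2 + A * x, B * (t + tau)
print(lhs, rhs)                    # the two sides should agree closely
```

The agreement of the two printed values is the numerical version of the integration done on slide 5.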
And this is a chart, a table, I took directly out of your text, which shows the ambient-- so that's the type of gaseous environment the wafer is in-- and the rate constants, B and B over A, and how to calculate them. So in dry oxygen, you're flowing O2 in the tube, but there's no moisture around. To calculate the B rate constant, you need to know C1, and you need to know E1, the activation energy. Similarly for wet O2 and for H2O, a moisture environment. So, by the way, just be careful when you're going to use these numbers. These are for 111 silicon oxidation. So that's the oxidation of a wafer with a surface orientation of 111, at one atmosphere of pressure. If you want to know the values for 100 silicon, then you take the C2 values-- the B over A rate constants-- and you divide by 1.68. So it's slower for a 100 interface. And if you go back and look at your crystallography and look at the number of atoms per square centimeter, it actually makes some sense, because on a 111 interface, we have the largest number of atoms per square centimeter available for the oxidation. So the rate constants are going to be faster. But notice, the activation energy doesn't change as you change orientations. For the rate constant B over A, the surface reaction rate, it always has an activation energy of close to 2 eV. So the activation energy for B over A is roughly constant. It just depends on the surface reaction rate. For B, interestingly, the activation energy varies with the process. So depending on whether you're in dry O2-- here, you get 1.23 eV. Here, for wet O2, this is 0.7 eV. And for moisture, for water, it's 0.78 eV. So it actually varies with the process. And that makes some sense, because what did we say B depends on? It depends on the diffusion of the species through the oxide. Well, a water molecule, if that's what's diffusing through the oxide, is going to diffuse at a different rate, or maybe have a different solubility, than just an oxygen molecule.
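Here is a sketch of how you might code up that table. The activation energies match the ones quoted in the lecture; the prefactors are representative values of the kind tabulated in process texts and should be checked against the actual table on the slide before being used for anything real.

```python
import math

K_B = 8.617e-5    # Boltzmann constant, eV/K

# Representative <111> Arrhenius parameters (prefactor, activation energy in eV).
# B is in um^2/hr, B/A in um/hr.  Prefactors are illustrative -- verify against the text.
RATE_CONSTANTS = {
    "dry O2": {"B": (7.72e2, 1.23), "B/A": (6.23e6, 2.00)},
    "H2O":    {"B": (3.86e2, 0.78), "B/A": (1.63e8, 2.05)},
}

def rate_constant(ambient, which, temp_c, orientation="111"):
    """Evaluate B or B/A at a furnace temperature in degrees C."""
    c, ea = RATE_CONSTANTS[ambient][which]
    val = c * math.exp(-ea / (K_B * (temp_c + 273.15)))
    if which == "B/A" and orientation == "100":
        val /= 1.68           # (100) surfaces react more slowly, per the table's note
    return val

print(rate_constant("dry O2", "B/A", 1000.0))   # linear rate constant, um/hr
```

Notice the 1.68 correction only touches B/A: the diffusion-controlled constant B doesn't care about the crystal orientation of the interface.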
So we do see a difference in the activation energy for B. And on slide 7-- obviously, you can do this yourself. You just sit down with those equations on a semi-log plot. So this is a log scale on the y-axis. And this is 1 over T, or 1,000 over T. This is a standard Arrhenius plot you've made before. So on a semi-log plot, any kind of exponential relationship will appear as a straight line, because it's log paper. And these are the different constants. Here we are for water: this is the B over A constant, and this is the B constant. And these are for 111. So you just literally plot the values that are in the table. One thing you see right off the bat, just from this plot: these two lines on top that are faster, that are larger, are for oxidation in water vapor as opposed to dry. So right off the bat, you know the rate constants are higher for water than dry. So if you want to grow an oxide faster, you would grow it in a moisture environment, not in dry O2. And that's typically the case. People grow thick oxides using a wet environment, thin oxides using a dry environment. OK, let's just go on to slide number 8 and reformulate this. You can do some mathematical manipulations on the linear parabolic law as I showed it earlier. I've just repeated that equation, where on the right-hand side, we've solved for T. On the left-hand side is the oxide thickness. There are alternate ways to express this law. And in fact, here's a reformulation of it shown in the second equation on this slide. What you see is that we're now expressing it as X0 squared over B, plus X0 divided by B over A, is equal to T plus tau, where tau now is defined as the effective time that it would have taken to grow the initial oxide thickness X sub I. So this is just obtained by a mathematical manipulation of the first equation.
So if you set tau equal to XI squared plus A times XI, all divided by B-- just mathematically, you do that-- then you get a mathematical equivalence of the second equation to the first equation. Sometimes, you also want to explicitly have an equation that shows you what X0 is, the oxide thickness. That's not what's shown in the upper two equations-- there, X0 is implicit because it appears in two different terms. If you want to solve explicitly for the oxide thickness, you can see this middle equation: X0 looks like this. It's got a square root dependence on time. And, again, tau has the same definition, involving XI squared-- it's just the effective time to grow the initial oxide in the Deal-Grove model. OK, let's go on to slide number 9. And again, this reformulation-- I'm just repeating the equations from the previous slide, so you don't have to remember them. Again, XI and tau account for any oxide that was present at the start of the process. When you first push the wafer in, you might have had other processes that have taken place. And they might already have left some oxide present there. So we can take this equation and differentiate it. And let's say we're interested in the oxidation rate, not the absolute thickness versus time. How is the rate changing? How fast is it growing? Well, this is the oxidation rate. By differentiating, you can see it's given by B divided by this quantity, 2 X0 plus A. So this gives you an idea of the rate-- how rapidly is it growing? So these equations are very general. And what they describe is the growth kinetics for oxidation. But we have to remind ourselves that this only applies for a certain basic case. So this means planar oxidation-- I'm just oxidizing a flat surface. We'll talk next time about what happens when you have a surface that has structure on it; the equations need to be modified. It's planar and it's unpatterned. So this is just an unpatterned surface.
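The explicit form with tau described above can be written down in a few lines; the rate constants in the example call are illustrative, not from the table.

```python
import math

def oxide_thickness(t, B, A, xi=0.0):
    """Explicit Deal-Grove solution:
    x0 = (A/2) * (sqrt(1 + (t + tau)/(A^2/(4B))) - 1),
    where tau = (xi^2 + A*xi)/B is the effective time that would have
    grown the starting oxide thickness xi.  Units: t in hr, B in um^2/hr,
    A and xi in um."""
    tau = (xi * xi + A * xi) / B
    return 0.5 * A * (math.sqrt(1.0 + (t + tau) / (A * A / (4.0 * B))) - 1.0)

# Illustrative rate constants: B = 0.4 um^2/hr, A = 0.3 um, 500-angstrom starting oxide
print(oxide_thickness(2.0, 0.4, 0.3, xi=0.05))
```

Two useful checks: at t = 0 the formula returns exactly xi, and plugging the result back into x0^2 + A*x0 = B*(t + tau) closes the loop with the implicit form.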
This assumes that the silicon is lightly doped, that you can use these numbers. Heavily doped oxidation, we'll talk about next time. A simple ambient such as oxygen, dry O2 or water, is assumed. If you're going to add other gases, nitrous oxide and other things that people do to grow high K, these particular equations don't apply exactly. And these equations work when the final oxide thickness is larger than about 20 nanometers, about 200 angstroms. They work quite well. But so that's a certain number of limitations, still very useful because what we're going to do from here is now make corrections. Right off the bat though, when you see this 20 nanometer number, what do you think of? I mean, gate oxides today, just from your reading of the ITRS and high performance devices, what are they? Are they less than or greater than 20 nanometers? Less than, significantly less than for most high performance devices. So this tells us off the bat, if we want to use Deal-Grove to model a standard gate oxide thickness that someone might be growing 100 angstroms or less, it's not going to be perfect. We're going to have to make corrections to this. And we'll talk today about those thin oxidation models. But let's go on to slide 10. And again, just to make a different way of looking at these same equations, I've repeated them again at the top so we don't have to memorize them. But what we can do now is to take two limiting cases. One limiting case I'm going to say is when the oxide is relatively thin. So X0 is relatively small. But it turns out this term here, the linear term X0 divided by B over A is going to dominate over this X0 squared. This is going to become a smaller term. So we can ignore this first term. And we end up that X0 is proportional to B over A times time. So it's essentially, in thin oxide, we have the equation is linear in time. So it's just we have a constant oxidation rate. And the thickness just increases linearly, according to this Deal-Grove model. 
For thicker oxides-- and we'll define later what we mean by thicker-- at some point, this term, this X0 squared over B, starts to dominate over the linear term. We can ignore the linear term. For long times, for very thick oxides, it goes like X0 squared equals a constant times T. So it's basically a parabolic term. So the oxide thickness starts increasing like the square root of time. So initially, it's going up linearly. And then, when it becomes thick enough, it starts increasing more like a square root kind of function. And at that point, it's thick enough that diffusion through the oxide is really starting to limit you. Mathematically, if you prefer to think mathematically, you can make yourself a plot. This is actually a log-log plot-- log on the y, log on the x. And it's kind of normalized. What it is a plot of is X0 divided by A over 2-- so we've normalized the y-axis-- and T plus tau divided by this constant, A squared over 4B, to make it unitless. And what you see-- this solid line here is a plot of X0 squared plus A X0 equals B times the quantity T plus tau. So that's the exact Deal-Grove model. At short times, at low thicknesses, you can see it approaches this linear rate-constant equation, this linear equation. Remember, on a log-log plot, this has a certain slope. And then for longer times, much higher thicknesses, it approaches a straight line, this dashed line, that has half the slope. Again, linear here versus square root, which has half the slope. A different way of looking at it-- I'd actually like you to think a little more physically, not just look at the equations and say, which term is big and which is small? In fact, it's much easier to remember the limiting cases in a physical sense. And that's what's being shown on slide 11. So if you look at the case of a thin oxide-- it's relatively thin-- what's happening physically is diffusion.
The oxide is so thin that the diffusion process happens very rapidly compared to the rate at which those atoms can react at the surface. So when it's thin, the surface reaction is what limits; it's the slowest step. Any of these serial processes always ends up being limited by the rate-limiting step, which is the slowest step. So clearly, when you're thin, getting through the oxide takes no time at all. What's limiting you is the rate at which you can react. And we know that is a linear process. It just depends on K sub S. Now, if you have a thick oxide, then the diffusion process is what's rate-limiting. The reaction at the surface can be relatively fast compared to the diffusion rate. So diffusion is rate-limiting. And that process ends up having a square root type dependence on time. In both cases, though, just for these little diagrams I drew, C star, remember, is the solubility of the oxidant in the oxide. So in the thin case, you diffuse very rapidly through. You have almost no gradient in the oxide. You're getting through very rapidly just by virtue of it being so thin. In the thick case, C star, the solubility, is what you have at the surface. At the interface, the concentration is going almost to 0 because, again, you're rate-limited by the rate of diffusion. You have a large gradient. So the flux through the oxide is being driven by this gradient. So those are the two limiting cases. One depends on diffusion. The other is limited by surface reaction. Slide 12 is just a little example of, if you're an experimentalist, how do you sit down and extract these parameters? How did people in the old days used to do this? Well, it's pretty simple, experimentally. You just oxidize a bunch of wafers at the same temperature for different times. And you plot the measured oxide thickness versus time. You'll see this linear dependence at first. And then it'll start to bend over where it's square root.
An easy way to do it is to plot X0 instead as a function of time divided by X0, that is, t over X0. And you'll find, if you just manipulate those simple equations, that the slope is the parameter B, and the y-intercept is the parameter minus A. So for the Deal-Grove model, it's very simple experimentally, if you do this kind of plot, to extract those two parameters. These are some actual calculations of the oxidation rates in dry oxygen on slide 13 using the Deal-Grove model. This is for 100 silicon now. So 100 silicon is much more likely what you're going to be using in a silicon manufacturing process or probably in your research. So it's a plot, on a linear scale on the left axis, of oxide thickness as a function of time. And what you do see is a linear regime here, say, when you're at thin oxide, say below 0.1 microns or maybe half of that, 500 angstroms: the lines look quite linear. And then they start bending over. So they're followed by this sort of parabolic behavior where they grow less rapidly. And you can also see, look at the thicknesses, as we mentioned before. Dry oxides, even at a high temperature of 1,100 degrees, which is quite high for a furnace-- you're really limited. For practical times-- you don't want to be leaving the wafers in there any more than a few hours, four hours, six hours, whatever-- you're limited to thicknesses on the order of 1,000 to 2,000 angstroms, from a practical point of view, where people grow dry oxides. Of course, you could leave it in the furnace for days. But that's not very economical in terms of use of the furnace, use of electricity, and all the resources. So if you want to grow an oxide in this thickness range, you use a dry oxide. If you want to go well above that, you should be going to wet oxidation because the oxidation rate constants are so much higher. In fact, on the next slide, slide 14, you can see the oxide thickness versus time for a wet ambient, a moisture ambient, using Deal-Grove.
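The extraction trick can be sketched in a few lines. Rearranging X0 squared + A X0 = B t (taking tau = 0) gives X0 = B (t/X0) - A, so a straight-line fit of X0 against t/X0 recovers B as the slope and minus A as the intercept. The constants used to generate the "measured" data here are arbitrary illustrative numbers.

```python
import numpy as np

# Synthetic "measured" data generated from Deal-Grove with known constants
# (A_true and B_true are purely illustrative, in microns and hours).
A_true, B_true = 0.2, 0.05
t = np.linspace(0.5, 20.0, 10)    # oxidation times, hours
x0 = (A_true / 2) * (np.sqrt(1 + t / (A_true**2 / (4 * B_true))) - 1)

# X0^2 + A*X0 = B*t  =>  X0 = B*(t/X0) - A: fit X0 versus t/X0.
slope, intercept = np.polyfit(t / x0, x0, 1)
print(slope, -intercept)   # recovered B and A
```

With real data the points scatter, but the same linear fit still hands you B and A directly, which is how those rate constants were historically tabulated.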
And now, look at the time of, say, two or three hours: instead of growing 1,000 angstroms or so at 1,100 degrees, you're growing a micron here. After about two hours at 1,100 degrees in a wet ambient, about a micron-- so much faster than dry oxidation, orders of magnitude faster. And as it turns out, the physical reason why is that the solubility, C star, of water vapor is much higher than the solubility of oxygen, of the oxygen molecule, for various reasons. Just because of the configuration of the molecule and the size of the interstitial spaces, water vapor is just easier to get in there. So it has a much higher solubility. So that whole concentration can be elevated, and you can oxidize at a much faster rate. So if you need a thick oxide, you use moisture. Yeah, question. STUDENT: When B of K is given, so all of the other side, where is the oxide that doesn't react? What would it be then? JUDY HOYT: OK, so the question is going back to, let's say, slide 11, maybe, of these limiting cases. What happens in a real case, if you have an oxidant diffusing through the oxide? Does it all react? Is there any unreacted species at the interface? Basically, it does have to react. And if the reaction rate is what's limiting you, then you're sort of in this regime. But there will be a certain amount of oxidant that doesn't react. And that can lead to perhaps excess charges and things like that, some of the more realistic aspects of the oxidation process that you're left with. So it can affect the quality of the oxide, to a certain extent. And what you generally find is that with wet oxides-- you don't see a whole lot of people growing gate oxides wet. Well, partly because they want thinner. But even in the old days, when a gate oxide was thicker, 500 angstroms or whatever, you didn't see people growing them in a wet ambient, even though the oxidation rate is a lot faster. The number of interface states and things like that tends to be higher for a wet oxide, so the quality tends to be lower.
So typically, what you do if you're trying to grow a gate oxide is you grow it dry. Or if you want to make it thick, and you have practical limitations, you grow a dry step, a wet step, and then a dry step at the end. So the interface is always grown in a dry step to try to reduce the interface state density. So if you need a gate oxide that's 1,000 angstroms, you do what they call a dry-wet-dry. So it does matter: the oxidation rate being too high is not the greatest thing in the world in terms of the device properties. I think was there a hand in the back? Or did I miss a question? OK. OK. Good. So those are the wet oxide kinetics. Let's go on to slide number 15 and just show some examples. Now right off the bat, I'm unfortunately showing an example to which the Deal-Grove model doesn't perfectly apply. But we'll apply it anyway. And next time, we'll talk about why it's not perfect. But there is a process called-- just to make this a little more interesting-- recessed LOCOS, local oxidation, which can be used to provide this lateral isolation between a device A and a device B. We need to grow an oxide here. And it results in a more planar surface than standard LOCOS. Remember, standard LOCOS looks like this. When we're done, and you strip off this nitride up in the upper right, you have a very non-planar surface. You've got this region up here, where the oxide pushed up. And so the surface is not very planar. Recessed LOCOS-- the basic idea is you etch a trench first. So here, I've etched into the silicon. Around the silicon nitride, I've etched a half-micron-deep trench. And now, we assume I'm going to subject that wafer to thermal oxidation. Well, on the left and the right side of the trench, that should say silicon nitride. Sorry, this left and right side should both be silicon nitride. There's no oxidation taking place. So maybe correct this handout on the right, where it's pointing to SiO2: that should say silicon nitride, Si3N4.
So we're only oxidizing, we're assuming, in this trench region. Now, this schematic is highly schematic. It's not including any bird's beak effects, which we'll talk about, where they originate from, or any two-dimensional oxidation. We're going to do a very simple calculation of how long it would take to fill that trench to make a planar surface, assuming I was a half micron down. Yeah. STUDENT: After doing that, will it [INAUDIBLE] JUDY HOYT: There is, in between. Yeah, it's just the arrow got displaced. This arrow is actually pointing to the top region, which is the silicon nitride. There is, as was just pointed out by someone in the class here, this hatched region that is meant to represent the SiO2, what they call the pad ox, or the pad oxide, which buffers the stress that's induced by the nitride. So there is, in fact, a thin pad oxide underneath the silicon nitride. But what's preventing the oxidation from taking place is the silicon nitride. SiO2 itself won't prevent oxidation. The oxidant will diffuse right through it. The nice thing about silicon nitride is that it's hard to get oxygen and moisture to diffuse through it. So it can act as a mask or as a blocking layer. So you don't oxidize underneath it. So this oxide that's shown here underneath it happened before the nitride was put down. OK, good point. So let's go on to slide 15. And here's just an example of a question. So we're assuming we've etched a half micron down into the wafer below the original surface prior to oxidizing. And a simple question might be, how long do I have to put it in the furnace at 1,000 degrees, assuming water oxidation in H2O, to produce a planar surface? Again, we're going to ignore any of the bird's beak effects or the stress effects that we know are there. We'll talk about those next time. So just to make it a little more interesting, we want to fill this up to the original surface and then make it planar.
So we have to be a little careful in doing this because remember, as I'm growing this oxide right here, I'm consuming the silicon underneath it. So you're consuming silicon underneath it. But the oxide is also growing up at the same time. So it's kind of starting here, very thin. And it's kind of going like this. At some point, it will be just flat, or close to being almost flat, with the original surface. So how do we figure this out? Well, you make yourself a little schematic diagram as shown here in color in the lower left. And in a rough sense, as you start oxidizing, I'm going to start bringing this trench point lower. For every micron of silicon I consume going down, 2.2 microns of silicon dioxide, SiO2, is grown. So you can write this mathematically. If I say Y is the thickness of the silicon that was consumed, then 2.2 times Y is X0. That's the thickness of the SiO2. So that's just due to the volume expansion when you grow oxide. So we can have that as one equation that describes what's going on. Slide 17-- so we remember that equation. But we also know the total thickness of oxide grown has to satisfy this requirement that we fill the trench. We want the specific case where X0 is equal to 1/2 micron, which is the original trench depth, plus Y, which is the thickness of silicon consumed. So looking at this geometry: the distance from here to here is a half micron. That was my original trench. Plus, the silicon is going to get consumed down a little bit. And that, we're calling the variable Y. We don't know that number yet. So we have two equations here. This X0 equals 0.5 micron plus Y. And we also have the constraint that 2.2 times Y is X0. And that's just from the volume expansion that we know. So we can simply now equate these two equations: we're going to set X0 in equation 1 equal to X0 in equation 2.
And then you can simply solve for Y. Y turns out to be about 0.4 microns. So that says we need to consume about 0.4 microns in order to just fill the trench up to the original surface. So from the original trench depth, I'm actually going to consume 0.4 microns down into the wafer. So X0, the oxide thickness, must be 2.2 times that. So it's about 0.9 microns. So we really have the answer. We know we need to grow 0.9 microns at 1,000 degrees. We were told we can do it in a wet ambient. So you can calculate B and B over A using Table 6.2 of your text. You know this is the Deal-Grove equation. You can just calculate B and B over A and solve for the time. Or, on the next slide, I'll show you how to use the graph. So on slide 18, these are just the oxidation kinetics, oxide thickness versus time, in a wet ambient. And I was told we were at 1,000 degrees. So that's this line right here. And we know we need to grow 0.9 microns. So that's this black horizontal line. And you can get it roughly here. It's just 3.8 or so hours, depending on how accurately you can read off the plot. If you want to get a little more accurate, you go ahead and calculate B and B over A and plug into the equation. But you can always check the plot. That way, you have a sanity check that you didn't make some calculator mistake or type in the wrong number. So that seems like a reasonable example of how you would do a simple calculation. So let's go on now to slide 19. So that was the Deal-Grove model. Immediately, at the same time Deal and Grove published it, they realized that there was a major problem with the model, even when it was first proposed. It doesn't correctly model what they knew to be the thin oxide growth kinetics. And you can see that in this plot. This is a plot I took out of Andy Grove's book, published quite some time ago but still a famous book: oxide thickness in microns as a function of time.
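The whole trench calculation can be laid out as a short sketch: solve the two geometric equations for Y, then invert Deal-Grove, t = (X0 squared + A X0)/B, taking tau = 0. The Arrhenius coefficients below are the commonly quoted fits for (100) silicon in steam, used here as illustrative stand-ins; your text's Table 6.2 is the authoritative source.

```python
import math

k = 8.617e-5             # Boltzmann constant, eV/K
T = 1000.0 + 273.15      # oxidation temperature, K

# Commonly quoted Deal-Grove Arrhenius fits for (100) Si in H2O
# (microns and hours); check these against Table 6.2 of your text.
B      = 386.0 * math.exp(-0.78 / (k * T))            # parabolic constant
B_by_A = (1.63e8 / 1.68) * math.exp(-2.05 / (k * T))  # linear constant
A      = B / B_by_A

# Geometry: X0 = 2.2*Y (volume expansion) and X0 = 0.5 + Y (fill the trench),
# so 2.2*Y = 0.5 + Y  =>  Y = 0.5/1.2.
Y  = 0.5 / 1.2      # silicon consumed, microns (~0.42)
X0 = 2.2 * Y        # oxide needed, microns (~0.92)

# Invert Deal-Grove (tau = 0) for the required time.
t = (X0**2 + A * X0) / B
print(f"X0 = {X0:.2f} um, t = {t:.1f} hours")
```

With these coefficients the answer comes out near the ~3.8 hours read off the plot, which is exactly the kind of sanity check mentioned above.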
This was for dry oxide grown at 700 degrees, a very low temperature. And these little bullets, these open symbols, are what they measured as the oxide thickness. And, in fact, what they noticed is that dry oxides grow much faster for thicknesses below about 20 nanometers, or 0.02 microns, than predicted by the linear parabolic model, which would be in this range. So the linear parabolic model would have had them sort of going like this. And here, these oxides are already going up much faster. So there's a rapid initial oxidation phase, which is not accounted for when you have a linear model in the beginning and then a parabolic model. So people knew right off that this really wasn't going to model the initial oxidation kinetics very well. So if we can go on to slide 20 then: since that time, a number of people have made suggestions to explain why the initial kinetics are faster than what the Deal-Grove model would predict. There isn't really one model that's universally accepted. There are a couple of them that are used in simulators today. So if you go to the Supreme 4 manual, and you read up about what thin oxide models are available for you to use in your numerical simulations, here's an example of three of them. There's the Reisman model, Han-Helms, and the Massoud model. So I want to go through these three; they're maybe the most popular or well known of all the proposals. So on slide 21, we have the Reisman model. They actually proposed a relatively simple power law. Some people regard it as more of a mathematical fit that, quote unquote, fits the data for dry O2 over a wide range of thicknesses. And their power law actually looks different from the Deal-Grove model. They say the oxide thickness X0 is some constant a times the quantity t plus t sub i, raised to the power b, where t sub i is the time corresponding to the initial oxide thickness.
Or you can rearrange it so it looks something like this in terms of a and b. Look at this compared to Deal-Grove. You can see it is actually quite different, although they still have a couple of constants. They have a lowercase a and a lowercase b for a given set of process conditions. So similar to Deal-Grove, you have two constants. They are different numbers. This actually works: the fit is mathematically better in the thin oxide regime. The problem is it's not as appealing, in the sense that it's not quite as simple as Deal-Grove. But their physical reasoning was that the interface reaction controls the oxidation process at all times. This is what they thought. But they believed that the need for the volume expansion, and the flow of the oxide at the interface, control the growth kinetics, the rate. And a and b were believed to be some time-dependent flow parameters for the oxide. So not as obvious or as simply appealing as the Deal-Grove model. But they also found that mathematically, it just makes a better fit. So people do sometimes use this type of model and then fit their own data with their own a and b constants. So that's the Reisman model. Han and Helms came along and got additional data. And they actually said that there are a couple of different oxidation processes taking place in parallel in the model. So perhaps, they said, not only O2 molecules, but also oxygen atoms can diffuse through the oxide, maybe in parallel. Or maybe there's a diffusion of oxygen vacancies. And so you have something diffusing in and something else diffusing out. Nevertheless, they said there are two species that you need to take into account. And they have associated reactions at the interfaces. So it looks very much like Deal-Grove. In fact, this first term is Deal-Grove. If you express Deal-Grove, remember, in terms of an oxidation rate, it's just B divided by 2X0 plus A. This looks a lot like Deal-Grove.
But they've added a second term. And this second term here corresponds to that second oxidation process happening in parallel. And they proposed that all the rate constants would be of an Arrhenius-type nature, based on the data that they fit, just like Deal-Grove. So you say, well, you should double the number of constants in Deal-Grove. Instead of two, you should have four. In fact, what they found is they only really needed three parameters to fit the data. So they needed a B1 associated with this process, a B2 associated with this process, and B2 over A2. They found A1 equal to roughly 0. So those of you who are skeptical in the audience will say, well, OK, Deal-Grove had two parameters. And Han and Helms have three. If you add more and more parameters to a fit, you can always do a better job mathematically of fitting data. Everybody kind of knows that from your mathematical experience. So you could be skeptical and say, all right, they just added another constant. And perhaps that's true because the truth is, there isn't really any definitive data saying what the species diffusing through there are. But nevertheless, they did come up with a model that's reasonably simple and that does a better job on the thin oxide kinetics. So let's compare these two models with the Deal-Grove model, just on slide 23. The x-axis here is time, and the y-axis is oxide thickness in microns. And this is plotted for atmospheric-pressure dry O2 at 800 degrees. So let's look at the different models. This line right here-- unfortunately, they're all the same color, I apologize-- but you can see where the arrow is pointing: the Reisman model looks something like this. Han and Helms, amazingly-- different mathematical equations, but they look pretty darn close. They're not exactly on top of each other in this thin oxide regime. But they're pretty darn close.
The Deal-Grove model with tau equals 0 looks like this, and with tau equals 8 looks like that. It doesn't actually approach either one of them very well until you get into the thicker regime up here. But in this thin regime, say below 200 angstroms or 0.02 microns, Deal-Grove doesn't really approach either one. And Han and Helms and Reisman do actually approach the data quite well. They will all converge, though, when you get to oxides thicker than 200 angstroms. There's one more model that people actually are very fond of. And I'll talk about that now. That's shown on slide 24. This was the Massoud model. This was published quite some time ago at the Electrochemical Society. And the neat thing about Massoud's experiment was they actually measured in situ. In the furnace, they actually measured the thickness of the oxide growing as a function of time. The other people, like Deal-Grove and Han and Helms, all those people, they put a couple of wafers in and take them out, put them in for different amounts of time. And so they get a few certain data points. How many wafers can you put in? Maybe five or 10 in an experiment. So you can't really measure the kinetics in great detail if you're just pushing and pulling wafers out of the furnace and measuring them outside. What Massoud decided to do to study the very thin regime was actually set up-- and it was non-trivial-- a diffusion furnace with special windows placed into it where he could put a laser beam. So you could have a laser beam coming in, a laser beam coming out. And he could do in situ measurements by ellipsometry of the oxide as it was growing in the tube. It doesn't sound like a big deal. But it turns out, it's pretty challenging because there's all kinds of thermal expansion going on. And the wafer has to sit still and not move while this laser comes in, hits, and bounces off because in ellipsometry, the angle of incidence is very important and all that.
So designing the equipment was like a huge part of this PhD thesis. But the nice thing is, it did produce a lot of data. Look at all these data points. They all basically give you a nice-looking curve. Instead of one point here, one here, and one here, he published in situ measured oxidation rates. Now, this is the rate. So this is not the thickness. This is angstroms per minute as a function of the thickness grown. And indeed, look below 200 angstroms. All of these rates are much larger than they are once they reach steady state, where the rates are smaller. And these are at three different temperatures: 800, 900, and 1,000. And in the thin oxide regime, the rate itself appears to be going up exponentially as you get to thinner and thinner oxides, or shorter and shorter times. So interestingly, initially, it appears to be, in fact, an exponential process with time. It's growing very rapidly. And then it starts to converge to a process that looks more linear and then, eventually, parabolic. Only with that kind of special apparatus could one get enough data point density to really see that, though. So this is probably the most extensive experimental study that has been done. And what he did was he took Deal-Grove, just as it came, right out of the box, and added one more term. But the term was exponential in thickness. So the oxidation rate had the Deal-Grove B divided by 2X0 plus A, plus a term that went like C times the exponential of minus X0 over L. So what is this? Mathematically, this is a decaying exponential. When X0 gets much larger than L, say above 200 angstroms, this term goes away, right? It becomes very small compared to the first term. So he wanted it all to converge to Deal-Grove at thick enough oxides. But for thin oxides, this C term is going to dominate. And in order to do that, he had to choose L to be a certain number so that it had the right decay length.
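Massoud's rate expression is easy to integrate numerically; a simple Euler march shows the fast initial regime decaying into plain Deal-Grove. The decay length L of about 7 nanometers is the value quoted in the lecture; A, B, and C here are illustrative stand-ins, not fitted values.

```python
import math

def massoud_rate(x, A, B, C, L):
    """Growth rate: Deal-Grove plus Massoud's decaying-exponential term."""
    return B / (2 * x + A) + C * math.exp(-x / L)

# Illustrative constants in microns and hours; L ~ 7 nm as quoted.
A, B, C, L = 0.165, 0.0117, 0.05, 0.007

x, dt = 0.001, 0.01                 # start from ~10 A native oxide
for step in range(int(5.0 / dt)):   # crude Euler integration over 5 hours
    x += massoud_rate(x, A, B, C, L) * dt

# By now the exponential term is negligible next to the Deal-Grove term.
print(x, massoud_rate(x, A, B, C, L), B / (2 * x + A))
```

At the starting thickness the exponential term boosts the rate well above pure Deal-Grove; by a couple thousand angstroms the two rates are indistinguishable, which is exactly the convergence property Massoud built in.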
And L, he found experimentally, should be about 7 nanometers, so that the second term will decay rapidly enough by the time you get to 20 nanometers or so. In fact, this model does agree very well mathematically with experimental data. It's actually very easy to implement in simulators like Supreme 4. And so it has been implemented in Supreme 4. And in terms of physical explanations, there are a few that have come out. But none has really been clearly identified as being the right reason. So for now, we use it as sort of a mathematical model or an empirical relation. It's very handy. And it can be reasonably accurate. So it's appealing in that sense. In terms of the physics and the chemistry, it's not completely obvious. OK, so those are the three thin oxide models. And hopefully, you'll get a chance to study some of those a little bit more in one of your homework problems. So now let's vary something else. All we varied so far is the following: we varied the ambient-- we said it could be dry O2 or water-- and the temperature. What else can you vary? Well, it turns out, people often do vary the pressure. That is, the partial pressure of the oxidant. If you want to grow a very thin oxide, and yet you want to grow at a higher temperature because you want different electrical quality of the interface, you can't grow too fast. So you might want to slow it down by diluting. You can just take the oxygen and flow mostly nitrogen, and have the oxygen flow be a very small fraction of the gas stream. That's a way of diluting the gas stream and growing at lower partial pressures. So just as a reminder, Deal-Grove predicts that the oxide growth rate should be directly proportional to the pressure of the oxidant. C star, effectively, if you think of it that way, should depend on PG, where PG is the bulk gas pressure. So basically, if I were to lower the gas pressure, you would expect that you could lower the oxidation rate.
So just to be a little more specific, we go on to slide 27. What people have found, though, is that it's not completely linear. Experimentally, if you lower the partial pressure of the oxidant, B and B over A do in fact scale like P, where P is the partial pressure, for water oxidation. And the rate constant B does scale like the pressure for dry O2. The one rate constant that's anomalous is the linear rate constant: the B over A for dry O2 doesn't go exactly like pressure. It goes like P to the n, where n is close to 1 but not really 1. It's a power law with n somewhere between 0.5 and 1. So this suggests that, at least for oxygen, K sub S maybe has some dependence on pressure that's nonlinear. So these are some empirical models that you'll find in Supreme 4, for example. The linear rate constant may go like B over A, superscript i, times pressure to the n, where i here refers to the intrinsic value at 1 atmosphere. n here might be 0.7 to 0.8. And you notice, you specify the pressure P. So maybe you say half an atmosphere or a tenth of an atmosphere. But we're not changing-- think of it this way: you're not actually changing the pressure in the tube. The pressure of all the gas in the tube is still an atmosphere. What we're doing is flowing mostly nitrogen, and we're putting a small amount of oxygen in the gas stream. So let's say I wanted to oxidize at half an atmosphere. Well, I just cut the flow. I use equal flows of nitrogen and oxygen. So we're diluting the oxidant in the gas stream; we're diluting its partial pressure, basically. So here's an example. Suppose I dilute it by a factor of 10 to 1. So I'm flowing 1,000 sccm of an inert gas like argon or nitrogen. And I'm flowing only 100 sccm of oxygen, dry O2. This is what we would get as the oxide thickness versus time. And the solid lines are if you were to use the very brute-force, simple idea that it all scaled like P, and it's just down.
The rate constants are down by a factor of P, where P here would be 1/10 of atmospheric pressure. The dashed line is where you use a little more sophisticated model where, in fact, the B parameter scales like pressure, but the B over A parameter scales like pressure to the 0.8 power. So in fact, the kinetics would be slightly faster-- the oxide would be growing slightly faster-- than you would have predicted by saying it depends strictly on P. So it gives you a little bit more accuracy using some of these more empirical models. But again, the idea here is, if for some reason you need to control the oxidation rate better, you want to slow it down, you just dilute the oxygen in the furnace. I just showed some specific examples here of trying to read off what the difference would be. You'd make an error of about 100 angstroms if you used the simple assumption compared to what's a little bit more accurate here at 1,000 degrees. OK, so let me just get through-- that's the standard kinetics on slide 29. Let's go on to slide 30 then. OK, so what I just talked about was trying to dilute the oxidant so you can grow slower at a given temperature. On slide 30, we're doing just the opposite now. It's not so easy to do from a physical point of view. But you can imagine growing at higher than 1 atmosphere. What does the pressure tell you? It tells you the number of atoms or molecules, species, hitting the surface per unit time. So if I double the pressure, I can double the amount hitting that surface. I can double the flux to the surface. So you expect the growth rate to go up. It's just not intuitively obvious how you could do that. Here, you have a tube that's made of quartz. I have it open on both ends to the ambient. And I'm flowing some gas through it. How are you going to get a higher pressure? We're all at atmospheric pressure. It doesn't work. But you can do it.
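The dilution correction can be sketched as follows: scale B linearly with the oxidant partial pressure, but scale B over A like P to the 0.8, then solve the Deal-Grove quadratic for thickness under both assumptions. The 1-atmosphere constants below are illustrative round numbers of dry-O2 magnitude, not tabulated values.

```python
import math

def thickness(t, A, B):
    """Solve X0^2 + A*X0 = B*t for X0 (tau = 0)."""
    return (-A + math.sqrt(A**2 + 4 * B * t)) / 2

# Illustrative intrinsic (1 atm) dry-O2 constants, microns and hours.
B_i, BA_i = 0.0117, 0.071
P, n = 0.1, 0.8     # 10:1 dilution; empirical exponent for B/A in dry O2

# Naive model: both constants scale linearly with P.
B_naive, BA_naive = B_i * P, BA_i * P
# Empirical model: B ~ P, but B/A ~ P^0.8.
B_emp, BA_emp = B_i * P, BA_i * P**n

t = 2.0   # hours
x_naive = thickness(t, B_naive / BA_naive, B_naive)
x_emp   = thickness(t, B_emp / BA_emp, B_emp)
print(x_naive, x_emp)
```

The P-to-the-0.8 model predicts a somewhat thicker oxide than the strict linear-in-P scaling, which is the direction of the error described above.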
You can build a system where outside that quartz tube, there's a big envelope made of stainless steel, like a diver's tank. They have training tanks for deep-sea divers. They put these guys in these tanks. They seal it up like a submarine. And they can stuff more and more gas in there, and you can increase the pressure on the diver. So that's exactly what people do with high pressure oxidation. It's a very special type of equipment. It's not something you're going to find in a standard fab. They are available, though. They've been commercially made. And people call it HiPOX, the High Pressure Oxidation system. So here's an example of how the kinetics would go up. And this I took from Mayer and Lau's book, 1990. What they're showing here is, on a semi-log plot-- so it's log on the Y, linear on the X-- oxide thickness versus time. And this is 100 silicon in steam. And they're looking at a couple of different curves here at different pressures. So at atmospheric pressure, it looks like this. At 5 atmospheres or 10 atmospheres, it looks like this. Here we are at 20 atmospheres. Look how much faster we can grow. So this can be used to increase the oxidation rate at low temperatures. If you need to grow a thick oxide thermally-- you need to have the properties of a thermal oxide, which are quite unique: its density is well defined, it has much better electrical properties than you would get by depositing an oxide, and it actually consumes silicon, which is not true of a deposited oxide-- then you have to go to a HiPOX process. Or let's say you have some structure on the surface, like silicon germanium or some material that can't go to high temperature because it'll cause the material to diffuse or to relax, or you're trying to limit the amount of diffusion in the substrate. Well, if you need to grow a thick oxide, you can do it in a HiPOX machine at 800, where you never would be able to grow half a micron at 800.
But you can do it in HiPOX at 20 atmospheres. So for materials and processes that are getting limited by temperature, that have to go lower and lower in temperature, HiPOX is a way of increasing the oxidation rate. So in fact, here are some data on HiPOX shown on slide 31, which I took from Simon Sze's book. And what it shows is, on the y-axis, measurements of the parabolic rate constant-- so this is the B parameter-- as a function of 1,000 over T-- so it's a semi-log kind of Arrhenius plot-- in steam. And this has been measured for a couple of different orientations of silicon. And what you see here is this solid line, which has two different activation energies. There seems to be a break point here somewhere around 950 degrees, but the activation energy is close to one electron volt. This is the B parameter. Here, at 5, 10, 15, and 20 atmospheres, this is what the B parameter goes up to. And in fact, I marked this blue line, which is at 800 degrees. Reading off the chart here, the B parameter is almost exactly a factor of 20 faster at 20 atmospheres than it is at 1 atmosphere. So we can go up to 20 times the oxidation rate at 20 atmospheres at 800 degrees. So this parameter does scale reasonably well with pressure at higher pressures. On slide 32, I'm actually showing you-- remember, I said you can build this complex diver-tank-looking thing with a great big steel vacuum system that can hold the pressure, and a pressurization system. I've taken this from Sze's book. He actually talks about how you would do a HiPOX run, if you had such a thing. Just so you can think about it, what happens is the wafers get loaded here at atmospheric pressure. Down here, this solid bar means you're at atmospheric pressure. Obviously, you can't pump up the diver tank until the hatch is closed. So you load them at atmospheric pressure. And you have a sealing step. And here, you're starting to increase the pressure, again using O2. You do some purging.
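The factor-of-20 speedup can be turned into a rough time estimate: in the thick-oxide (parabolic) regime, t is approximately X0 squared over B, so scaling B linearly with pressure divides the time directly. This sketch reuses the same hedged Arrhenius fit for B in steam as before; check the coefficients against your text's tables.

```python
import math

k = 8.617e-5            # Boltzmann constant, eV/K
T = 800.0 + 273.15      # low oxidation temperature, K

# Hedged Arrhenius fit for the wet-oxidation parabolic constant B
# (um^2/hr); illustrative, not an authoritative table value.
B_1atm  = 386.0 * math.exp(-0.78 / (k * T))
B_20atm = 20.0 * B_1atm      # B scales roughly linearly with pressure

x0 = 0.5                      # target thickness, microns
t_1atm  = x0**2 / B_1atm      # parabolic-regime estimate, hours
t_20atm = x0**2 / B_20atm
print(t_1atm, t_20atm)
```

The hours-long 1-atmosphere bake collapses to a fraction of an hour at 20 atmospheres, which is why HiPOX makes a thick oxide practical at 800 degrees.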
And then at some point, what you do is you bring this outside steel shell and the tube up together in pressure. And you go up to, say, 10 atmospheres. So you're pressurizing in moisture. Then you do your oxidation. Then you depressurize and bring it down to atmospheric pressure and unload. So it's perfectly possible to do this. Obviously, if you tried to do it with a quartz tube alone and put end caps on it and pressurized it, of course, the quartz would explode all over because it's not a strong material. But if you have the stainless steel shroud all around it, then the differential pressure across the quartz is very close to 0. Or it can be just maybe one atmosphere. We know it can withstand 1 atmosphere. But that way, you keep the quartz from exploding all over the place. So it's a special piece of equipment. OK, so then let's go on to slide 33. So with HiPOX, we can do low pressures, and you can do high pressures if you have the right equipment. It turns out, simple oxidation also isn't always done just with oxygen or water vapor. Small concentrations of certain other species are often added. For example, HCl is sometimes added to the O2 because it turns out, people have found by adding that, they can reduce oxide defect densities or they can reduce contamination levels. In the liquid form, HCl is very good at etching off metals from the surface. So perhaps it has a similar role when it's in the vapor phase. So sometimes, people add a small amount of HCl to the furnace. It can produce chlorine, which can react with maybe trace metals by this kind of process that's shown here. So it's not unusual to see, in manufacturing processes, oxidation in not just O2 but O2 plus HCl. So these bullets here in the middle of the slide are just some generic kind of observations that have been made on these different ambients. Well, as we just studied with Deal-Grove, we know in detail that you can get 20 to 50 times faster than dry O2 just by putting in wet O2 or steam. So we know that.
3% chlorine in the ambient can increase the growth rate by 20% to 30%. So HCl is also a way, at a low temperature, of boosting the growth rate. The name of the game these days is keep the dopants from moving, but keep the growth rate high enough to give you a reasonable thickness. So you can add a little chlorine to speed up the growth rate. You can also add NF3, another commonly used species. A small amount of that will increase the growth rate by two to three times. Again, this is adding in a lot of new chemistry. And there's no way, in an ab initio, first-principles sense, to really calculate what those rates are. So in simulators like SUPREM-IV, if you go sit down and tell it, oh, I want to grow an oxide in chlorine and oxygen, it doesn't simulate the chemical reaction and all the kinetics and then spit out a number. It has a lookup table of B and B over A values that have been measured in the literature for commonly used ambients like chlorine and maybe NF3 and a few others. So it looks up in the table what B over A and B should be as a function of the pressure here. This pressure, delta or chi, these are some functions of the concentration of this additional species that's in the furnace. It could be HCl or whatever. And they've been determined very empirically. So don't expect first-principles models, but maybe some mathematical models would be in SUPREM for different ambients. But again, they've all been calibrated to somebody's experiment. So they may not exactly agree with your experiment. You might have to calibrate it yourself. Here's an interesting example on slide 34. These are some kinetics in-- HCl has been used for quite a while. It's even old fashioned now. Something that's a little bit newer is people oxidizing in ambients like NO, N2O, and even implanting nitrogen. Remember, we talked about oxynitrides as the first form of high-k. Oxynitride, which is a mixture of silicon, oxygen, and nitrogen, has a higher dielectric constant.
You can make it a little bit thicker. And so you get less gate leakage. So gate oxides of oxynitride were popular for people to study. But here's an example of trying to fabricate an oxynitride where the kinetics are really crazy. They're not at all linear parabolic. And in fact, this is a plot here where the y-axis is oxide thickness, or oxynitride thickness if you want, as a function of the nitrogen dose. So here, the substrate was implanted first with nitrogen-- and we'll talk about ion implantation-- to a certain number of atoms per square centimeter. And they're plotting the oxide thickness here at different times, so say, for 25 minutes as a function of dose. And you can see, depending on how much nitrogen you implanted, the thickness actually can go down. So the oxidation rate can depend on exactly what is at that surface. Was it pure silicon? Or was it silicon that had been ion implanted with something? So this needs a qualitatively very different model from Deal-Grove because you need to specify something that Deal-Grove doesn't even treat. Deal-Grove assumed you had a perfectly pure silicon wafer that you were oxidizing, with just silicon atoms in it. Once you start implanting nitrogen or some other species, all bets are off. You're not going to widely find simulators that can support this type of model. If you want to do this, you're going to have to model it yourself by taking data from the literature and fitting it or maybe doing your own experiments. So let's look at slide 35. There's another parameter you can imagine varying here. And this is a relatively simple one. And that's the orientation of the crystal, the face of the crystal that you're oxidizing. We know that the B parameter, which depends on the diffusion through the oxide, is independent of orientation. B over A, which is the reaction rate at the interface, of course, is going to depend on the number of atoms available per square centimeter to react.
So it does depend on orientation. And these are the dependencies, just to give you rough numbers or a relationship. 111 is the fastest. It has the greatest number of atoms per square centimeter. 110 is second. And the slowest is 100. And in fact, these are the relationships. If we take the 111 rates and we divide them by 1.68, we get the 100 rates. And the 110 relationship is in between. And again, it's related to the silicon atomic density, the number of atoms per square centimeter that are available. And you can believe that because in order to have the surface reaction take place, silicon bonds have to be broken. And the number of atoms available is going to be important in that case. Oh, slide 36 is an interesting example of how, even-- you might say, well, I only use 100 wafers. What do I care about oxidation on 110? Well, you will care because you often etch a trench and then need to oxidize the trench. And of course, the trench has different orientations of its walls. So this is a SUPREM-IV simulation of a trench that was etched into a 100 wafer. So 100 means what? It means the surface of this wafer, this plane, at the very surface, is 100. The 100 direction, or 001, is pointing up. But once I start etching into it, the planes that I expose will be different orientations, depending on how I design my trench, if I etch into the crystal. So for example, I etch a trench here down into the crystal. The bottom face is 100 because the bottom face is the same as the top face, the face of the wafer. But the sidewall, this face right here, the sidewall that I've exposed, depending on how I oriented this trench, which is assumed to be square or rectangular, the sidewall face could be a 110, depending on how I orient it with respect to the flat. So you can immediately expect a different oxidation rate on this face right here from the oxidation on the bottom. And this is an example of a simulation that was done 30 minutes at 900 degrees C in water.
And you see the thicker oxide that grew on the sidewall. That's pretty much-- in this bulk region of the sidewall-- that's entirely due to the fact that it's a different orientation. So this simulator took that into account. It's a little bit thicker here than it is on the bottom. What is this, this weird looking corner effect? That's not orientation so much. This actually shows a two dimensional effect we'll talk about next time, which is a stress effect. When you're oxidizing a corner, again, the oxide has to expand. In a corner, it's a little bit hard to do that expansion. It's just going to turn out that the oxidation rate in these sharp corners ends up being slower due to stress. And that was taken into account in the simulation. So let me just summarize what we have from today. The basic growth mechanism is transport through the oxide by diffusion and then chemical reaction at the interface; it's not that the silicon comes out of the lattice and goes up to the surface. We generally have a linear/parabolic law when we have planar oxidation. Next time, we're going to see there are a lot of different cases where it's non-planar and it's not just simple linear parabolic. The Deal-Grove model isn't completely accurate in the thin oxide regime. But it can be corrected by using Han and Helms or Reisman, or you can use Massoud. And they all can be designed to converge to Deal-Grove when you're at thicker oxides, say above 200 angstroms. And the growth rate varies with orientation, 111 being the fastest and 100 being the slowest. So that's mostly what I wanted to cover for today. I have a couple of things for you though. Your homework is going back. So you can come up and grab your homework. It's been graded, up here, homework number one. And the solutions are also up front. So if you can come on up and take the homework and the solutions for homework number one, that's good.
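To make the linear-parabolic summary concrete, here is a minimal numerical sketch of the Deal-Grove thickness solution. Note this is an illustration, not a calibrated model: the rate constants B and B/A below are made-up placeholder numbers, and only the functional form x^2 + A*x = B*(t + tau) and the 1.68 orientation factor between the (111) and (100) linear rates quoted in the lecture are taken from the material.

```python
import math

def deal_grove_thickness(t, B, B_over_A, tau=0.0):
    """Oxide thickness from the linear-parabolic law x^2 + A*x = B*(t + tau).

    t        : oxidation time (h)
    B        : parabolic rate constant (um^2/h), orientation-independent
    B_over_A : linear rate constant (um/h), orientation-dependent
    tau      : time shift accounting for any initial oxide (h)
    """
    A = B / B_over_A
    return (A / 2.0) * (math.sqrt(1.0 + (t + tau) / (A * A / (4.0 * B))) - 1.0)

# Placeholder rate constants (NOT measured values), just to show the trend:
# B is the same for both faces, while B/A scales with surface atom density,
# with (B/A)_100 = (B/A)_111 / 1.68 as stated in the lecture.
B = 0.4           # um^2/h, hypothetical
BA_111 = 1.0      # um/h, hypothetical
BA_100 = BA_111 / 1.68

for name, ba in [("(111)", BA_111), ("(100)", BA_100)]:
    x = deal_grove_thickness(t=1.0, B=B, B_over_A=ba)
    print(name, round(x, 3), "um")
```

With identical B, the (111) face comes out thicker than the (100) face at the same time and temperature, which is the orientation effect the trench simulation illustrates.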
If you weren't here last time, you didn't get to see the Intel wafer. I'm going to hand it back after today, so you're welcome to take a look at it. Oh, these are not in any particular order. STUDENT: [INAUDIBLE] JUDY HOYT: What? STUDENT: People extracted the strained silicon? JUDY HOYT: Yeah, people tried. Jeff tried. It seems to be pretty darn close, a change of 1% in the lattice constant, because I really haven't changed the number of atoms per square centimeter. What does change is as soon as you hit the silicon germanium, the oxidation rate goes way up. STUDENT: [INAUDIBLE]
MIT_6774_Physics_of_Microfabrication_Front_End_Processing_Fall_2004 | 19_Thin_Film_Deposition_and_Epitaxy_Modeling_Topography_of_Deposition.txt

JUDY HOYT: Go ahead and get started. We want to start by looking at the schedule, to get ourselves reoriented. This is November 16. And we'll have our final lecture today on chapter 9. Chapter 9 has three lectures. We'll talk about modeling the topography of deposition. And you're working on-- hopefully everybody's working on homework number five. That homework is due next lecture, on the 18th. And does anybody have any questions on the homework? If you do, make sure you see me after class or send me an email. In addition to the homework, everyone now-- except one person on the clipboard-- has a topic. And I had my assistant type it up. I'll put it on the web. Everybody's signed up for a final report topic. And you can verify that she got it right. And then if you're doing an oral presentation, this week I'll start making up a schedule so you know which day you're going to speak. OK. So hopefully everybody recovered from the snowstorm we had over the weekend. And we'll go ahead and start this lecture with handout number 32. As I mentioned, this is the final lecture on chapter 9. So far, by way of thin film deposition and epitaxy, we've talked at least a little bit about silicon epitaxial growth. We discussed the low pressure chemical vapor deposition of the three most important films in front end processing: polysilicon, low temperature oxide, and silicon nitride. And we introduced some PVD techniques such as sputtering. We talked about systems for DC and RF sputtering, which is the most commonly used PVD technique in modern fabs. The nice thing about it is it can be adjusted to give reasonably good step coverage. This is in contrast to evaporation. And thermal evaporation is typically not used in CMOS processing. Today, what I want to talk about, though, is modeling of these processes.
And you'll see how these models are a little bit different. They're much more quasi-empirical than the models we had for ion implantation. But there are methods to do modeling of these processes. And we'll give a couple of specific examples. OK, let's go on to slide number 2. Within the last 10 to 15 years or so, a number of simulation tools have been developed for topography simulation. Now, I want to make clear, topography simulation is not what SUPREM-IV does. You've all been using SUPREM-IV to model ion implantation, oxidation, diffusion, a lot of the front end processing. SUPREM-IV doesn't have any topography simulation of any sort. If you tell it to make an etch, it assumes it's etched at a certain angle. You tell it the angle. It doesn't make any effort to calculate those angles. Or if you tell it that something is deposited, it assumes conformal deposition unless you say otherwise. So it's not a simulator for topography. There are other simulators. And one of the ones that also came out of Stanford is called SPEEDIE. And there are some examples in this lecture from the SPEEDIE simulator and from your textbook. OK, so what is it that we need to think about if we want to do topography simulation? Let's say we are trying to model the topography of deposition of some film. This particular example on slide 2 is shown assuming we're doing deposition of aluminum, say, in a sputtering chamber. And this is the surface that we're depositing the aluminum, for instance, onto. That surface has some starting initial topography. Well, this is a generalized picture of what's involved in the deposition. And you'll see that modeling topography ends up being a lot of bookkeeping of the different fluxes involved. In general, we are not going to consider the gas phase boundary layer in these simple models. So this picture will not be accurate for, for example, atmospheric pressure CVD, where diffusion through a boundary layer is very important.
We're not taking that into account. But it will be reasonably useful for modeling low pressure CVD or for modeling sputtering, which also takes place at reasonably low pressures. So these are the fluxes involved. For example, if we're depositing aluminum on this surface, we have a direct flux to the surface of neutral aluminum atoms. We also have a direct flux of ions that we need to keep track of. Aluminum can come down and we can have desorption. It can desorb aluminum that had already been deposited at a point. And then we have a desorbed or emitted flux that can be redeposited elsewhere on the surface. So that's a flux we need to take account of. Argon ions-- because again, this is in a sputtering system, argon is nominally supposed to be directed towards the target, not towards the wafer. But it's possible that some argon ions would be directed towards the wafer and that they would sputter off aluminum that had already been deposited. And then that can redeposit elsewhere. And there's a surface diffusion flux. Aluminum can diffuse along the surface a certain distance and then deposit at a different point. So we end up just doing a lot of bookkeeping on all the different fluxes. And it'd be good if you think about this picture because we're going to use the exact same picture when we talk about etching. When you think about it in this terminology, etching is sort of just the opposite. Instead of depositing, we're talking about fluxes coming down and reacting and removing films. But it's not that much different. It's mathematically the inverse of deposition. So in order to simulate the process that I just showed, on slide 3 here, we need some kind of mathematical description of all of these fluxes. And then we need to add them all up and see what we get at any given point. So what we write is this equation shown at the top of slide 3, which gives the net flux at any point i.
So point i is just-- if I go back one, point i is just some point on the surface. It could be at this point right here. It could be at that point right there. It's just some point on the surface. The net flux is, well, it's the flux deposited at that point minus everything that was emitted, right? And that gives you the total atoms per square centimeter per unit time. So you add up the direct flux of neutrals, the direct flux of ions, any flux that's redeposited that's coming off from another area, any flux that's due to surface diffusion. All of those are positive terms. You subtract off any flux that's emitted from that point, any flux that diffuses out or that sputters from that point by virtue of argon ions coming down or something. So you can see, this is going to amount to basically a lot of bookkeeping. And this is something that computers are very good at doing. And you can model those fluxes across the surface. So usually, what we need to do for a specific system-- let's say you're doing LPCVD or something-- is figure out which of these fluxes are important and ignore all the others so we can simplify the models to a certain extent. And this picture gives you a rough idea of how we do the modeling. What we consider is we draw this dashed line here right above the surface of the wafer. And so it's a plane just above the surface. And the first thing we need to look at is, what's the arrival angle distribution of atoms and ions that are coming down on that plane? And then later on, the next step will take into account topography. So first, you do it assuming you have a plane just above the surface. And now what we need to take into account are the direct flux of neutrals and the direct flux of ions. These we generally model with some kind of arrival angle distribution just above the wafer. And I've noted here that it doesn't model the equipment.
What that means is you have to put in the arrival angle distribution. You can't say to the simulator, oh, yes, I have an Applied Materials sputtering tool, the Endura, and now tell me what it's going to look like. It doesn't have that level of knowledge of the individual tools. What you do is you put in the direct flux as a function of the angle theta, and that's measured with respect to the normal. And you give it usually-- it's some normal flux times a cosine theta to the n power. So that's a variable that you need to put into the simulator. But there are some guidelines for different types of systems, what kind of arrival angle distribution for neutrals and ions you might use. So let's look at a couple different types of flux distributions. The first one here, shown on slide number 4, is what we call isotropic flux arrival. As the name suggests, the flux is essentially isotropic. And in this we use n equals 1 in the formula of cosine theta to the n. So what we get is that the direct flux F as a function of angle theta-- where theta is measured with respect to the normal-- is whatever the flux is in the normal direction, F0, times cosine of theta to the n. So what's happening is, moving from normal, I have a certain flux in. And as I'm coming in at shallower and shallower angles, the flux goes down according to this cosine theta. And this little derivation shown in the second half of slide 4 kind of explains mathematically where that comes from. What we're interested in when we're looking at deposition is the normal component of the flux, OK? So let's say this is my wafer down here. This solid line here represents the wafer or the surface that I'm depositing on. And if we look at this little red region, that represents a certain normal flux coming in. And that would be my F super 0. And I'm drawing this for the n equals 1 case.
So I have a certain flux coming in normal. Now, at some other angle theta, I have that same flux. You notice I drew the same number of arrows. So coming in this direction, normal to this plane here, I have the same number of atoms per square centimeter per unit time. But I need to project that now onto the surface. So when I project this distance here of length l onto the surface, it's spread out over a lateral dimension h. Now, we have right triangles here with angle theta here and theta here. And you can write, very simply, that the cosine of the angle theta is just this distance l divided by the distance h, where h represents how far that unit flux is spread across the wafer. OK, so now it's just bookkeeping. If I want to know what the flux is here in this region, it's just a number per unit area per unit time that's going to be proportional to the number per unit time divided by h, right, because it's spread out over the distance h. But you can substitute in for h, which is l over cosine theta. Right, so you can substitute that in. So you get the fact that for isotropic arrival-- assuming you have just as much intensity in this red bar as I had in this direction theta in the black bar-- I get this simple formula, which shows that the flux as a function of theta is just the normal flux times the cosine of theta. And that's assuming that the source is isotropic like this. If for some reason your source varies, and you have much greater intensity over here and over here or whatever, this cosine theta term will be modified. But again, this is assuming an isotropic source. And the cosine theta just comes from the geometry factor. OK, you can sit down and stare at that for a little while. Hopefully, it'll make sense to you. Go to slide 5. So in contrast to that, on the left side I show what I just derived, which is the isotropic flux arrival distribution.
On the right side, for some systems you may have very anisotropic flux. Most of the atoms or ions coming down may be directed mostly normal to the surface. There may be very few coming in at shallow angles, depending on the particular machine that you're using. So here, in the cosine theta to the n, we make n greater than 1. And you can see, just based on the way these arrows are distributed, most of them are coming in normal. The flux coming in at higher angles theta is much smaller. You can see that from the length of the arrow. So the length of these arrows here is supposed to represent the flux at any given angle. So generally, we have this arrival angle distribution at a plane just above the wafer. It has some cosine theta to the n type of dependence. And again, it's the normal component, normal to the surface, that strikes the surface that determines the deposition rate. So let's take a look at examples of flux arrival distribution functions. Here's that formula I just mentioned. And for example, if you have a reasonably high pressure system, you're going to have a lot of gas phase collisions, which means you have a short mean free path. And you're going to tend to get isotropic arrival because those atoms are all bouncing around. And regardless of where they're coming from, the flux is just equally distributed for all angles. If you're in a very low pressure system where you have far fewer gas phase collisions and a long mean free path, you're going to get more like line of sight from the source, whatever that source happens to be. It could be a point source. It could be a plane if it's sputtering. But you're going to have n greater than 1. That's all assuming neutrals. If you have ionic species and the wafer is biased, imagine the wafer has some bias and you have an ion-- the ion can be attracted to it by the electric field that's set up.
In that case, for ionized species, it tends to be very anisotropic because, again, there's a force driving it towards the wafer, so it doesn't tend to bounce around. The species tend to go right to the wafer. So these are just some examples of flux distributions so you can see what it looks like for different values of n. So this is the relative flux. It's the flux at any given angle theta divided by the normal flux, divided by F0. And this is what it looks like. So it starts here at theta equals 0. That's normal, so it's one. And you can see for n equals 1 it drops down and the flux finally goes to 0. At 90 degrees, you're just skimming along the surface. Obviously, the normal component is 0. n equals 3, which is slight anisotropy, looks something like this. The flux really dies down dramatically after 60 degrees. n equals 15, which could be appropriate for some ionized deposition systems-- you can see the flux really goes to 0 by the time you get to about 30 degrees. So it's very anisotropic arrival at that plane. And again, in most models, these are input parameters that you can vary. OK, so that gives us an idea of the direct fluxes and their angular dependence. The next thing we have to do is to take into account the surface topography. And just to give you an example of that, I'm showing here-- imagine your surface is not flat, but that you have a wafer and you are sputtering aluminum onto it and you have a via or a contact hole. So it's like this, and then you have this hole right here. Well, imagine each one of these little regions corresponds to an emitter with a certain flux. Then any point i on this vertical surface is going to have a certain viewing angle. And if you're beyond that viewing angle, you're going to be shadowed. So this is a shadowing effect on the sidewall of the trench. It's not that uncommon depending on the depth and the aspect ratio of the trench.
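The cosine-to-the-n arrival distributions described above are easy to tabulate numerically. Here is a minimal sketch (the angle grid is an arbitrary choice for illustration) that reproduces the qualitative behavior on the slide: for n = 1 the relative flux falls off gently with angle, while for n = 15 it is essentially gone by about 30 degrees.

```python
import math

def relative_flux(theta_deg, n):
    """Relative arrival flux F(theta)/F0 = cos(theta)^n,
    with theta measured from the surface normal."""
    return math.cos(math.radians(theta_deg)) ** n

# Compare the isotropic case (n=1) with increasingly anisotropic sources.
for n in (1, 3, 15):
    profile = [round(relative_flux(th, n), 3) for th in (0, 30, 60, 90)]
    print(f"n={n:2d}:", profile)
```

In a topography simulator like SPEEDIE this exponent n is exactly the kind of input parameter you would vary to represent a high pressure (isotropic) versus a low pressure or ion-assisted (anisotropic) system.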
So we have to take into account the orientation of the surface, what its angle is, this viewing angle, and shadowing. And typically, in the near-surface region, we will neglect gas phase collisions. So if you will, you can think of it this way: imagine my reference plane. Remember, that reference plane is sort of a plane just above the wafer surface. And each small area on this reference plane can be treated as an emission source. So if you have emission coming down, you have all these little emission sources. And each source is emitting to the topography below it. And then the total direct flux at any point i can be calculated by just adding up the flux from each one of these little miniature emission sources, each of which has a certain angular distribution depending on the type of system that you're modeling. So you can now begin to see how in a computer model you can build equations and you can build structures that would be able to model this type of deposition. Let's go on to slide 8. And I wanted to give you a practical example. We're talking about all these mathematical things. But a practical example, which should be somewhat intuitive, here on slide 8, is the impact of having a finite target size compared to the wafer, if you're doing sputter deposition, on asymmetry across a wafer. So if you look at this illustration, imagine I have a target that I'm sputtering from. And this could be aluminum. And it's up here. It has this dimension. It's relatively small compared to, say, a 6 or 8 inch wafer that I'm doing. So what's going to happen? You have the wafer sitting there, and you're sputtering into a trench on the left side of the wafer, and you have a die over here on the right side of the wafer. And let's say for now you're not rotating the wafers or anything. So you can imagine what's going to happen here on this trench. This white region indicates the material you've sputtered.
You get an asymmetry on the sidewall because here, at this point on the wafer, you have more flux coming from this direction. And so, because of shadowing, you're not going to get much deposition on this wall. You'll get more on this wall. On the right side of the wafer, we have just the opposite type of shadowing. So you will tend to get asymmetric deposition on the left side of the wafer as opposed to the right side of the wafer with this particular geometry. So what do people do in a real system? Because obviously this is not desirable. We want each side of the wafer to look the same. Well, you can widen the target size. So I can take this target and make it much bigger, make it larger than the wafer. Or you can take the same size target and move it further from the wafer. In either case, you're going to get an arrival angle distribution that is more symmetric across the wafer. Another thing people sometimes do is to rotate the wafers. So you can put the wafers on a plate that rotates with respect to the source, which is essentially similar to increasing the effective size of your target. This is why sputtering systems are very large, because they tend to have huge targets. Imagine-- today, here at MTL we're processing 6 inch wafers, and in industry people are processing 12 inch wafers. That target has to be huge basically. And then you would be rotating the wafers. And so this is why the equipment in the fabs is getting larger and larger. You're fundamentally limited by some of these processes to obtain uniformity. And on slide 9, in fact, there's just an example of how we would ameliorate this asymmetry between the left and the right side of the wafer. Here's an example where the target's been made as large as the wafer. In fact, most targets today are much bigger than the size of the wafer. And this is also why they cost a lot of money.
If you have a target of platinum, if you're sputtering platinum, it has to be 2 feet by 10 inches or something like that. That's a lot of platinum. That's a very large sheet of platinum. So it costs quite a bit of money. Or if you want, instead of making a target so big, you can move it further away and you'll get a little better symmetry between the left and the right side of the wafer. But that makes the machine really big. And you can only have so much size in your clean room. So there are kind of trade-offs here. So in practice, people use larger targets at some reasonable distance, and scale the tool size, and rotate the wafers to help. There's a practical example. OK, so we talked about how to calculate the direct flux. Basically, it's like a geometry problem essentially, and you have a cosine theta to the n dependence. How about the indirect flux? First of all, on slide 10, what do I mean by indirect? Well, I mean all those processes that occur on the surface or near the surface of the wafer during the deposition. So these indirect fluxes include surface diffusion. I could have an aluminum atom that started here, but it can diffuse along the surface and end up somewhere else. So that represents a flux to that point. Re-emission: species come down, but they don't always stick. They don't always stick where they land. They might actually go there and then be re-emitted. So that's an indirect flux called re-emission. Sputtering: incoming atoms can knock off-- you could have a film of aluminum already on the wafer, but incoming atoms could knock atoms off. And redeposition: sputtered atoms can be redeposited elsewhere. Here's an example. Aluminum comes down. It hits an aluminum atom that was already deposited on the surface. It's then desorbed or emitted and redeposited down here. So there's a number of processes. We call these indirect because they're not directly coming from the source.
They're coming from other regions on the surface, but they need to be taken into account in the topography. So let's go through each one of these surface processes, the diffusion, and the redeposition, and so on, and talk about how we might model them. If we go on to slide 11, to surface diffusion-- well, surface diffusion, and I'll refer you to your text, I'm not going to derive this equation. But it turns out diffusion along a surface is different from diffusion in the bulk. We spent a lot of time talking about diffusion in the bulk of single crystal silicon. Surface diffusion has slightly different driving forces. It's not always just driven by a gradient; it can actually be driven by the shape of the surface. Diffusion can be driven by the local curvature of the surface, if you're depositing a film, in order to minimize surface free energy. So we have a different type of equation from what you're used to seeing perhaps for diffusion. We see that if there's a flux into a point on the surface, minus the diffusion flux out-- so that's the net diffusion flux to a point-- it actually depends on a surface diffusion coefficient d sub s, a diffusivity. But it also depends on this curvature, this derivative, which has to do with the curvature of the surface. And what I would do-- I don't want to go through it in detail in class. But if you look at equation 11.37 in the associated text, in chapter 11, they talk a little bit more about surface diffusion and where this equation comes from. But the main point is, because the flux increases with curvature, during deposition the filling in of corners may be enhanced. And you can sometimes get smoother, more conformal deposition. Of course, you have to be at a high enough temperature that these species can diffuse. And depending on the material and the temperature, it may not be diffusing. You may have to raise the wafer temperature slightly if you want to get more surface diffusion. So you can think of it this way.
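To see qualitatively why curvature-driven surface diffusion smooths a profile, here is a hedged 1-D sketch. It uses the classic Mullins surface-diffusion form, dh/dt = -B d4h/dx4, which is simpler than the full surface equation that the text's equation 11.37 develops; the constant B, the grid, and the initial profile are all illustrative numbers of my own choosing.

```python
import math

def mullins_smooth(h, dx, B, t_end):
    """Explicit finite-difference sketch of Mullins-type surface diffusion,
    dh/dt = -B * d4h/dx4 (B lumps D_s, surface energy, and atomic volume).
    Periodic boundaries; dt is kept under the explicit stability limit."""
    n = len(h)
    dt = dx ** 4 / (10 * B)      # stability needs roughly dt <= dx^4 / (8 B)
    for _ in range(max(1, int(t_end / dt))):
        d4 = [h[(i - 2) % n] - 4 * h[(i - 1) % n] + 6 * h[i]
              - 4 * h[(i + 1) % n] + h[(i + 2) % n] for i in range(n)]
        h = [h[i] - B * dt * d4[i] / dx ** 4 for i in range(n)]
    return h

# A rippled profile relaxes toward flat: the high-curvature crests and
# valleys (think corners in a trench) are smoothed out first.
N = 64
h0 = [math.sin(8 * math.pi * i / N) for i in range(N)]   # 4 ripples, amplitude 1
h1 = mullins_smooth(h0, dx=1.0, B=1.0, t_end=100.0)
print(max(abs(v) for v in h1))   # amplitude has decayed well below the initial 1.0
```

Note that, unlike plain Fickian diffusion, the decay rate here scales as the fourth power of the spatial frequency, which is why sharp, high-curvature features smooth out fastest.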
If I raise the temperature of the wafer, I can smooth out and get a little more conformal deposition by virtue of surface diffusion processes. So that's something that can be modeled in topography simulators. Page 12, a little bit simpler-- this is very much a bookkeeping thing. There's something called surface sticking and emission and a sticking coefficient. So there's this flux at any given point i that's emitted, which arises because not all molecules stick when they arrive at the surface. And so in fact, we write the emitted flux from any point as being equal to 1 minus s sub c. So the fraction emitted is 1 minus s sub c, where s sub c is called a sticking coefficient, times the flux going in. So what is s sub c? Well, it's a very simple thing. It's the fraction that sticks, basically. So the sticking coefficient here, s sub c, is the flux that reacts and stays put divided by the total incident flux. So to give yourself an idea-- s sub c is a number between 0 and 1 really because it's a fraction-- if you have a high s sub c, like s sub c equals 1, basically you have line of sight deposition. Everything that comes down just stops and sticks on the surface. If you have a type of deposition with a low sticking coefficient, what happens is atoms come down. They may touch the surface, but they don't stick. They don't react. They're kind of inert, in which case they'll be re-emitted. They touch somewhere else, and then they don't react. And then finally they might stick at some point. So when you have a low sticking coefficient, much, much less than 1-- and sticking coefficients can be 0.001-- it means that things bounce around a lot before they actually stick on the surface. So you're going to have more conformal type deposition. So this depends a lot on the deposition system, on the temperature, on the chemistry. Again, we don't model all that. We don't say, oh, I'm putting a wafer into an LPCVD system.
The sticking coefficient must be 0.01 or 0.1. We don't know that for any given tool. You can put it in as a variable in the model and then you can compare that to what you get. You can compare the modeled deposition to the profile of what you get. And then you get an idea of what your sticking coefficient is. It's not a number you can calculate from a first principles point of view. But people have derived sticking coefficients for various types of deposition. OK, so slide 13-- so this is the formula we said. The flux emitted is just 1 minus the sticking coefficient-- those that don't stick-- times the flux incident at the point. We make some assumptions. Generally, we assume that ions stick. So if we have an ion coming down, we assume the sticking coefficient is 1. Why is that? Ions tend to be very reactive and they often come down with a certain amount of energy. So we usually say that they have a sticking coefficient of 1. Neutrals can have any value of 1 or less. And they're often assumed to be emitted with some kind of a cosine theta angle distribution. That is, they may have no memory of their arrival angle. So don't think of this, for example, in here-- don't think of this as a little trough like a pool that we've taken all the water out of, where you have a ping pong ball bouncing around with billiard ball kind of collisions. It's not supposed to be thought of like that. This atom is supposed to come down-- and atoms are coming down continuously, say at this point-- and then it's re-emitted with some angular distribution, but it's not necessarily a billiard ball kind of collision type of thing happening. So you need to put in a re-emission flux distribution. So at any point i we get a flux of redeposited material, because the emitted flux can land somewhere else on the surface. So we have, again-- this ends up being bookkeeping-- something we express as f super ik redeposited.
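The "bounces around a lot" picture above can be made quantitative with a tiny Monte Carlo sketch (my illustration, not from the text): if each arrival sticks with probability s sub c, the number of hops before sticking is geometrically distributed, with a mean of 1 / s sub c.

```python
import random

def hops_until_stick(s_c, rng):
    """Count arrival events until the atom sticks: each landing sticks with
    probability s_c; otherwise it is re-emitted and lands somewhere else."""
    n = 1
    while rng.random() > s_c:
        n += 1
    return n

rng = random.Random(0)   # fixed seed so the run is reproducible
for s_c in (1.0, 0.1, 0.01):
    mean = sum(hops_until_stick(s_c, rng) for _ in range(10_000)) / 10_000
    print(s_c, round(mean, 1))   # mean ~ 1 / s_c: a low s_c means many hops,
                                 # which is why coverage becomes conformal
```

So an s sub c of 0.01 means an atom samples on the order of a hundred surface sites before committing, which is the mechanism behind conformal coverage with low sticking coefficients.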
And what this is: you take the direct flux, f sub k, that's coming into a point k on the surface-- let's say it's right here-- and we're looking at point i. It's desorbed from point k and it's redeposited at point i. So this redeposited flux depends on some geometry factor, gik-- it has to do with the geometry between point i and point k-- times the flux that's emitted from this point k. Well, the emitted flux from point k just depends on the incoming flux to point k times 1 minus the sticking coefficient. So the nice thing is that for point i, I can take any point k and, if I know the geometry factor between the two, figure out how much is deposited at this point i as a result of material being emitted from point k. And then what you do is, at this point i, you take the redeposited fluxes from all points of the surface, since all points of the surface could be redepositing at this point. So for each point i you sum over all points k and you figure out what's deposited. So gik then accounts for the geometry between any points i and k. Because, obviously, if you're over here-- let's say my point k is over here, up at the surface, and it's re-emitting-- well, the chances of it coming back down over to this point are dramatically lower than if it's re-emitting from the walls, just based on geometry. OK, so let's take a look at some examples on slide 14. And hopefully this will become intuitive. But if you have a low sticking coefficient, s sub c much less than 1, you can end up with more conformal deposition or conformal coverage of the step because of the redeposition process. Redeposition with this low sticking coefficient is usually more important than surface diffusion, because most of the time in these processes the wafer temperature is low enough that the surface diffusion is not all that high, especially if you're doing PVD. But let's just take a look here at case A, where I have a high sticking coefficient.
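The redeposition bookkeeping described above-- at each point i, sum over all points k of a geometry factor times the emitted flux, which is 1 minus s sub c times the flux incident at k-- fits in a few lines. The geometry factors and fluxes below are made-up toy numbers; a real simulator like SPEEDIE computes g sub ik from the actual topography and line of sight.

```python
def redeposited_flux(i, g, f_in, s_c):
    """Redeposited flux at surface point i: sum over all other points k of
    a view/geometry factor g[i][k] times the flux emitted from k, where
    the emitted flux is (1 - s_c) times the flux incident at k."""
    return sum(g[i][k] * (1.0 - s_c) * f_in[k]
               for k in range(len(f_in)) if k != i)

# Toy surface of 3 points: point 2 (say, low on a sidewall) "sees" point 0
# well (g = 0.3) and point 1 poorly (g = 0.1), and gets little direct flux.
g = [[0.0, 0.2, 0.3],
     [0.2, 0.0, 0.1],
     [0.3, 0.1, 0.0]]
f_in = [1.0, 1.0, 0.2]
print(redeposited_flux(2, g, f_in, s_c=0.1))   # the shadowed point still
                                               # receives flux by redeposition
```

The point of the toy numbers is the structure: a shadowed location with almost no direct flux can still accumulate material because every other illuminated point feeds it through its geometry factor.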
So the sticking coefficient is 1. Everything comes down and sticks. So what do I get if I'm depositing in this trench? Well, I get very non-conformal deposition. I get a film deposited at the bottom of the trench. There'd probably be a film up here too. It should have been shown. But we're only looking at what's happening in the trench. But nothing on the side walls. Why? Because I really don't have any opportunity for redeposition. Whereas with the exact same angle of arrival distribution, if I have a low sticking coefficient, things can move around a lot and hit the side walls. And then you can end up with more conformal deposition. And again, you would use this type of equation in the computer to be able to model that. You just vary the sticking coefficient. And you need to calculate the geometry factor as part of that. So a low sticking coefficient is good if you want to coat a trench. Now, that may not be what you want. You may want to just deposit at the bottom. But again, it depends on the process that you're trying to do. OK, so that has to do with the sticking coefficient. That's an important concept in basic surface physics. And we talked about surface diffusion. The third concept is sputtering-- what happens on the surface. And that we talk about here on slide 15. There's a sputtered flux at any point i that is caused by energetic incoming ions, usually. Why do we care about ions? Well, the neutrals usually come down with low enough energy. They do a little bit of sputtering but not much. But an incoming ion can come in and it might have a couple hundred electron volts of energy. With enough energy, it can then hit something at the surface and cause it to be sputtered or emitted. So what we typically say is, at any point i the sputtered flux depends on the sputtering yield, y. y is just the number sputtered divided by the number incident.
So the sputtered flux depends on y times the flux of incident ions-- say argon ions. The total ionic flux is what we add up. And this plot tells you how the sputtering yield varies with angle. So you might be curious to ask, all right, how does the sputtering yield depend on the angle? Because that's going to tell you, if you have a trench with a certain angle and the ions are coming in, how the sputtering yield is going to vary. Well, this is a typical plot of yield versus angle. So again, a yield of 1 means it's 1 to 1. For every one that's incident on that point, one comes off. And this is all measured with respect to the surface normal. So theta equals 0 means normal sputtering-- it's coming straight down. At theta equals 0 for this particular example, the sputtering yield is one. And look what happens. As you increase theta-- so you're coming in a little more grazing-- it peaks somewhere here around 50 or 60, 65 degrees. It gets up to about 2, 2 and 1/2, and then eventually at very glancing incidence there's really no sputtering at all. It really drops dramatically. So this is angle-sensitive, and you can use it to achieve more planar surfaces during a deposition. In fact, we'll show some examples where we use ionized species during deposition to give you a little better planarity. So this angular dependence is something that we want to keep in mind. OK, and now on slide 16-- so what happens? Let's say you do sputter material off; then you have redeposition of the sputtered molecules. How do you model that? The exact same way you modeled re-emission when material was just coming off due to a low sticking coefficient. You just say the redeposited flux at point i due to whatever is sputtered off from point k is just a geometry factor gik-- which you end up summing over the surface-- times the sputter flux from point k.
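The yield-versus-angle behavior just described can be mimicked with a standard empirical form. This sketch uses a Yamamura-style angular fit-- my choice of model and parameters, not numbers from the lecture-- tuned so the normalized yield peaks near 60 degrees at roughly 2.5x and collapses at glancing incidence, like the plot on slide 15.

```python
import math

def sputter_yield(theta_deg, f=5.0, sigma=2.5):
    """Yamamura-style angular dependence, normalized so Y(0) = 1:
        Y(theta) = cos(theta)^-f * exp(-sigma * (1/cos(theta) - 1))
    With f = 5 and sigma = 2.5 (illustrative fit parameters), the maximum
    sits where sec(theta) = f / sigma = 2, i.e. theta = 60 degrees."""
    c = math.cos(math.radians(theta_deg))
    return c ** (-f) * math.exp(-sigma * (1.0 / c - 1.0))

for t in (0, 30, 60, 80, 89):
    print(t, round(sputter_yield(t), 3))
# normal incidence gives 1, the peak near 60 degrees is ~2.6, and the yield
# drops to essentially zero at glancing incidence
```

This is why sloped surfaces (near the 60 degree peak) erode preferentially while flat tops and vertical walls (near 0 and 90 degrees) survive, a fact the planarization discussion later relies on.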
And that sputter flux from point k depends on the incident flux of ions times the sputtering yield y-- so that's how the sputtering yield comes into it. And finally, if you want to substitute in the sticking coefficients, what you say is that the redeposited flux at point i depends on the flux emitted from k, which depends on 1 minus the sticking coefficient. So this redeposition due to emission is very, very similar to the redeposition due to sputtering. And there's a final term in a lot of these. And this is the biggest fudge factor, I would say, in the whole model, which is that ions coming down on the surface can sometimes enhance the deposition rate in a way that's not completely obvious. The ions may come down. They may supply enough energy to drive a chemical reaction that might take place at the surface. So we can have a deposition rate that is called ion enhanced or ion induced chemical reaction. It's kind of a fudge factor. We have an additional flux and a term with a k sub i, where k sub i is a fitting parameter, times the flux of ions coming in at any given point. So you can have this extra ion induced deposition rate. Obviously, that's only important in systems with an ionized type of deposition. OK, so those are all the different processes at the surface and the direct flux. And now, starting on slide 17, I want to give a couple of examples, because it seems like there's a lot of different equations. But it turns out a lot of them simplify for most systems. So let's do an example of low pressure CVD. Remember, LPCVD is probably the most common deposition process in a fab. It typically takes place in a furnace that's pumped down to low pressure, say 10 to 100 millitorr. The wafers are usually stacked in a batch. People do 25 to 50 wafers at a time. It's in a resistively heated furnace. There are no ions. This is not a plasma system. So we can get rid of all ionized processes.
So we can ignore sputtering because, again, we're going to say that only in the presence of ions do you have enough energy to sputter something off. So we get rid of the sputtering. Usually, long range surface diffusion is not that important. So we can also ignore the surface diffusion. So if in my bookkeeping I have all these different fluxes, most of them are not present. There are only a few that matter at any point i: the emitted flux-- yes, because we have a finite sticking coefficient-- the redeposited flux-- yes, we need to take that into account-- and the neutral direct flux. So really in this example I only have three fluxes I need to worry about. All the rest of these we don't have to take into account in the simulation. So then we can write down those fluxes pretty simply, and the net flux, here on slide 18. So at any point i on the surface, f super i, the net flux, is again the flux coming in minus the flux going out. So it's the direct flux of neutral species, plus the redeposited flux that comes from other areas of the surface, minus the flux that goes out. Well, those that don't stick-- that's 1 minus the sticking coefficient times the direct flux of neutrals and the redeposited flux. OK? So these are the ones that come in, and these are the ones that go out. And there's a negative sign in between the two. So you can rearrange this a little bit and you can write it then as shown here: the net flux at any point i is the sticking coefficient s sub c times the direct flux of neutrals at that point, plus a geometric term, this gik, times 1 minus s sub c-- those that don't stick-- times the flux into some other point on the surface k. And then we just need to sum this up over all points k on the surface. Now, what you can do to simplify this a little bit: you can define f sub d, the deposition flux at each point, to be equal to the direct flux of neutrals plus the redeposited flux if you want to do that.
And then you can write the deposition rate more simply: the dep rate is the sticking coefficient times the flux at any given point i divided by n, the film density. So we can then calculate the deposition rate at any point i on the surface. And we use a cosine theta to the n distribution for the incoming molecules. So it's a relatively simple formula that can be used for LPCVD. And in fact, on slide 19, I just wanted to go through an example. This is taken directly from your text. What we're asked to do is calculate the dep rate using this LPCVD expression that we just derived-- sticking coefficient times a flux divided by density-- for silicon dioxide. So this is low temperature oxide. And we have a flat surface. So this is nice. There's no topography. We are told that the sticking coefficient is 0.3 and that the maximum unobstructed flux, so that's the maximum flux that's coming straight down, is equal to 3 times 10 to the 15th molecules per square centimeter per second. So that's a molecular flux. And we're given the density of the film that's deposited, 2.27 grams per cubic centimeter. So this is a relatively simple calculation. We want to get the deposition rate on a flat surface; therefore there is no shadowing effect. There's no redeposited flux because we're not coming off of a side wall of a trench. And each surface position is in a horizontal orientation. So we're only concerned about the direct flux. So f sub d, the deposited flux, is just equal everywhere to the maximum value, which is this 3 times 10 to the 15th molecules per square centimeter per second. And you can look up this equation 9.53. In fact, that's exactly what I was just showing down here for the dep rate. So we have that equation. The rate is just the sticking coefficient times the flux divided by n. We need to do a little bit of conversion here to get ourselves from grams per cubic centimeter into molecules.
The way we do that is we take the density of SiO2, 2.27 grams per cubic centimeter, and we multiply that by Avogadro's number divided by the number of grams per mole of SiO2. So we need to do a little conversion to convert this density in grams per cubic centimeter into a number of molecules per cubic centimeter for SiO2. So you get 2 times 10 to the 22 molecules of SiO2 per cubic centimeter. And then you just use the simple formula. The rate is just the sticking coefficient times the maximum flux over n. You plug in the numbers you were given. And you get something like 0.023 microns per minute. But now, if you had a deposition on a non-flat surface, if you had a trench or something like that, on the side wall, for example, the local dep rate would be less. So this gives you a rough idea of the maximum. We'll talk later about SPEEDIE simulations for more complex topographies. But it gives you a rough idea of how you can model this simple type of process. OK, let's go on to slide 20. And we'll give some more examples. Now we're going to do PVD. So we've already talked about physical vapor deposition systems. We said there could be DC or RF sputtering or evaporation. Typically, ions do not play a significant role in these kinds of PVD systems most of the time. So the modeling of simple PVD is very much analogous to LPCVD. The parameters are very different though. So here's an example of a simple DC sputtering system, which we talked about a couple of lectures ago. And these are the processes that go on. So just like LPCVD, we can say the rate at any point on the surface is the sticking coefficient times the total deposited flux divided by the density. However, the key difference is that the sticking coefficient and the arrival angle distribution, cosine theta to the n, will be very different between LPCVD and PVD systems. And that's where the real differences tend to come about.
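The slide 19 arithmetic above is easy to check in a few lines. The only number not stated in the example is the molecular weight of SiO2, about 60.08 grams per mole, which is a standard value assumed here.

```python
# rate = s_c * F / N, with the film density converted to molecules/cm^3
N_A = 6.022e23        # Avogadro's number, molecules/mol
MW_SIO2 = 60.08       # g/mol for SiO2 (standard value, assumed here)

s_c = 0.3             # sticking coefficient, as given
F = 3e15              # peak unobstructed flux, molecules/(cm^2 s), as given
rho = 2.27            # film density, g/cm^3, as given

N = rho * N_A / MW_SIO2              # ~2.3e22 molecules/cm^3
rate_cm_per_s = s_c * F / N
rate_um_per_min = rate_cm_per_s * 1e4 * 60
print(round(rate_um_per_min, 3))     # ~0.024 um/min, consistent with the
                                     # ~0.023 quoted in the lecture
```

The small spread between 0.023 and 0.024 microns per minute just reflects how much you round the molecular density (2 versus 2.3 times 10 to the 22).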
There is one other type of system for PVD, though, where you do have to pay attention to somewhat more complex processes than you would in LPCVD. And that's shown on slide 21, when you have ionized PVD. We did talk about the case where we have systems that are more complex. This is because ions and neutrals both play a role. Here's an example of an ionized PVD system. So you have sputtering of aluminum coming off this target. But then, in addition to that, you have a coil that goes around that ionizes the aluminum as it comes off. Now you have a large density of ions that are being created. And then these can be accelerated towards the substrate. So this is not that uncommon for metal deposition. So you have aluminum or maybe titanium ions that'll be present. Therefore, when you have ions, you do have to take into account sputtering and other processes that will not be present in LPCVD or the simpler PVD systems. So ionized PVD, inductively coupled ionized PVD, is a little bit more complicated to model-- just more bookkeeping in the equations. Here's an example, in fact, on slide 22, if we do ionized PVD, of all the different fluxes that really do need to be taken into account. And you see we have yes on all of them: the direct neutrals, the direct ions. The only one we're not taking into account is surface diffusion, because we're assuming the wafer temperature is low enough that we can ignore it. But all the other fluxes would need to be taken into account. And if you do that, shown here on slide 22, this is an example of the possible terms that could be included in your computer model. So you would say that the rate at any point i depends on the sticking coefficient times the direct maximum deposited flux, where fd includes the direct and the redeposited neutral fluxes. Plus f sub i, which includes the direct, redeposited, and ion induced fluxes. And again, f sub i doesn't have a sticking coefficient associated with it.
We say for ions, we include all of them-- we say they just come down and stick. And then you have a minus for those that are sputtered off. You have a sputter term plus a redeposited term. So this frd models the redeposition due to sputtering. So you have extra terms here, divided by the density n. And so you have positive terms and negative terms, some of which, depending on the point i on the surface and the angles involved, may cancel each other out. OK, so those are some basic bookkeeping models. I want to actually give some examples of the values of these parameters that are useful for specific systems. And so far, we've talked about the two parameters that are most important: the n value, which is the exponent in the cosine theta to the n arrival angle distribution, and the sticking coefficient s sub c. Those are the two most critical in modeling simple systems. And there are a couple of different model systems shown here. On the top we're talking about sputter deposition, either standard or ionized, and evaporation. So these are PVD systems. On the bottom, we have the CVD systems. So let's look at PVD. Well, physical vapor deposition tends to have a much more vertical arrival angle distribution. So the n value is big, right? Almost everything in PVD is coming straight down. Certainly in evaporation you have very low pressure, so you have line of sight. Even in sputtering you tend to have a very anisotropic distribution. So for sputter deposition, depending on conditions, you can have n up to 5 maybe. If you're ionized you can get up into the range of 50 or so. But in any case, n is usually greater than 1 for PVD. In contrast, CVD systems tend to have a pretty much isotropic arrival angle. PVD, again, takes place at low pressure-- you're sputtering at 10 to the minus 5 torr, or 1 times 10 to the minus 3 torr, or something like that. CVD is usually much higher, hundreds of millitorr, 0.1 torr or something like that. So you usually have higher pressure.
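As a toy version of that slide 22 bookkeeping-- positive deposition terms and a negative sputter term-- the pieces might be combined like this. All the flux numbers are illustrative placeholders of my own, and the exact grouping of terms in the real SPEEDIE expression may differ.

```python
def ionized_pvd_rate(s_c, f_d, f_i, Y, f_redep_sputter, N):
    """Net growth-rate sketch: neutrals (flux f_d, direct plus redeposited)
    stick with probability s_c; ions (flux f_i) are assumed to stick with
    probability 1; Y * f_i is sputtered away; f_redep_sputter is the part
    of the sputtered material that lands back on this point. N is the film
    density in molecules/cm^3, so the result is in cm/s."""
    return (s_c * f_d + f_i - Y * f_i + f_redep_sputter) / N

rate = ionized_pvd_rate(s_c=0.9, f_d=1e16, f_i=1e15, Y=0.5,
                        f_redep_sputter=1e14, N=6e22)
print(rate)   # net rate in cm/s; the sputter term partly offsets deposition
```

The sign structure is the point: at a sloped location where the angular yield Y is large, the negative Y * f_i term can dominate and the net rate can even go negative, which is exactly the cancellation mentioned above.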
You have a lot of gas phase collisions that randomize things, and mostly neutrals. Usually, in CVD, you don't have many ions. So n is usually 1 in CVD. So the cosine theta to the n is usually fairly isotropic. What about PVD? Well, the sticking coefficient in PVD tends to be close to 1. There really isn't much surface chemistry involved. The aluminum or whatever it is that's being evaporated or sputtered usually comes down and pretty much sticks. There can be some re-emission, but most of the time the atoms arrive and stick. Very, very different, though, in the case of LPCVD. Look at LPCVD: the sticking coefficients depend on what chemistry you're using. If you're using silane as a silicon source, you might have a sticking coefficient of 0.5 or 0.3. For TEOS-- we mentioned there are other ways of doing low temperature oxide, not just with silane; there are other silicon sources-- the sticking coefficient can be much lower. For LPCVD tungsten or polysilicon you have much lower sticking coefficients, so very different from PVD. That same chart is reproduced here again on slide 23. It's the exact same chart. I just wanted to reproduce it and talk a little bit more about the CVD side. Because these have surface chemistry, the sticking coefficient is much, much less than 1, and things often evaporate before they react. What does this mean? Well, just by looking at these numbers, you know that CVD systems with their small sticking coefficients are going to tend to have more conformal deposition. If you want to cover a step and you want to cover it very conformally, what does that mean? That means the thickness along the flat part of the wafer is exactly equal to the thickness of the film along the trench wall, which is exactly equal to the thickness at the bottom. It's just completely conformal. In that case, you probably want to use CVD. Now, that may not be what you want, but this is what we mean by conformal deposition: that equal thickness.
And CVD tends to give you more conformality just because of this sticking coefficient argument. OK, now I want to show some actual examples of simulations to convince you of some of these things that we've been talking about. And we're going to vary the parameters n and s sub c in the model. So on slide 25, these are topography simulations that are taken from the simulator that I mentioned, called SPEEDIE. Again, that's a topography simulator. You cannot simulate this type of topography in SUPREM-IV. SUPREM-IV doesn't do topography. So this is an example where you're doing a deposition using LPCVD of SiO2, and you originally have a trench that you have etched that looks like this. And each contour line-- it's a little bit hard to see, but each contour line represents the deposited topography at a different time interval. So each one might be after a 1 minute interval. So you get an idea of how the topography develops over time. In this particular example, we did a SPEEDIE simulation with a sticking coefficient of 1. Actually, a sticking coefficient of 1 is probably more typical of PVD. So this really should be PVD in your mind. And all we did mathematically is vary the arrival angle distribution, the n value. So this is for n equals 1, that's isotropic. Here's for n equals 3, somewhat anisotropic, and n equals 10. So let's look at this. Even for an isotropic arrival distribution, because of shadowing effects and things, conformal coverage is not achieved with a sticking coefficient of 1. What do I mean by conformal coverage is not achieved? Well, the thickness of the film here at the surface on the flat portions is not equal to the thickness along the side wall. The side wall is thinner, even for n equals 1. And so what this implies is that if you have a sticking coefficient s sub c equal to 1, the geometry and line of sight issues are going to be very important. So for example, for PVD, you really need to take into account the geometry of what you're depositing into.
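That interplay between line of sight geometry and the n value can be sketched numerically (my illustration, not SPEEDIE): a point at the bottom center of a trench only sees arrivals inside an unshadowed angular window, so the fraction of the cosine theta to the n flux it receives is an integral over that window. The trench geometry below is an illustrative 2D example, a trench three times wider than it is deep.

```python
import math

def cos_n_fraction(n, half_angle, steps=20_000):
    """Fraction of a cos^n(theta) arrival distribution lying within
    +/- half_angle of the surface normal. A point at the bottom center of
    a trench is shadowed outside this window (2D line-integral sketch;
    a real simulator integrates over solid angle and the full profile)."""
    def integral(a, b):
        h = (b - a) / steps
        return sum(math.cos(a + (i + 0.5) * h) ** n for i in range(steps)) * h
    return integral(-half_angle, half_angle) / integral(-math.pi / 2, math.pi / 2)

# Bottom center of a trench 3x wider than it is deep: the open window
# half-angle is atan(1.5), about 56 degrees from the normal.
alpha = math.atan(1.5)
print(cos_n_fraction(1, alpha))    # isotropic: a chunk of the flux is shadowed
print(cos_n_fraction(10, alpha))   # forward-peaked: almost all flux still lands
```

Note the direction of the effect: a larger n actually helps the trench *bottom* (the flux is aimed straight down), while it starves the *sidewalls*, which see the distribution nearly edge-on.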
Look at this one for n equals 10. You get almost no deposition on the side wall. You get quite a bit on the bottom. And notice this is for an aspect ratio that's pretty gentle, not that aggressive. This aspect ratio of 0.3-- what does that mean? It means the trench is 3 times wider than it is tall. So it's only 1 micron tall, but it's 3 microns wide. And you're already seeing these nonconformality types of issues. So this is what you'd expect from PVD with these parameters. Just to remind you, on page 26, of what we're talking about: for n equals 1 the flux as a function of angle looks like this. n equals 3 cuts out after about 60 degrees. And n equals 15 cuts out after about 30 degrees. So this is close to n equals 15. So once you get more than about 30 degrees from the normal to this surface, there is not a whole lot of flux. And that's the problem. That's why, with a sticking coefficient of 1, you're getting such low deposition along those vertical walls. Let's make it even a little more difficult. We're going to take the same parameters that we just saw-- sticking coefficient equals 1, here on slide 27-- but now I've increased the aspect ratio from 0.3 up to 1.3 or 1.25. So now the trench is 1.25 times as tall as it is wide. So it's much taller than it is wide, just the opposite of what I had before. Now, what you do see again is less deposition on the side walls. A here on the left is for isotropic arrival, n equals 1. And B is for anisotropic, n equals 10. So let's look at the n equals 1. Well, what's happening? You see that for an isotropic arrival angle, you're not getting much material down at the bottom of that narrow trench.
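Quantifying that slide 26 remark about where each distribution "cuts out" takes one loop: here is the relative flux, cosine theta to the n, at a few arrival angles, normalized to 1 at normal incidence.

```python
import math

# Relative flux cos(theta)^n versus arrival angle (degrees from the normal).
# n = 3 is down to 12.5% by 60 degrees; n = 15 is down to ~12% by 30 degrees,
# matching the "cuts out after about 60 / about 30 degrees" description.
for n in (1, 3, 15):
    row = [round(math.cos(math.radians(t)) ** n, 3) for t in (0, 30, 60, 85)]
    print(f"n={n:2d}", row)
```

A near-vertical sidewall sees arrivals at close to 90 degrees from its own normal, which is why the n = 10 to 15 distributions leave it almost bare.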
Because, again, it's reasonably isotropic arrival, you're getting material deposited along here, and you're sort of developing these lobes up at the top, which, if you had a narrow enough trench, you can imagine might pinch off so you end up with a big void. If you're more anisotropic, you're coming down straight. So you do get quite a bit at the bottom of the trench. You get quite a bit on the flats. But you get nothing or very little on the side walls. So as you go to narrower and narrower trenches, you can imagine you might end up not being able to fill them at all. You might end up with voiding if you have a sticking coefficient of 1-- that's really what this boils down to for high aspect ratio trenches. So if you want to avoid that-- let's say the purpose of your deposition is to fill this trench. You haven't done a very good job in this case. And you're not going to do a good job in this case at all. You're going to pinch it off. What do we need to do? Well, we need to lower the sticking coefficient. We need to have not such a line of sight distribution. We need for atoms to come down, come in here, and not stick, and move around so they can eventually cover these side walls. And that's exactly what is shown on slide 28 in the SPEEDIE simulations. Here's an example where I've got that same trench, that same high aspect ratio, 1.25. But on the left here, A, I have a sticking coefficient of 0.1. So only 1 in 10 atoms sticks. The other 9, when they hit a point, they go somewhere else and they continue to bounce around until they finally stick. Here's 1 in 100 sticking. So, again, this is with n equals 1. So it's isotropic arrival, but with different values of the sticking coefficient. So the sticking coefficient is the knob you want to turn if you want to fill a trench. Changing the n value helped, but not a whole lot. It was really lowering the sticking coefficient. You get much more conformal coverage. So you get on the sidewall just about what you had.
So the parameter values in A-- n equals 1 and a sticking coefficient of 0.1-- are very typical of LTO, low temperature oxide, or CVD SiO2. B, with a sticking coefficient of 1 in 100, 0.01, is more typical of tungsten CVD. So we need to reduce the sticking coefficient if we need more conformal deposition. How about the sidewall angle? So if I even go back here and look at this, I didn't do a perfect job in this case of filling this trench. It looks like it's going to pinch off and form a void before I get the whole thing filled. Even here it doesn't necessarily look all that perfect. So what can we do? You only have so much latitude with your sticking coefficient. You can change the pressure and do things in your reactor. But you only have so much latitude. Well, what you do is you change the topography that you're starting with. So these examples are results of SPEEDIE simulations where you're doing LPCVD into a trench. And we've changed the sidewall angle. Here, the trench has a 90 degree angle, so the side walls go straight down. Here, it's opened up slightly by 5 degrees. So this is an 85 degree angle. And here you open up a little bit more, an 80 degree angle. This has a sticking coefficient of 0.2 and an n of 1. So this is very typical of low temperature oxide deposition, for example, into a trench. So if you decrease the angle from 90 to 80 degrees, you greatly improve the filling. See, at 90 degrees you're going to pinch off and you're going to get a void. At 80 degrees you're not going to. So it's very common in etching, when you go to etch a contact hole or a via, that you never want to etch it straight down. You always want to have a slight angle-- say make it 80 degrees-- to slope this so that you can get much better filling of the hole. Or if you want metal lines to go over a step on your wafer, you never want to make the step look like this. You usually want to have it more sloped.
So you somehow slope the step when you etch it, or you put in side wall spacers, or do something. So the general idea, if you want good step coverage, don't make such abrupt steps. If you're doing vias, you can just decrease your angle from 90 to 80 and you get much better filling of the trench. OK, how about a little bit more complicated cases? I've just shown very simple cases, PVD without ions or LPCVD without ions. How about the case where you do have an ion flux? And there are a couple of cases like that. There is what's called HDP, high density plasma CVD. This has a very high ion flux, could be three or four orders of magnitude higher than what you have in ordinary CVD. So how does this work? Well, I think we mentioned this at one point. This particular one has a microwave source. What we have is we have a plasma up here. And there's a microwave supply that's creating those ions. But then it's created remotely. So this ionized plasma has a very high density, but it's created remotely, not directly over the wafer. And then you have an RF bias and you extract those ions towards the wafer. So the nice thing about this is that you can get a very high density without having huge energies, huge voltages, to extract them to the wafer so you don't get so much damage. And this high density plasma can give you a very high CVD dep rate, even down to room temperature with relatively low pressures. So the key is you have a separate RF bias applied to the substrate. That controls the angular dependence of what comes down and of ion sputtering during deposition. So we have a direct ion flux coming down like this on the surface. And we have ions that come down and sputter off. And now here's where our resputter flux becomes important for these high density plasma systems.
And the thing that we need to realize is because of the angular dependence of sputtering, sputtering occurs preferentially on a sloped surface rather than a surface that's either vertical or horizontal. Because, remember, that sputtering rate sort of peaked at around 60 degrees. So if you have a sloped surface, you get preferential sputtering. So you can actually use these high density plasma systems to do planarization. Now, all along I've been talking about conformal deposition. You may not want to get conformal deposition. You may want planarized deposition. So what do I mean by a planarized deposition? Well, if you start out-- let's say your surface topography looks like this. You have a metal line or something and you want to cover it with oxide, but you want it to be planar when you're done. You'd like the surface to look like that after deposition, completely flat. That's just the opposite of conformal. If it were conformal, right, it would have looked like this. It would reproduce that big hump. But if you're trying to do planarization during deposition without using CMP, you'd love a process that would deposit like this. Well, how can you do that? Well, there are some processes that are self planarizing during deposition. And here's an example of such a process. You need these ionized techniques, which in addition to depositing, are doing sputtering preferentially. So here's an example on slide 31 of two SPEEDIE simulations. On the left, I have low pressure CVD, regular low pressure CVD. So I have a line like this and I'm trying to cover it with oxide. And you see you started out with a bump and you end up with a bump. There is no planarization. It's perfectly conformal. In B, we've simulated this high density plasma CVD deposition, and it has a directed ion flux. And the key is it has this angle dependent sputtering, with which you end up with a little bump, but much more planarized topography after deposition.
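The "yield peaks near 60 degrees" point is the whole mechanism, so here is a toy numerical illustration (my own sketch, not the SPEEDIE model): a Yamamura-style angular yield curve tuned to peak near 60 degrees, combined with a vertically directed flux, makes the net growth rate smallest on sloped facets, which is exactly why corners get beveled and the film planarizes.

```python
import math

def yield_rel(theta):
    # Toy Yamamura-style angular sputter yield, normalized so Y(0) = 1.
    # The constants (exponent 2, exponential factor 1) are illustrative,
    # chosen only so the curve peaks near 60 degrees and falls off again
    # toward grazing incidence.
    c = math.cos(theta)
    return c ** -2 * math.exp(-(1.0 / c - 1.0))

def net_rate(theta, dep=1.0, sputter=0.5):
    # Net growth on a facet tilted theta from horizontal under a vertically
    # directed flux: both the deposition and the intercepted ion flux scale
    # as cos(theta), but the removal term is weighted by the angular yield.
    c = math.cos(theta)
    return c * (dep - sputter * yield_rel(theta))
```

With these illustrative numbers the 60-degree facet grows at roughly a quarter of the rate of a flat surface, so any overhang or corner that develops a slope near 60 degrees is preferentially eroded during deposition.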
So if you had to do CMP to planarize, on this one you'd have to CMP quite a bit. On this one, you'd probably have to CMP only a little bit, just a small amount. And that's very desirable because CMP is a tricky process. It introduces non-uniformity and it's fairly expensive. So if you can do self planarizing deposition with these fancy ionized deposition systems, you're much better off. And I'll actually show some real examples of that. So besides depositing over a line, you can also use this high density plasma CVD. It's very good at filling open spaces and reducing void formation. So here's an example here, again, a SPEEDIE simulation on the left. I had a trench with a relatively high aspect ratio. This was that 1.25 aspect ratio. By LPCVD, you see it pinches off and you end up with a void. And in order to do this, we used a sticking coefficient of 0.2 and an isotropic arrival angle. On the one on the left-- on the right, I'm sorry-- is high density plasma deposition in the trench. And you get much better filling. You don't see the void. Well, why does it do this? Well, high density plasma gives you better filling because you combine this highly directed ion flux-- so a lot of the ions are coming straight down, and that helps, they get into the trench-- with this angle dependent sputtering at the surface. So the angular dependent sputtering tends to etch off any of these overhangs. As they form they get etched off by the sputtering preferentially. So overhangs that develop get sputtered away, and you end up being able to fill the trench pretty well without a void. Often, you do not want voids in the film. You don't want air spaces in your chips. So high density plasma is a way to do that. Here's an example on slide 33 just to show you-- instead of just showing SPEEDIE simulations, actual data. These are scanning electron micrograph images of HDP oxide deposition. And on the left, this was a metal line you can see.
And we were trying to deposit oxide over that. And again, if it had been conformal, you'd have a big hump in your oxide. But with high density plasma CVD, which is not conformal, you end up with pretty planar oxide. You still have a little bit of a bump there, a very small little bump, that you would have to planarize away. But the amount of CMP you'd have to do on that would be very minimal. And again, so that's the case of depositing over a line. If you're depositing oxide in a trench, so here's an example of a trench. You have two metal lines, one on the right, one on the left. They're separated by half a micron or so. You want to completely fill this without voids. You can see you've gotten excellent filling. You did get some of these strange, you know, topographies because of the ion effects. But that would have to be CMPed off. But the key is you were able to fill a very small space without inducing voids. And that would be a problem if you're trying to use, say, an LPCVD system. So you can imagine looking at these topographies, simulating, and then going back in SPEEDIE and varying your models to try to get something that looks like what you actually got from SEM. And that's exactly what people do in topography simulation. Again, it's not first principles. You vary all these different parameters. The sputtering yield is a function of angle, the n value, which is the cosine theta to the n distribution, the sticking coefficient. You vary all those until you get something in your simulation that looks somewhat close to what you actually deposited. Slide 34 shows the case-- we haven't had much chance to talk about it, but this whole idea of surface diffusion. And this becomes important usually only when you're doing high temperature PVD, say for aluminum. That's a practical example that people sometimes do. Again, all of these issues tend to arise when you have high aspect ratios on the wafer.
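That calibrate-against-the-SEM loop can be caricatured in a few lines. Here I reuse the simple line-of-sight flux estimate as the forward model and grid-search the arrival exponent n against a hypothetical measured bottom coverage (the 0.36 number and the model itself are illustrative, not a real SPEEDIE calibration):

```python
import math

def model_bottom_coverage(aspect_ratio, n):
    # Toy forward model: relative line-of-sight flux at the bottom center
    # for a cos^n(theta) arrival distribution and a sticking coefficient of 1.
    theta_max = math.atan2(1.0, 2.0 * aspect_ratio)
    return 1.0 - math.cos(theta_max) ** (n + 2)

def fit_n(measured, aspect_ratio, n_grid):
    # Pick the exponent whose prediction best matches the measurement.
    return min(n_grid,
               key=lambda n: abs(model_bottom_coverage(aspect_ratio, n) - measured))

# Hypothetical SEM reading: bottom coverage ~0.36 in an AR = 1.25 trench.
best_n = fit_n(0.36, 1.25, range(1, 11))  # -> 4
```

Real calibrations search over the sticking coefficient and the angular yield curve as well, against the whole measured profile rather than one number, but the flavor is the same: tune the knobs until the simulation reproduces the micrograph.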
So you have a very high aspect ratio trench like this and you're trying to fill that with aluminum. And so on A on the left is a high aspect ratio trench using PVD. So we have a sticking coefficient of 1, n equals 4. So this could be a sputtered situation. And you have very poor filling. In fact, you don't have much aluminum at all on the side walls; you weren't able to fill the trench well at all. In B, now this is high temperature PVD, where we've raised the surface of the wafer up to about 400 degrees-- much better filling, not complete, but much better. And finally, if you go to a deposition temperature of 550-- and you can't go too hot; aluminum melts at about 660, you'd be in trouble-- but at 550, you get a lot of surface diffusion. And you get a lot of reflow of the aluminum, essentially, and you completely fill the trench, so you get much better, much smoother topography. So that's an example. When you get to very high aspect ratios, you cannot rely just simply on sputtering. And so you need to add something to the system. You add the surface diffusion component by raising the wafer temperature. So then you need a special system where the sputtering system is designed so it can heat the wafer up to reasonably high temperatures at which the species will surface diffuse. OK, so on slide 35 I just want to summarize. As I mentioned, this is the last lecture on chapter 9. We talked about important issues for thin film deposition. We said there are two different properties, the physical properties and the chemical properties of the film. The coverage of topography, for example, this conformal deposition and step coverage we said were very important. There are two main techniques you need to keep in mind, chemical vapor and physical vapor deposition. If you have simple models, the dep rate is limited by either the surface reaction rate-- that's usually at low temperatures-- or the mass transfer through the gas phase boundary layer.
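Why does raising the wafer from 400 to 550 help so much? Surface diffusion is thermally activated, so the distance an adatom wanders before being buried, roughly sqrt(Ds * t), grows steeply with temperature. A sketch with purely illustrative Arrhenius numbers (not measured aluminum values):

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def surface_diffusion_length_cm(t_seconds, temp_c, d0_cm2_s=1e-3, ea_ev=1.0):
    # L ~ sqrt(Ds * t) with an Arrhenius surface diffusivity.
    # d0_cm2_s and ea_ev are placeholders chosen only to show the trend.
    ds = d0_cm2_s * math.exp(-ea_ev / (K_B * (temp_c + 273.15)))
    return math.sqrt(ds * t_seconds)

# One minute of deposition at three substrate temperatures:
l_room = surface_diffusion_length_cm(60.0, 25.0)
l_400 = surface_diffusion_length_cm(60.0, 400.0)
l_550 = surface_diffusion_length_cm(60.0, 550.0)
```

With these placeholder numbers the diffusion length goes from essentially nothing at room temperature to micron scale at 550 C, which is the qualitative reason reflow can fill a trench that pure line-of-sight sputtering cannot.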
That's usually at high temperatures. But you do have to take into account shadowing and surface features to model the actual coverage. For PVD, these processes, again, their names-- it's very physical rather than chemical. The arrival angle distribution at the wafer surface is important, that cosine theta to the n term. In a lot of these techniques, the species arrive more vertical to the wafer surface, and the sticking coefficient tends to be one for PVD. So PVD has a lot of shadowing issues that you need to keep in mind. We model this by having a sticking coefficient s sub c. And if you have a low sticking coefficient, say much less than 1-- 0.01 or 0.001, something like that-- and you have an isotropic arrival distribution, you'll get much better filling of holes. But this filling only works to a certain extent. So there are new techniques. The latest techniques use these high density plasma and these ionized sputtering techniques, which have sputtering going on at the same time as deposition. And this can be used to get good gap filling and planarization. So these are the most modern techniques people use today. There's a lot of things we didn't get to talk about in chapter 9 in this course. 6.774 is about front end processing, but we give the lion's share of the time to oxidation, diffusion, and ion implantation. Most of the lectures are on that. If you really want to know a lot more, if your thesis research is going to be on thin films, for example, we didn't cover a lot of things. We didn't talk at all about the grain structure, the density, the stress in the film. None of these things did we have time to cover and how to model these properties. There are courses that are more thin film oriented, like in the materials science department. And if you really want to know about thin films beyond what's in chapter 9, I would recommend you look towards some of those classes.
But I just wanted to make it clear that this is sort of the minimum amount of information you need to know for front end processing as far as thin film dep and epi goes. OK, I think that's about all I have for today's lecture. As I mentioned at the beginning, if you came in late, hopefully you're working on homework number 5. It's due Thursday. And I did bring the clipboard. My assistant has typed up her version of what she thinks you're going to be either talking about or writing up. You can check that I've approved all of them. There's only one person who doesn't have anything listed there. And if you're giving an oral report, what I'm going to do this week is start scheduling for the last week of classes so you know which day you have to give your oral report. OK, good, well, I'll look forward to seeing you on Thursday then.
MIT 6.774 Physics of Microfabrication: Front End Processing, Fall 2004. Lecture 14: Transient Enhanced Diffusion (TED) 1 -- Model, 311 Defects, and TED Introduction. JUDY HOYT: Go ahead and get started with today's class. I've got a couple of announcements. There's one handout for today, which is in the back. Hopefully you all have a copy of that. And I've brought with me the clipboard with the project signup sheet. So I'd like you to put down your name on one of the lines here. And the most important thing today is whether you want to give a written or an oral. Written would be a 20-page written report. The presentation would be a 15 to 20 minute presentation to the class. And all of that is described in the handout I gave to you about a week ago, which hopefully you have. If you don't have a copy of it, it's on the web or it's here on this clipboard. So I'd like to know today or Thursday at the latest whether you want to do a written report or an oral report. Even if you don't know your topic. This will help me in scheduling your oral presentations. I have to figure out when at the end of the term, which class periods I'm going to dedicate to oral presentations. And then if you know your topic, that'd be great. Go ahead and write that in. The last column on this signup sheet says whether it's approved or not. And I'll look through this over the next few weeks. And as time goes on, I'll look at your topics. If I have a question, I'll talk to you about it. And if I don't, I'll just go ahead and approve it. So when it's checked off, then your topic is approved. So if you can fill that information out, the sooner the better, on the written versus oral, and as well as on your topic. If you have any questions about topics, feel free to ask me. So I'm going to start passing that around. One of the things, homework. Homework number 3 is being graded. Hopefully we'll have that back to you soon. Homework number 4 went out last time. Hopefully you got a copy of that.
If you didn't, it's all posted on the web. Everything is posted on the website. This is where we are on the class schedule. What I'm looking at here is this Excel file, which just sort of shows that we're at lecture 14. We're going to talk today about transient enhanced diffusion from Chapter 8. And you've got a couple of more lectures. The next homework due date is homework number 4, which is due a week from today, election day, November 2. So let's go ahead and start with today's lecture. The notes are handout number 24, and this is the third lecture on chapter number eight. Chapter 8, hopefully you've been reading through at this point; chapter 8 is all about ion implantation. But this lecture and the next lecture are about transient enhanced diffusion. So let me just remind you what we talked about last time. We talked about the physics of nuclear stopping and electronic stopping, or electronic energy loss processes. We said that if you have a heavy ion-- say if you're implanting antimony-- and heavy and light are always relative to the substrate. So antimony and arsenic, for example, are very heavy compared to silicon, then nuclear stopping tends to dominate over their entire path as they go into the silicon substrate and as they come to rest. Nuclear stopping is dominant. If you have a lighter ion, then nuclear stopping dominates towards the end of the path, the end of range. So when the energy gets low enough. So as we saw last time, the nuclear stopping power goes down as energy increases, and the electronic stopping power goes up. So only at very low energies do you have a lot of nuclear scattering for a light ion. It's the nuclear stopping, not the electronic-- the collisions-- that contributes to crystal damage, clearly. So the ions are coming in, and in a nuclear event, you have a billiard ball collision, and you're knocking the silicon off its lattice site.
And you're creating, in that process, silicon interstitials and vacancies. And this results in something which we call the collision cascade. We talked a little bit about how people have calculated damage profiles. So they calculate the amount of energy deposited into these nuclear stopping processes as a function of depth. And the peak damage-- so the most amount of damage you do-- the profile tends to peak near the projected range of the primary ion. Maybe just slightly shorter or shallower than the projected range RP. So let's say you're doing an ion implant of arsenic, and you can easily calculate, using one of the theories, the arsenic profile. You can use the range statistics for the arsenic, whatever. You know where the RP of the arsenic is, and you can figure out where most of the damage is going to be done. It'll be at just about 90% of RP or so. That will be the peak of that damage profile. For a heavy ion like arsenic or antimony in silicon, the damage is more stable. So it doesn't tend to anneal out at room temperature. It's relatively easy to form an amorphous layer with a heavy ion at room temperature. And this can then be regrown. We talked about a process called solid phase epitaxy to regrow that amorphous layer. And so this is a relatively efficient way to do dopant activation. I'll say a little bit more about dopant activation in this lecture. And finally, we mentioned last time, we just started this topic, we said that the excess interstitials that are created by this implant, all these nuclear collisions, they can cluster into a particular type of defect called a 311 defect. Later, the 311s dissolve and they give off interstitials. And these interstitials then are what determine the kinetics of transient enhanced diffusion of boron and other dopants. So that's just a review of what we talked about last time. What I want to cover this time, I have a few slides in this lecture on dopant activation just to go through that a little bit more carefully.
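The heavy-ion versus light-ion picture in this review boils down to where the nuclear and electronic stopping curves cross. Here's a toy numerical version (the functional forms echo LSS theory-- electronic stopping growing as sqrt(E), nuclear stopping peaking at low energy and falling off-- but every constant is illustrative; real values come from LSS tables or SRIM):

```python
import math

def s_electronic(e_kev, k=2.0):
    # LSS-like electronic stopping: grows as sqrt(energy).
    return k * math.sqrt(e_kev)

def s_nuclear(e_kev, a=4000.0, b=30.0):
    # Toy nuclear stopping: rises at low energy, peaks near e_kev = b,
    # then falls roughly as 1/E at high energy. Constants illustrative.
    return a * e_kev / (e_kev ** 2 + b ** 2)

def crossover_kev(lo=1.0, hi=1000.0, steps=100000):
    # Scan upward for the energy where electronic stopping takes over.
    step = (hi - lo) / steps
    e = lo
    while e < hi:
        if s_electronic(e) >= s_nuclear(e):
            return e
        e += step
    return None
```

Below the crossover the ion mostly makes billiard-ball nuclear collisions (damage); above it, electronic drag dominates. A light ion spends most of its path above its crossover, so its nuclear damage is concentrated near the end of range.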
And then most of the lecture is going to be talking about transient enhanced diffusion, this TED effect, and how it's modeled. So let's go on to slide number 2. Well I think I showed this last time, this cross-section TEM. We're going to talk about this case on slide number two, when the dose and the mass are high enough to amorphize the silicon. And these are cross-section transmission electron micrograph images of a sample that has been ion implanted and amorphized. And then it is being regrown by putting it in a furnace at 525 degrees C. The initial implant was 200 keV of antimony. And look at the antimony dose. Quite high, 6 times 10 to the 15th per square centimeter. That's a high dose. And so you can see-- this should say zero minutes, I'm not sure what happened to the zero-- but anyway, this means when you're just starting out, what you have is an amorphous layer from the surface down to some depth. And in fact here's the scale bar. This is 100 nanometers from here to here. So that's maybe 300 nanometers, 3,000 angstroms deep, something like that. It's all amorphized. The crystal structure is completely destroyed. You have an amorphous solid. Now after 10 minutes, what has happened? Well the amorphous crystal interface has moved up. So we have regrowth. We have solid phase epitaxy, the layer by layer process. This epitaxy process is not that different from growth from the melt. The only difference here is we're getting a transition in the solid phase, from amorphous to single crystal. And that's happening in a layer by layer fashion here. After 10 minutes, I've grown up a little distance. 15 minutes, you can see the amorphous crystal interface has progressed. And in fact, this is a linear growth rate, you'll find. On the right hand side all the way on the right at 20 minutes, there's only a small amorphous layer remaining.
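The "linear growth rate" remark means the amorphous/crystal interface moves at a constant velocity at fixed temperature, and that velocity is Arrhenius-activated. A sketch reading rough numbers off the micrographs (about 300 nm regrown in about 20 minutes at 525 C, i.e. ~15 nm/min; the ~2.7 eV activation energy is the commonly quoted value for SPE in silicon, used here just to show the temperature scaling):

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def spe_rate_nm_min(rate_ref, t_ref_c, t_c, ea_ev=2.7):
    # Scale a measured SPE interface velocity to another temperature
    # assuming Arrhenius behavior with activation energy ea_ev.
    t_ref_k, t_k = t_ref_c + 273.15, t_c + 273.15
    return rate_ref * math.exp(-(ea_ev / K_B) * (1.0 / t_k - 1.0 / t_ref_k))

rate_525 = 300.0 / 20.0              # ~15 nm/min, read off the micrographs
rate_600 = spe_rate_nm_min(rate_525, 525.0, 600.0)
minutes_to_regrow = 300.0 / rate_525  # back out the 20 minute anneal
```

The steep exponential is why SPE is practical at 500 to 600 C: with these numbers, a 75 degree increase speeds the regrowth by more than an order of magnitude.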
And if we were to go longer, the amorphous layer would entirely regrow and you would have single crystal silicon completely in depth from the surface on down. Then the only thing you're left with, you still have single crystal material, is down just below the original amorphous crystal interface, you are left with what's called this end of range damage. It's a series of these dislocation loops and other sort of extended defects that don't anneal out. End of range damage, in general, unless you were to melt the sample or do something, it's something we end up having to live with most of the time, to a certain extent. It's very hard to get rid of. But the amorphous layer is gone, and we have single crystal silicon. So it's just a reminder when we're talking about amorphization and a process for restoring the crystal to its single crystal nature. So let's go on to slide 3. I wanted to just remind us about dopant activation. And this particularly applies to boron in silicon. It's much more difficult to anneal a boron-implanted silicon sample. That is, when I say difficult to anneal, really what we mean is it's difficult to activate, to electrically activate. So what do we mean by electrically activate? Well, for every ion I implant, I like to have it go into a substitutional place. And if it's a donor, I like to donate an electron so I get a free electron. If it's an acceptor, I'd like to get a free hole. That's the whole purpose of doing the ion implant. So if I implant a dose of 10 to the 15th of arsenic or boron, I'd like to get 10 to the 15th per square centimeter free electrons or free holes. That's what I mean by activating. This activation is more difficult, ironically, when you've only partially damaged the silicon. Particularly intermediate doses are difficult. So let's just take the range of doses. I think we talked a little bit about this last time.
If we have a very low dose, 10 to the 12th atoms per square centimeter, not much damage is done, and it's not too hard to anneal the sample, get the crystal structure restored to more of a perfect state and to activate all the dopants in substitutional sites. A very high dose, you create an amorphous layer like we just saw on the previous slide. And it's relatively easy to anneal out the amorphous layer by solid-phase epi. Solid-phase epi can take place at 500 or 600 degrees. So you don't have to go very hot. Typically you would go a little higher than that, maybe after this SPE regrowth step you might go to 800 or 900 to get a little better dopant activation. But that's not so hard. The hard one is the last one listed here. These are the intermediate doses. And this is a dose that is high enough to do some damage, but it's not high enough to create a complete amorphous layer. So you cannot have solid phase layer by layer regrowth. So it's right in the middle that you have a problem. And this leads to a lot of complex behavior where secondary defects, other types of defects besides just interstitials and vacancies, can form. And particularly for boron, which is a light ion, because of the nature of the type of damage it does, this intermediate dose range is a real problem. And this complicated behavior can occur over a wide range of boron doses that are used in practice in CMOS fabrication. And it's because the nature of the damage created by boron is different. It's a light ion. So it's not doing that much very effective nuclear damage, at least until you get towards its end of range. So you'll hear people say, oh, boron is a lot harder to activate than arsenic. Indeed it is. It's harder to repair the damage from a boron implant in silicon than from an arsenic implant in silicon. What do people do about this? Well, there are a couple of approaches I think we mentioned last time. Boron is light. OK, that's a problem.
In some sense it's doing damage, but not enough damage within the dose range that you want to implant. Sometimes what people do is they don't implant pure boron. They implant a molecule, BF2. So these two extra fluorines add quite a bit of extra mass to the species that you're ion implanting. So it's not just boron 11, it's boron with two fluorine atoms attached. And this is the ion that you ionize in the ion implanter, and that you impart energy to and put into the silicon. So when the BF2 molecule hits the silicon, what happens? Well, it dissociates, most likely. It has so much energy, and the boron goes along and the fluorine goes in. And they both impart energy though, they both can do some kind of nuclear damage. So we can kind of treat this, we can say that the energy is apportioned, a certain amount to the boron-- in ratio to its mass-- and a certain fraction of the energy goes to the fluorine. And having the extra fluorine in there is a way to amorphize the sample. So BF2, sometimes people use it to try to improve the activation. It's also a way of getting, effectively, a lower energy implant. Because if you have a certain energy imparted to BF2, the boron only shares a certain fraction of that when it actually hits the sample. The other thing people do is they do what's called pre-amorphization. Pre-what? That means prior to implanting the boron, they modify the sample with another species. Typically you would not want to use an anti-dopant, because then you're compensating the boron. So typically you use either silicon or germanium, which are not electrically active per se. Right? Silicon is the same as silicon, so it's not going to produce any anti-dopant effects. Germanium is also in column four, it turns out it does not produce a doping effect. That is, it does not add p or n type character to the substrate. And it's heavy, so it's easy to amorphize.
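The energy sharing when the BF2 molecule breaks apart is easy to quantify: each fragment carries the beam energy roughly in proportion to its mass (B = 11, F = 19, so BF2 = 49 amu). That's the sense in which BF2 gives you "effectively a lower energy implant":

```python
# Mass-proportional energy partition for a BF2+ molecular implant.
M_B, M_F = 11.0, 19.0
M_BF2 = M_B + 2.0 * M_F  # 49 amu

def boron_energy_kev(bf2_energy_kev):
    # Energy carried by the boron fragment when the molecule dissociates.
    return bf2_energy_kev * M_B / M_BF2

# A 50 keV BF2 implant behaves roughly like an 11 keV boron implant,
# with the two fluorines carrying the rest and adding amorphizing damage.
e_boron = boron_energy_kev(50.0)  # ~11.2 keV
```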
So you'll see people doing prior to a shallow implant, they may implant silicon at liquid nitrogen temperature or germanium at room temperature, and create an amorphous layer. Into that amorphous layer, implant the boron, and then do a relatively low temperature activation to try to activate the boron and get a little bit of a better annealing behavior. Pre-amorphization also has another advantage if you want to think about it. Remember when we were talking about ion implantation a few lectures ago? We were talking about ion channeling, and the fact that ions can be knocked into these channels, because you have a perfect crystal, and go a long ways before they stop. And this creates these long tails, it makes it very hard to get an abrupt, shallow junction. Well, if you amorphize the top 3,000 angstroms prior to putting the boron in, that eliminates the possibility of ion channeling in those top 3,000 angstroms. There is no crystal. It's not single crystal, it's completely random. So pre-amorphization has two benefits. You can get a little better activation of the boron, and it eliminates ion channeling, so you can do shallower junctions. It costs more money, it takes time, it does create end of range damage, so there are tradeoffs in using pre-amorphization as a way to do boron implants. So that's mainly comments I wanted to make about dopant activation. One more comment before I leave slide number 3, because we probably have a homework problem coming up in homework 5 about activation. When you're using SUPREM, SUPREM can model dopant activation in a very crude way. If you do an ion implant in SUPREM-- let's say you implant arsenic to a certain concentration or dose-- and then you anneal it at just about any temperature, the default assumption is that everything is activated almost immediately within a few microseconds or something. So SUPREM doesn't really have any good kinetics, any time dependence of the activation.
So you do a one minute anneal at 800, it assumes you're getting the same amount of activation as a 30-minute anneal at 800. It doesn't have kinetics of activation built in. But it can plot for you. It will plot the active carrier concentration. And it will clip it off at the solubility limit. So if you implant arsenic to a peak of 10 to the 21, let's say, a huge peak per cubic centimeter, and then you tell it, I want to anneal it at 1,000 degrees for ten seconds, fine. It'll give you the active arsenic. It'll take it up to whatever the solubility limit is at 1,000 degrees. Say it's 3e20, and it'll just clip off the profile and go down from there. So it imposes a very kindergarten-like, sort of elementary activation model. But it can still be useful if you're trying to figure out what your sheet resistance should be. But it doesn't have a lot of these kinetics built in. People need to do that more empirically. Do an implant, do an anneal at various temperatures, and see what your activation ratio is. OK, so let's go on to slide 4. And this is, again, review from last time, but I think it's important to review this before we talk about transient enhanced diffusion. Last time we talked about this model with a funny name called the plus 1 model of implant damage, or plus-one model for residual damage. It was published by Giles back in 1991. And the idea was this. Or what the model says is most of the recoiled silicon interstitials and vacancies recombine very rapidly in the first few milliseconds, or hundredths of a second, during the implant, or else just when you put it into the anneal. So you create a huge number of these. But most of the interstitials find a vacancy, and they find a place to go back to. How about the remaining? It's only the net remaining that we care about.
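That clipping behavior is simple to mimic. Below is a sketch (not SUPREM itself): take a Gaussian as-implanted profile and clip the active concentration at the solid solubility for the anneal temperature. The dose, range numbers, and 3e20 solubility are illustrative.

```python
import math

def implanted_cm3(x_nm, dose_cm2, rp_nm, drp_nm):
    # Gaussian approximation to the as-implanted chemical profile (cm^-3).
    peak = dose_cm2 / (math.sqrt(2.0 * math.pi) * drp_nm * 1e-7)  # nm -> cm
    return peak * math.exp(-((x_nm - rp_nm) ** 2) / (2.0 * drp_nm ** 2))

def active_cm3(x_nm, dose_cm2, rp_nm, drp_nm, solubility_cm3):
    # SUPREM-style activation: chemical profile clipped at solid solubility.
    return min(implanted_cm3(x_nm, dose_cm2, rp_nm, drp_nm), solubility_cm3)

# 1e15 cm^-2 arsenic, Rp = 50 nm, delta-Rp = 10 nm, solubility 3e20 cm^-3:
# the ~4e20 chemical peak gets clipped to 3e20; the tail is untouched.
peak_active = active_cm3(50.0, 1e15, 50.0, 10.0, 3e20)
```

Note that there is no time or temperature dependence anywhere in this model beyond the solubility value itself, which is exactly the limitation being described.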
So it's the distribution of the remaining interstitials, or the recoils, shows a net-- tends to have a net excess of vacancies near the surface and a net excess of interstitials towards the bulk. So in fact, if you plot on this plot-- this is a plot from Giles-- concentration versus depth, this is the total number of interstitials up here, in the 10 to the 21 range, or high 10 to the 20s for this phosphorus implant. And the total vacancies, you can't even distinguish the two curves. They're practically-- they're really on top of each other on the log scale. When you subtract them from each other, what you find-- so you subtract interstitials from the vacancies-- that the net vacancy concentration looks something like this. That's what's shown by this little profile here. So near the surface, we have some extra vacancies of a concentration, in this example, of about 10 to the 17th. In the bulk, a little bit deeper, we have net interstitials. But their total number is orders of magnitude lower, because a lot of them have recombined. And look at their order of magnitude. What is their height? Well, lo and behold, the height of these net interstitials, its concentration is very close to the concentration of the phosphorus implant-- very, very close-- the same order of magnitude. So to first order, what Giles said, is that all of the original implant damage recombines and leaves behind only one excess interstitial for every dopant atom you ion implanted. So if I ion implanted, in this case, 10 to the 13th phosphorus, then he would say that the number of interstitials left behind is 10 to the 13th per square centimeter-- because every phosphorus atom eventually finds a home on a silicon lattice site. And those 10 to the 13th silicon atoms must then be interstitials. So that's the most elementary model, so to speak, for the amount of damage that's produced. And it's easy to do. You just take the dose, and that's the answer more or less.
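The plus-1 bookkeeping, and the plus-n generalization it leads into, is literally one multiplication; the whole subtlety is in choosing n:

```python
def excess_interstitial_dose_cm2(implant_dose_cm2, n_factor=1.0):
    # Plus-n model: after the fast interstitial-vacancy recombination,
    # roughly n_factor excess interstitials survive per implanted ion.
    # n_factor = 1 is the Giles plus-1 approximation.
    return n_factor * implant_dose_cm2

# Plus-1: a 1e13 cm^-2 phosphorus implant leaves ~1e13 cm^-2 interstitials.
plus_one = excess_interstitial_dose_cm2(1e13)
# Plus-n with n = 1.5, the sort of value quoted for BF2 at moderate energy:
plus_n = excess_interstitial_dose_cm2(1e13, n_factor=1.5)
```

The point of carrying this number around is that it sets the initial condition for the transient enhanced diffusion models that follow.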
Actually, if we want to be a little more sophisticated, people since that time have come up with what they call a plus-n model, where n is of order 1 but not exactly equal to 1. And this is Pelaz et al. from the MRS meeting in 1997-- came up with this sort of plus-n model. What they were saying is that the multiplier on the dose for the original ion implant dose, the multiplier may not be exactly 1. It may be 2. It may be 3-- something of that order. So this plus 1 approximation, which says that the number of interstitials created exactly equals the dose, it's reasonably good, but it's not necessarily perfect. And it may be particularly not as good for a heavy ion-- a low energy, or a low dose, particularly for heavy ions. The reason we don't think it's so great-- it's not as perfect-- is because there's a lot of recoils. If I implant arsenic, it's a very heavy ion. It comes in, it can impart a fair amount of energy to a silicon atom that's originally at rest. And then that creates more recoils. So you can have a large population of recoils for each ion relative to the ion population. And so that means they can essentially do a little more damage. So a simple approximation using a plus-n factor, which is a function of the ion species, the energy, and the dose. And here's an example of a plot from that paper. This is a plot of the n factor. So what do we mean by the n factor? That's the multiplier that you need to multiply the dose of the primary ion by. And that will tell you the number, then the dose of excess interstitials. And so let's say you're doing boron. So let's look at this solid line. And we're doing boron somewhere around 5 to 10 keV. Well, the n factor is very close to 1. So if I implant 10 to the 13th per square centimeter, I get 10 to the 13th per square centimeter excess interstitials created. Look at BF2; it's a little heavier. And in that same range, the n factor is 1.5.
So if I implant 1 times 10 to the 13th BF2 per square centimeter, then I would assume that the number of interstitials is 1.5 times 10 to the 13th in this model. And look at arsenic. It's actually almost up to a factor of 3 to 3.5. So somewhere between 1 and 3-- and particularly at low energies, where there's a lot of nuclear stopping, the n factor tends to increase. At high energies, they all approach 1. And particularly for heavy ions, the n factor can be greater. Now the reason we're emphasizing this: you need to know the dose of interstitials. You need to understand that, because it's those excess interstitials that are going to be responsible for transient-enhanced diffusion. So that's why people pay a lot of attention to these damage models. OK, so that's the next topic. We go on to Slide Number 6 on TED. I think I showed this last time as a way of finishing up. This is kind of an anomalous plot. It's a concentration as a function of depth. These are boron profiles. And what's anomalous about it is if you look at the blue profile here, it's 10 seconds at 1,000 degrees-- a very high temperature-- as opposed to two minutes at 800. The 1,000-degree profile-- this is after being ion implanted-- is actually shallower-- less motion at a higher temperature. So this is very much non-Fickian. This is very difficult to explain from a simple single diffusivity, which is exponentially activated. Because we know, if you look in your text for the intrinsic diffusion of boron, that at 1,000 degrees it's like two or three orders of magnitude higher. And yet this cannot be normal, regular intrinsic diffusion, because you're getting a lot less diffusion at 1,000. And so this is because of TED. And it'll turn out-- we'll talk about it in the modeling in this lecture-- why it is that at low temperature, we actually can get higher amounts of transient-enhanced diffusion-- higher amounts of this damage-induced diffusion-- than at high temperatures.
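The plus-1 and plus-n bookkeeping just described can be sketched in a few lines. This is only an illustrative sketch: the n-factor values below are rough readings of the Pelaz et al. plot discussed above (at roughly 5 to 10 keV), not exact published numbers, and the function name is my own.

```python
# Rough n factors at ~5-10 keV, read off the plot discussed in the
# lecture -- illustrative order-of-magnitude values, not fitted data.
APPROX_N_FACTOR = {
    "B": 1.0,    # boron: plus-1 is a good approximation
    "BF2": 1.5,  # heavier molecule, more recoils
    "As": 3.0,   # heavy ion, n factor approaches ~3
}

def excess_interstitial_dose(ion, dose_cm2):
    """Plus-n estimate: excess interstitials per cm^2 left after recombination."""
    return APPROX_N_FACTOR[ion] * dose_cm2

# Example from the lecture: 1e13 BF2/cm^2 leaves ~1.5e13 interstitials/cm^2
print(excess_interstitial_dose("BF2", 1e13))  # 1.5e13
print(excess_interstitial_dose("B", 1e13))    # 1e13 (plus-1)
```

The point is simply that the excess-interstitial dose scales linearly with the implant dose, with a species- and energy-dependent multiplier of order 1.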
People didn't see this for many years. TED first started to be observed in the mid '80s, and then by the early '90s it became pretty prevalent. And why not? Why didn't people see it? Well, for years, devices were very large. They were large in lateral dimension. They were large in vertical dimension. So anneals took place at reasonably high temperatures, and they were very long. And so the TED effect-- the damage effects-- were completely masked by normal diffusion. Because TED, what does that mean? Transient. It only lasts for a limited period of time-- a short period. So if you put a wafer in a furnace at 1,000 degrees, or 800, for four or five hours-- a couple of hours-- you're not going to see it. It's only when people started trying to get very little amount of [INAUDIBLE], and they cut back on the time-- 10 seconds at 1,000, or a few minutes-- so with the development of rapid thermal annealing techniques, the anomalous diffusion became obvious. Because the thermal budgets today are so small, TED is actually the dominant effect that determines many junction depths-- not ordinary diffusion that we taught you about earlier, a couple of lectures ago. And sometimes not even concentration-dependent diffusion-- it doesn't really matter that much. A lot of it is dominated by the transient diffusion effect, at least for certain dopants-- particularly boron. So the type of enhancement we're talking about here may only last for a short period, but we're talking about maybe 20,000 times the ordinary diffusivity at 700, or maybe even 400 times the diffusivity at 1,000. So these are not negligible. These are huge enhancements. And there was a lot of work in the early days spent trying to figure out what this could possibly be due to. So let's go on to Slide 7. The basic model for TED: what we assume is that all the implant damage recombines very quickly except for the one interstitial generated per dopant atom.
So the easy way we think about it is to just use the Giles plus-1 model, or plus-n if you want to be really accurate. But for now, we'll just do plus 1. And here's an example of how TED effects can be seen. And they're actually quite non-local. And that's what's interesting about them too. So here is an experiment that was done. This is a concentration as a function of depth-- so maybe a SIMS profile. And the initial profile before any annealing looked like this. So you have boron-- there's very low boron at the surface. And then you have a boron marker layer. At some depth, there's a boron layer that's doped about 1 E18. This marker layer can be put in in a variety of ways-- it could be put in by epitaxial growth-- whatever. It's a region that's boron doped. We call it a marker layer. It could be the base of a bipolar transistor for all we know. And it looks like a box initially. And then you go and you ion implant arsenic above it. Now notice the arsenic never touches the boron. So I'm not ion implanting arsenic deep into the wafer; it's just in the near-surface region. Now, this arsenic is a reasonably high dose. You can see the concentration is quite high. This arsenic creates a lot of implant damage. But the damage, of course, is confined to the near-surface region. A lot of the damage will anneal. Because if it's amorphous, you get SPE, right? And in fact, this little dashed line is supposed to represent the amorphous-crystalline interface. So everything to the left is amorphized. But we know that there is a certain amount of damage at the end of range. There are excess interstitials. And so these excess interstitials are then going to diffuse in. They're going to form these 311 defects. They're going to diffuse in, and then they're going to dramatically enhance the diffusion coefficient of the boron below.
So don't think of the implant damage effect as being, oh, the implant damage and the enhanced diffusion occur only where the implant is-- just like oxidation. Remember, in OED, you can have a process happening at the surface, and you can have interstitials injected and influencing diffusion going on much deeper. TED can be like that. The difference is that TED is transient. It only lasts for a certain period, until the damage goes away. One question I just had. So in this schematic illustration, see the green profile after diffusion. Look at the amount of diffusion. There was a lot of TED of the boron and not much for the arsenic. Does anybody have any ideas, knowing what we know about the diffusion mechanisms of boron and arsenic and other things, why you would expect a lot more damage-enhanced diffusion for boron than you would for arsenic, as shown in this example? Anybody have any ideas? [INAUDIBLE] JUDY HOYT: What do we know about f sub i? What do we know about the diffusion mechanism of boron? Is it mostly interstitials, mostly vacancies, or both? Does anybody remember f sub i for boron? Well, let's take a vote. How many people think f sub i is 1? All right. Yeah, it's 1. So what that means is-- it means boron's diffusion mechanism is almost entirely by interstitials, as opposed to diffusing with vacancies, OK? So excess interstitials-- boron needs interstitials around in order to move. That's its mechanism of moving. How do we know that? We know that from a whole series of oxidation-enhanced diffusion experiments and nitridation-retarded diffusion experiments that people have done-- injecting interstitials, injecting vacancies, and seeing what happens to boron. So in TED modeling, what we're going to say is that this implant damage is going to inject into the silicon a lot of excess interstitials. Boron diffuses by interstitials.
And its diffusion coefficient is going to be enhanced, or pumped up by these excess interstitials. Arsenic-- what about the f sub i for arsenic, what do we think that is? Is it 1? .5? Anybody vote for a half? Yeah, it's on the order of half, roughly-- maybe 60%-- depends on who you talk to. People who do this for a living will argue their career on it. But you know, I think it's close to 60%. Anyway, it's partly enhanced by interstitials. Rather, it's partly diffusing with interstitials, partly with vacancies. So you don't expect-- it doesn't have quite as much dependence on interstitials as does boron. And also most of the arsenic profile here-- it's a little tricky question-- but most of it, remember, was to the left of the dashed line. So most of it occurred in the region where there was SPE. So that's going to regrow very quickly and not cause too much diffusion. So it'll turn out, as you'll see in most examples-- not all-- arsenic certainly has some TED, but it's much more prevalent for boron. In fact, I want to show you some actual examples. That was a cartoon picture. But it's based on things that people observed in real life. Here's an example I want to show you of some data in the literature on where TED can affect a bipolar-transistor structure. And these are epitaxially grown. I haven't talked about epitaxy yet, but it's a crystal growth technique by CVD, by which you can put in fairly arbitrary and quite abrupt doping profiles. So let's take a look at the inset up here. And in the inset shows schematically the initial epitaxial structure of this bipolar transistor. So there's a region on the bottom in depth, which is n minus silicon. And then there's a region that is p plus. So it's got a high amount of boron-- silicon germanium. And then there's some lightly doped region. And then there's an n minus silicon cap. 
So it looks like, in effect, if you look at the SIMS data-- boron versus depth-- the region down here marked silicon, that's all epitaxial silicon. The region in the center is epitaxial silicon germanium. It's an alloy of silicon and germanium. It's used in a lot of high-speed bipolar transistors these days. And the cap on top is single-crystal silicon. And those are the dimensions. And the silicon germanium region, when it was epitaxially grown, was doped with boron. And in fact here, the as-grown profile is the diamonds. It's a little hard to see, but see these diamonds. It's a box. As grown, it looks like a box, pretty much. And it has a height, or a doping level, of about 3 times 10 to the 19th. So that's the as-grown, very abrupt profile by epi. This little tail that you see here, this is SIMS knock-on. So again, this is a SIMS profile, so it has some broadening due to the measurement technique. So that's what it looked like as grown. And if you look at the crosses, if you do no ion implant-- so you do not implant the emitter-- after you anneal it, you get something that looks just like the as-grown. So with the anneal-- and this particular anneal is 850 for 10 seconds. So it's an RTA. You don't expect much diffusion of the boron. And in fact, you don't get any measured by SIMS. It's still perfectly abrupt. Now the third wafer, which is shown by these triangles, had an arsenic implant in the upper surface region-- just like I showed you in that cartoon. And this is actual data. And now look at the boron profile. For the exact same anneal, when you implanted the emitter, the open triangles have broadened. That profile is completely broadened. The peak concentration has dropped, and now you have these large wings. Well, there's a lot of enhanced boron diffusion. This is TED. And this makes the device essentially inoperable. So the HBT doesn't work when it looks like this. The device is ruined.
So here's a simple structure where you would calculate, oh, this anneal should be no problem. Then you do an arsenic implant on top, and all the interstitials injected cause this dramatic enhancement in the boron diffusion. So it happens in real-life structures, and it's an issue. Yeah, people have found ways around it. In fact, from that same article, on Slide 9, here are some attempts that were made in that article-- some of them successful, some not-- to get rid of TED. Well, one attempt was, people thought, OK, after you implant the arsenic, just do an anneal-- 600 degrees. Low enough temperature so the boron shouldn't diffuse, but maybe high enough to get rid of all the interstitials. OK, that seems maybe like a reasonable idea, but it doesn't help at all. In fact, if you look at these diamonds-- again, these diamonds are for the anneal that took place at 600 degrees followed by the usual 850, 10 seconds. And it's still just as broad as it was before-- plenty of TED. So 600-degree annealing doesn't seem to be a way to get rid of the damage very effectively. Well, the second panel down shows you another idea. How about melt the sample? That's pretty drastic. Hit it with a laser and melt it for a few nanoseconds-- so it completely melts, but it melts for so short a time that nothing moves. OK, that's melt-induced laser annealing and regrowth. And after you do that-- you've melted the sample, you get rid of everything-- then you do the 850, 10-second RTA. And lo and behold, all those wings are gone, so the boron stays put. So one way to get rid of that TED, of course, is to melt the sample, but that's kind of extreme. You need to use a laser, but it does prove that you can get rid of excess interstitials by melting. And then when you do an 850 RTA, there's normal diffusion-- no more of the TED effect. So you can restore diffusion to normal.
And this was another interesting one-- on the bottom panel, this was a sample where it was just annealed at 850 for 10 seconds, but there was a very high concentration of background oxygen in the sample. In fact, the oxygen in the sample was about 10 to the 20. So during the epitaxial growth, the silicon germanium layer was accidentally doped with oxygen. It was not really high-purity material. And interestingly, TED is completely eliminated in the presence of a very high concentration of oxygen. In fact, you can see these plus signs are the boron profile on a sample annealed-- again, 850, 10 seconds-- same RTA. The difference is-- and the same damage implant was done on top-- in the presence of a high concentration of oxygen, there was no TED. So that's kind of strange. And in fact, similar results since that time have been found for carbon. You don't need quite so much carbon. You don't need 10 to the 20th. You can get away with about 10 to the 19th carbon. So people intentionally dope carbon now in silicon, in those regions where they want to get rid of TED. So this was kind of an accidental discovery initially, but pretty soon people started to realize this is a tremendous benefit. So towards the end of today, we'll understand a little better where these interstitials are coming from. But anything you can do to create a sink for interstitials helps-- be it a high oxygen concentration-- though oxygen is not the best element to use. It also kills the minority carrier lifetime. So your bipolar transistor is not the best bipolar in the world. But neither is this one, where the boron base is diffused out completely. Carbon turns out to be a little better to use. You don't need as much-- maybe a tenth the amount. And because it's at the lower concentration, it doesn't disturb the lifetime quite as much as the oxygen. So people have found they can dope with carbon without inducing a lot of bad electrical effects, and then completely kill TED.
And in fact, this is a topic-- we're not going to get to talk about it too much in this course-- maybe a little-- but this is a good topic if anyone's interested in researching it for their final report, because we won't get to talk about it. But it is being used in production today. The highest-speed bipolar transistors are doped with carbon to eliminate the boron TED. Otherwise, the base width would be really wide. The device would be really slow, and it would be a problem. So it's kind of an interesting story about how that developed. So if you're interested, please think about signing up for that topic-- although I only want one person to do each topic. So you might have to fight a little bit. There's plenty of topics to go around. OK, given that little bit of experimental introduction, now we finally get to talk about, all right, how do people model TED? Well, I showed this slide last time. I just want to review what a 311 defect looks like. Here's a high-resolution electron micrograph. So this is electron microscopy. High resolution-- the electron microscope is looking at a very high magnification. In fact, you can see these little rows of dots; each dot corresponds to a dimer-- to two silicon atoms. And the reason they're in such nice, regular planes is because they are in nice, regular planes. It's a single-crystal material. But here you see this diffraction contrast-- this dark region, where the perfect rows are disturbed-- their symmetry is disturbed, because there's a defect. And that's what we look at in cross-section TEM. We're looking for diffraction disturbances. And we see that by this dark region. And this dark region has been analyzed to death by microscopists. And it lies along a certain plane, which is typically a 311 plane. That's where it gets its name. And what it's believed to be is a ribbon-like defect, where this arrow points along the 311 direction, and the length of the long section into the page lies along 110.
And it is a whole series of silicon dimers-- little silicon clusters condensed into this defect. And it may be 100 angstroms long-- let's say, 10 nanometers-- and maybe 30 or so in this direction-- something like that. So it's a little cluster of interstitials that has a particular orientation. And these 311 defects form during the first few fractions of a second of annealing. They can even be formed during the implant itself. So their formation is important. More important is, how do they dissolve? They anneal out in a time range that's on the order of seconds to minutes. It could even be longer times at moderate temperatures. So once they form-- and they form very quickly-- they gradually anneal out. As they anneal out, they give off excess interstitials. And it's these excess interstitials given off by the evaporating 311s that cause the boron TED that people observe. So that's the atomic-level understanding of TED. Let's go on to Slide Number 11. How did people discover this? This seems really weird, or it seems very subtle. And in some ways, you might think of it as maybe partly accidental. People were looking in the microscope for years at these kinds of interesting defects that they saw. And in fact, they looked at the kinetics of the defects. They looked at how their concentration changed over time. And they noticed that the time it took to shrink all the 311s, or get rid of them, was about the same order of magnitude as the time the TED lasted. So people were doing TED experiments. And they said, oh yeah, let's see, at 800 degrees TED lasts for a certain number of minutes. And other microscopists were looking at it and saying, gee, at 800 degrees these 311s hang around for about that same amount of time-- could the two phenomena be related? And in fact, they are. People found that the time scale of TED is the same as the time scale of the shrinkage of these defects.
And when we're talking about shrinkage, people sometimes use the word evaporation. So the little 311 is sitting there evaporating, or giving off interstitials from the clusters. This is some experimental data. I took this from your textbook. It's referenced. It comes out of Bell Labs. And it's a plot of the silicon self-interstitial density, in number of interstitials per square centimeter, contained in 311 defects. So they're basically looking in the microscope and counting the density of 311 defects and then estimating from their size and density how many interstitials are in them, as a function of time at different temperatures. Let's take the red boxes. Here at 815 degrees, initially at the very beginning after the implant, there's a certain density-- maybe mid-10 to the 13th interstitials per square centimeter-- contained in these defects. And then what you find is the defect density is sort of constant, then it goes down exponentially after a certain amount of time. It just drops like a rocket. So after 200 seconds, it's gone down by several orders of magnitude. So a lot of these 311 defects have disappeared. And so they're dissolving, or evaporating, at a certain rate-- at 800. So let's take a look at the red circles over here. Those are at 670-- so a much lower temperature. Look at this. You get a high concentration of them-- say, 10 to the 14th, or mid-10 to the 13th. And then they last a long time-- maybe 10 to the 4th, or 10 to the 5th seconds-- and then it starts to drop dramatically. So we're talking about a very long time at low temperatures. So this is sort of the kinetics of the disappearance of these 311s. And in fact, these are some micrographs that people used, which I'm just showing you on Slide 12, to obtain this data. This is from Dave Eaglesham at Bell Labs, published in 1994 in Applied Physics Letters. And what he's showing here in the upper diagram-- these are both plan-view TEM images.
This is a sample of silicon that's been implanted with an intermediate dose-- 5 times 10 to the 13th boron per square centimeter-- into the silicon substrate. And it's been annealed at 810. And this is after 5 seconds. And you see, you're looking at a plan view. And each one of these little dot areas is a 311 defect. So there's a huge density of defects. And the scale bar is a little hard to see-- in the upper left, from here to here, that's 40 nanometers. That's 400 angstroms. So you have a huge number of these defects. So again, if I go back one slide to Slide 11, he's starting out here after just a few seconds. That's where he got this interstitial density, at 815. Now look at the bottom-- on Slide 12, look at the bottom TEM-- bottom left, in B. This is after 100 seconds. Now the number of 311 defects is a lot lower. I can almost count them-- 1-2-3-4-5-6-7-8-- of that order. And based on their size and density, you can calculate the density after 100 seconds. Oops, sorry, let me go back one more. So here we are at 100 seconds, and it's down by several orders of magnitude. So people sat down and counted all these defects in this paper-- figured out the silicon interstitial density and how it was changing with time at different temperatures. And so in fact, I'm showing here now, also from your textbook, on Slide 13-- this is a diagram that's supposed to give you an idea of the kinetics of what's going on with these defects. This is simulated, though. What I just showed you was actual data. This is more simulation. So the left axis, or the vertical axis, is silicon self-interstitial density. Now there are different ways of expressing it. This is per cubic centimeter. And this is as a function of depth. So there's a particular implant that's been done-- 40 keV boron, 10 to the 14th. And initially this red line represents the initial plus-1 damage.
So right after the implant is done, you would assume that you had this number of interstitials. So the interstitial density peaks here very, very high. After a microsecond, it's about the same. But look at it after a tenth of a second, or a hundredth of a second. We're talking about this flat profile right here. So after a tenth of a second at 750, we've formed all these 311 defects. And the interstitial concentration is now down to about mid-10 to the 12th, or 10 to the 13th, per cubic centimeter, according to SUPREM. And it's fairly uniform. OK, let's think about that. What is CI star? We have equations in your textbook from a prior chapter. The equilibrium silicon interstitial density is about 10 to the 8th per cubic centimeter at 750. So what is that enhancement ratio, CI over CI star? That's 10 to the 13th over 10 to the 8th, roughly. It's still greater than 10,000. So we've gotten rid of most of this initial damage, but now we have this concentration of 311s. And the silicon self-interstitial density CI over CI star is still much larger than it would be in equilibrium-- by a factor of 10,000 or more, in this case. So the TED is going to occur in this period here, between 0.1 seconds and 1,000 seconds, depending on the temperature that you're at-- however long it takes to evaporate the 311s. During that period, the supersaturation ratio is large-- it's going to be some high number-- could be a factor of 10, 100, 100,000-- something like that-- and uniform in depth. So the 311 defects have all been formed, and now they start evaporating and emitting interstitials. Eventually, they're all evaporated away, and TED will be over. But for some period, they'll be evaporating, and they'll be holding CI over CI star very high, and therefore they'll be holding the boron diffusivity very high. So let's go on to Slide 14.
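The supersaturation arithmetic above is worth checking explicitly. This is a back-of-the-envelope sketch using only the order-of-magnitude values quoted in the lecture (CI held near 10 to the 13th by the 311s at 750 degrees, equilibrium CI star about 10 to the 8th); these are illustrative, not measured, numbers.

```python
# Order-of-magnitude values from the lecture, not measured data:
C_I = 1e13       # interstitial density maintained by the 311s at 750 C, cm^-3
C_I_star = 1e8   # equilibrium interstitial density at 750 C, cm^-3

# The supersaturation (enhancement) ratio CI / CI*:
enhancement = C_I / C_I_star
print(enhancement)  # 1e5 -- still well above 10,000
```

So even after most of the initial damage recombines into 311s, the remaining supersaturation is roughly five orders of magnitude, which is what drives the enhanced boron diffusivity.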
And this is a model that was proposed in the mid '90s or so for 311 growth. So we have the growth of these clusters and their evaporation, or shrinkage. And this kinetic model, then, is going to determine the time dependence and the magnitude of transient-enhanced diffusion. So here's what people wrote down in this paper-- and there's a reference in your text. If you want to read the original paper, I would invite you to do that. You can write down an equation that describes cluster growth. So CL sub n in this equation is just a 311 cluster that has n interstitials. n could be 100. It could be 1,000-- whatever. It's a cluster. And if I add one more interstitial to that, I get CL sub n plus 1. So this equation can go either way. I can either add an interstitial and grow the cluster, or I can evaporate away-- take away an interstitial-- and shrink the cluster. So given a simple equation like this, we can actually write down a time dependence. We can say that the time rate of change of the number of interstitials in the cluster-- the partial derivative with respect to time-- is equal to this: a growth term, on the left, minus a shrinkage term. So the growth term has some rate constant, K sub f, for the forward reaction-- where I'm adding interstitials and growing this thing. So it's the constant K sub f times the concentration of interstitials times the concentration of clusters. That is the forward term-- minus the reverse reaction. If I shrink it, I go the other way and I release an interstitial. So the reverse reaction has a rate constant K sub r and some concentration of clusters CL. So we write a simple differential equation-- the time dependence is a growth term minus a shrinkage term. And now this is a little hand-wavy. But in your text, we talk about how this forward reaction rate is believed to be diffusion limited.
So the interstitials have to diffuse into this cluster. They're created throughout the crystal, but they have to diffuse into the 311 cluster. So we usually say that this forward reaction constant, K sub f, is proportional to some nearest-neighbor distance, a, and an interstitial diffusivity, D sub i. So at least we know what kind of parameters go into that. So that's what we write for K sub f. And the exact form doesn't have to be perfect. We just want to get the temperature dependences in most of these cases. The reverse reaction-- going the other way, for the evaporation, or the shrinkage-- is actually dominated by a diffusion mechanism as well. So there's a term that's related to the diffusivity of the interstitials again-- it looks like a hopping frequency-- times a Boltzmann factor, e to the minus EB over kT, where EB is the binding energy. So if I'm going to remove an interstitial from the cluster, it's going to be bound by some energy. There's a lower energy to be in the cluster, and I have to overcome that to remove it. And that's why you have this e to the minus EB over kT. So it's a simple differential equation with two constants, K sub f and K sub r, that have certain temperature dependences. Let's go to Slide 15. So the interesting part of this differential equation-- the interesting time-- is the period where steady state exists between cluster growth and cluster evaporation. So in fact, to make the equation easy, we're just going to solve for the case when it's equal to 0. We're going to solve for the case where the cluster concentration isn't really changing very much. And we're going to say that's equal to 0. So that's relatively easy. Let's just go back. I'm just setting this equation equal to 0. And now I'm going to solve for the concentration of interstitials, because the cluster concentration drops out, right?
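The growth-minus-shrinkage balance just written down can be sketched numerically. This is a minimal sketch with made-up rate constants (all values hypothetical, chosen only so the numbers converge in a few thousand steps): interstitials are captured into clusters at rate Kf times CI times CL and re-emitted at rate Kr times CL, so the free interstitial density relaxes toward the steady-state value CI = Kr over Kf, where capture exactly balances emission.

```python
# All rate constants below are hypothetical, for illustration only.
Kf = 1e-12   # forward (capture) rate constant
Kr = 10.0    # reverse (emission) rate constant
CL = 1e10    # cluster density, held fixed for this sketch, cm^-3
CI = 1e18    # initial free interstitial density (post-implant spike), cm^-3

dt = 1.0                            # time step, s
for _ in range(2000):               # simple explicit Euler integration
    dCI_dt = -(Kf * CI - Kr) * CL   # net capture minus emission
    CI += dCI_dt * dt

print(CI, Kr / Kf)  # CI has relaxed to ~1e13 = Kr/Kf
```

The key observation, as in the lecture, is that at steady state the cluster concentration CL drops out, and the free interstitial level is pinned at Kr over Kf regardless of how many clusters there are.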
So I can solve, then, for the maximum interstitial concentration just in terms of the ratio of K sub r and K sub f. So going back to Slide 15, I can then figure out the maximum concentration of interstitials that are maintained by these 311s. It's just KR over KF, and it looks like this-- it depends exponentially on the binding energy over kT. So that is an estimate of the maximum number of excess interstitials. If I divide that by CI star, that gives me the supersaturation ratio. Again, why do I want that? Remember, in our discussion of diffusion, the diffusion coefficient goes as the diffusivity times F sub i times CI over CI star. So whenever you enhance CI over CI star, you enhance the diffusivity-- say, of boron. So I write down this ratio. And we have to use CI star. We use the formula for it coming from Chapter 3. And CI star brings in with it a formation energy of the silicon interstitial, which we're calling E sub f. That's not the Fermi energy, that's the formation energy. So this is the formula that was used for CI star. It's got a prefactor-- CI 0-- and an exponential-- e to the minus EF over kT. So the formation energy is about 3 electron volts, roughly, for the silicon interstitial. The binding energy-- people have fit data to show that the binding energy of a silicon interstitial to this 311 cluster is about 1.8 electron volts. So people have estimates for this term here and this term here. You can put in estimates for the nearest-neighbor distance, and CI 0 can be estimated. So you can plot this equation-- CI max over CI star-- as a function of temperature. And in fact, that's what Slide Number 16 shows. This is a plot of that simple equation-- the interstitial supersaturation ratio as a function of temperature. So what does this tell you? Well, it gives you an idea of the maximum enhancement in the diffusivity, right?
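The temperature dependence just derived can be sketched numerically: since CI max goes as exp(minus EB over kT) and CI star goes as exp(minus Ef over kT), the supersaturation ratio scales as exp((Ef minus EB) over kT). This sketch drops all prefactors, so only relative values between temperatures are meaningful; the energies (Ef about 3.0 eV, EB about 1.8 eV) are the rough values quoted in the lecture.

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def relative_supersaturation(temp_c):
    """CI_max/CI* up to an unknown prefactor: exp((Ef - EB)/kT)."""
    Ef, EB = 3.0, 1.8  # eV, approximate values from the lecture
    kT = K_B * (temp_c + 273.15)
    return math.exp((Ef - EB) / kT)

# Lower temperature -> larger supersaturation, which is why TED can move
# boron more at 750 C than at 1000 C even though D itself is far smaller.
print(relative_supersaturation(750) / relative_supersaturation(1000))  # ~14
```

Because Ef is larger than EB, the exponent is positive, so the ratio grows as temperature falls: exactly the anomalous behavior seen in the boron profiles earlier.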
Because if I'm going to say that boron's diffusivity depends on F sub i times this ratio-- and let's say I'm at 800, and I'm in a period where I have TED-- at 800, CI over CI star can be as large as almost 10 to the 4th-- maybe mid-10 to the 3rd. So it gives you an idea of the magnitude of the enhancement in the boron diffusivity. It's a relatively simple derivation-- you can get an idea right off the bat of how much CI over CI star can be enhanced during the steady-state period. Now, eventually, it's going to end. Eventually, all the 311s will evaporate. There'll be no more excess interstitials, and TED is over. So the question on Slide 17 is, how long is that? So now I know how much CI over CI star is boosted up-- it's simply a function of temperature. You can pull it right off that plot. Now the question you ask me-- well, I know the temperature. How long does this event last? How long does it take to evaporate all these 311s? Well, eventually over time, the 311s all evaporate and CI over CI star goes back-- actually, it should go back to 1, basically. So here's an example, on Slide 17. This is from a SUPREM simulation. This was for boron TED. And we're annealing at a certain temperature. This is the implant that was done-- 10 keV boron, 10 to the 14th atoms per square centimeter. And here's the as-implanted concentration profile, shown in blue. So that's the boron, just to give you an idea. And then after 1 minute at 750, this is how the boron has diffused-- the black line. And look, at 1 minute at 750-- the dashed line refers to the right-hand axis. Sorry, you should write that in. So this dashed line is on the right-hand axis. That's the interstitial supersaturation ratio. At 750 it's about 10 to the 4th. And it's pretty uniform throughout the sample. In fact, I could probably have pulled that off from the last plot. If you go back to your last slide-- 16-- at 750, look it up-- CI over CI star is about 10 to the 4th.
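The diffusivity enhancement invoked above can be sketched with the fractional-interstitialcy form, D eff over D star equals f sub i times CI over CI star plus (1 minus f sub i) times CV over CV star. Note the vacancy term is my addition for completeness (set to 1 here, i.e., vacancies assumed at equilibrium, which is an assumption); the lecture's f sub i values are about 1 for boron and about 0.5 to 0.6 for arsenic.

```python
def diffusivity_enhancement(fi, super_i, super_v=1.0):
    """Ratio D_eff/D*: fi weights the interstitial supersaturation,
    (1 - fi) weights the vacancy supersaturation (assumed ~1 here)."""
    return fi * super_i + (1.0 - fi) * super_v

# With a supersaturation CI/CI* ~ 1e4 during TED:
print(diffusivity_enhancement(1.0, 1e4))  # boron (fi ~ 1): ~1e4 enhancement
print(diffusivity_enhancement(0.6, 1e4))  # arsenic (fi ~ 0.6): ~6e3
```

This is why boron, with f sub i near 1, feels essentially the full interstitial supersaturation, while arsenic feels only part of it.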
So you can use this as a simple back-of-the-envelope way of calculating the amount of enhancement-- CI over CI star. And then after 10 minutes, look how far the boron has gone. The boron profile has gone quite a ways. The junction depth now here is about 0.4 microns. So that's the red profile-- solid line. And the dashed line here, you should also reference that to the right-hand axis. The interstitial supersaturation ratio is now down. Instead of 10 to the 4th, it's 10 to the 2. It's about 100 after 10 minutes, because we're getting rid of more 311s. And eventually the number of them is going down. So this thing only lasts so long. And so the question is, how do I figure out-- with a back-of-the-envelope calculation-- how long this enhancement is going to last as a function of temperature? So let's go to Slide Number 18. And here's a way-- a simple quick and dirty way of estimating it. What we say is that all these excess interstitials, how do we get rid of them? They are emitted by a 311. They don't just sit there. They're going to diffuse, OK? Where are they going to diffuse? Well, they're going to diffuse into the bulk and recombine in the bulk. And they're going to diffuse to the surface. And the surface, you know, is a perfect place to have recombination, because there's lots of vacancies at the surface. So an extra silicon atom can always be accommodated at the surface. So for the diffusion, you have this-- let's say, this initial profile of excess interstitials. This is a very crude profile, as a function of depth. So this red line is meant to represent the surface. Moving from left to right, we're going in deeper in the sample. I have some dose Q of interstitials that I've implanted. How did they get there? Well, they got there because they were recoiled. So here's my dose Q. And it peaks at some projected range RP. OK.
And the surface being the dominant sink, especially if it's a low-energy implant, I'm going to have a certain flux of interstitials towards the surface. Well, how can I write that flux? Well, we can write it as the diffusivity, D sub i, times the peak concentration, CI max, over RP, where RP is the projected range of the implant-- the distance to the peak. How do we get that? Well, remember Fick's law. Flux is a diffusion coefficient times a concentration gradient. Well, here's a very crude way of estimating the concentration gradient. It's just the peak concentration divided by the distance over which it falls to zero, assuming the interstitials all recombine at the surface, where the concentration goes to 0. So here's a very quick estimate of the flux towards the surface-- D CI max over RP. Now, I need to know, how much time does it take to dissolve all those? It's going to be the dose Q, which is whatever I implanted, divided by that flux. Here's my flux. So at the bottom of Slide 18, I've just taken the dose, and I multiply it by 1 over the flux. So the time to dissolve these clusters is going to be Q times RP divided by the diffusivity of the interstitials times the maximum concentration-- CI max. That gives me an idea-- how long will TED last? Well, it lasts until all the clusters have been dissolved, or evaporated. And you can see, if I implant a higher dose Q, I expect TED to last longer. If CI max, the peak concentration, is larger, then you expect the time to be a little shorter. So this gives us a rough idea. In fact, we can simplify this and put it in terms of a temperature dependence, if we go to Slide 19. So here is the time that TED lasts-- tau-- as I just showed it from the last slide. Now, I'm going to put in what CI max is. We just calculated the maximum excess interstitial concentration induced by the implant. Remember, we said it was KR over KF. So we have that in terms of a Boltzmann factor on the binding energy.
And so I can just substitute that in. And then in the middle of the slide, we can write down the time to dissolve the clusters, shown here. Well, it depends on Q, the dose implanted, RP, the diffusion rate, and an exponential of the binding energy of the silicon atom to the cluster of silicon interstitials. We can put in what the diffusivity looks like for silicon interstitials. It's a constant prefactor times E to the minus E migration over kT, where people have estimated what that migration energy is. And here we have a relatively simple expression for the time constant-- the length of time that TED lasts as a function of temperature. It depends exponentially on the temperature. And inside the exponential over kT, I have the binding energy plus EM, the migration energy. So I have a nice little formula for how long TED lasts. It depends on the dose too, and on RP. So let's go to Slide 20. Here's an actual example, using that simple formula. We're asked to calculate and plot how long TED lasts and how it depends on temperature for an implant of 40 keV phosphorus at 10 to the 14th. Well, from the figure-- you can go back to Figure 8.3. It's just RP as a function of energy. You can figure out the projected range. It has an RP of 60 nanometers. And you have everything you need in this formula. You have the dose. You have the RP estimate. We know EB-- the binding energy-- is 1.7. The migration energy is 1.8. We add all that up. And here is the equation. So this gives us the time dependence for this particular energy and dose. And there it is on Slide Number 21. So this is the time-- the duration in seconds-- how long TED is going to last for this particular dose and energy of phosphorus as a function of temperature. And lo and behold, look what it shows. At 1,000 degrees, or maybe 950, TED only lasts about a second. OK, so it's very short. Because you form the 311s, and they have a pretty high evaporation rate.
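The tau formula above can be sketched in a few lines. The absolute prefactor is not given in the transcript, so this hedged sketch normalizes tau to 1 second at 1,000 degrees C for the reference case (Q of 10 to the 14th per square centimeter, RP of 60 nanometers) and keeps only the scalings the lecture derives: linear in Q and RP, exponential in (EB plus EM) over kT.

```python
# Back-of-the-envelope TED duration: tau ~ Q * Rp / (D_I * CI_max),
# which reduces to tau proportional to Q * Rp * exp((Eb + Em)/kT).
# Normalization to 1 s at 1000 C is an ASSUMPTION standing in for the
# unknown prefactor; only the scalings should be trusted.
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def ted_duration(temp_c, dose=1e14, rp_nm=60.0, e_bind=1.7, e_mig=1.8):
    """Approximate TED duration in seconds, normalized so that the
    reference case (1e14 cm^-2, 60 nm) gives 1 s at 1000 C."""
    kt = K_B * (temp_c + 273.15)
    kt_ref = K_B * (1000.0 + 273.15)
    e_sum = e_bind + e_mig
    scale = (dose / 1e14) * (rp_nm / 60.0)  # linear in dose and range
    return scale * math.exp(e_sum / kt - e_sum / kt_ref)

for t in (800, 900, 1000):
    print(t, f"{ted_duration(t):.1f} s")
```

At 800 degrees this gives a few hundred seconds, the same order of magnitude as the roughly 100 seconds read off Slide 21, and doubling the dose doubles the duration, which is the scaling used in the discussion that follows.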
If you're down here at 800, TED lasts, maybe for this implant, about 100 seconds. OK, so it lasts a lot longer-- minutes. And the temperature dependence of this time is given by this equation. Now, if I have a higher dose-- let's say, I do 10 times the dose-- what happens to this line? The red line, does it stay the same? Does it move up by a factor of 10-- 10 times longer, or 10 times shorter? Based on-- let's just go back one slide. What is the dose dependence of that time? It depends directly on the dose, right? So if I implant 10 times the dose, TED is going to last 10 times as long. See, the Q dependence. So you can use this in a lot of your homework problems, or anytime you have a problem. Assuming the RP is about the same, you can use this nice curve, because it just scales linearly with the dose. So if I implant 10 to the 15th, then at 1,000 degrees, instead of lasting a fraction of a second, it could last a full second. At 800, instead of lasting 100 seconds, it could be 1,000 seconds. So that's the interesting thing about the dose of the implant-- it tells you how long TED lasts. Because the dose injects a certain amount of these interstitials. They come together to form 311s. Depending on how many you originally form, it'll tell you how long it takes to get rid of them. And how long it takes to get rid of them is a strong function of temperature, as well. So Slide 22-- I took this equation from your text. It looks a little mysterious. It shouldn't be. What this is saying is the overall amount of profile motion-- remember, we said we could multiply-- we had something called the DT product. It gives you the idea of the thermal budget. The effective DT product of a dopant, remember, depends on its equilibrium diffusivity, D, times CI over CI star. Now, instead of putting down time-- the total time of the anneal-- I'm going to put down tau enhance. Tau enhance is the time that TED lasts.
So you can plug in for those things-- CI over CI star and tau enhance-- and you get something that looks like this. OK, so it depends on the diffusivity of the dopant, the self-diffusion coefficient of silicon interstitials, the concentration of silicon in the dose, and RP. So the purpose of showing this is it explains the backwards, or anomalous, temperature dependence of TED. Even though the dopant diffusivity has an activation energy of about 3.5-- so that's what you would expect for this thing-- this is being overwhelmed by the activation energy of the self-diffusion. So here we have DA-- the activation energy of the dopant diffusion-- might be E to the minus 3.5 over kT. But in the denominator is the diffusion of silicon itself, which has a much higher activation energy-- E to the minus 4.8 over kT. So when we ratio these two, we end up with a positive sign in the exponential. So this is how it's possible to get more broadening at a lower temperature. That's just from looking at the equation. The thinking behind this-- the rationale-- is you have a fixed amount of damage introduced by the implant, but the background point defect concentration-- CI star-- and the interstitial self-diffusion coefficient, they go down rapidly with temperature. OK. So again, the amount of profile motion depends on the ratio-- CI over CI star. So CI star is going down very rapidly with temperature. So the supersaturation is then going up as temperature is lowered. So it's partly because CI star goes down that this ratio is raised, and so you get more TED at low temperatures. And also the duration of the excess interstitials is longer at low T, due to the lower diffusion coefficient. We just saw that. Look at this plot-- Slide 21. The lower I go in temperature, the longer it's going to take to evaporate all those interstitials and get rid of them. So TED lasts longer, and it can have a stronger effect at low temperatures.
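The sign argument above can be checked in a couple of lines. Using the activation energies quoted (about 3.5 eV for the dopant diffusivity, 4.8 eV for silicon self-diffusion), the net exponent exp((4.8 minus 3.5) over kT) is positive, so the relative TED broadening grows as temperature drops. The prefactor is again omitted, so only ratios between temperatures are meaningful.

```python
# Anomalous temperature dependence of TED broadening:
# <Dt>_eff goes as exp(-Ea_dopant/kT) / exp(-Ea_selfdiff/kT)
#            = exp(+(Ea_selfdiff - Ea_dopant)/kT),
# a POSITIVE exponent, so broadening increases as T decreases.
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def relative_ted_broadening(temp_c, ea_dopant=3.5, ea_selfdiff=4.8):
    """Relative <Dt> enhancement (arbitrary units, prefactor = 1)."""
    kt = K_B * (temp_c + 273.15)
    return math.exp((ea_selfdiff - ea_dopant) / kt)

ratio = relative_ted_broadening(700) / relative_ted_broadening(900)
print(f"700 C vs 900 C broadening ratio: {ratio:.1f}")
```

The ratio comes out greater than 1, i.e. more net motion at 700 than at 900, which is exactly the backwards behavior the slide is explaining.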
In fact, Slide 23 is just an example of that, taken from a SUPREM simulation. This is a boron profile annealed at different temperatures. So it's concentration versus depth. The solid lines here represent the boron, and they refer to the left axis. The dashed lines represent CI over CI star. They refer to the right axis. So let's see what happens to boron at different temperatures for the same amount of time-- 900 degrees, 800 degrees, 700 degrees. Look at CI over CI star at 700-- much higher than at 800 and also much higher than at 900. And it lasts a lot longer at 700. So if we do a 30-minute anneal at 700, the entire period is transient-enhanced. Whereas if I do an anneal at 900, how long does it last? Well, on the order of magnitude of a second-- so only the first second is enhanced, and then TED goes away. So that's how you can explain a low temperature anneal ending up with so much more motion. It's got a higher CI over CI star, and it lasts longer. Slide 24-- just going through a little more of the subtle effects. And you can read through this in your text. We have this sort of equation that we just wrote down for the DT product, which is a measure of the amount of broadening for transient-enhanced diffusion. And the question is, how does it depend on energy and dose? Well, on the left-hand plot, this is some actual data that was taken. People measured the square root of DT-- how much does the dopant move as a function of the energy of the implant that was inducing the damage? And as you can see at high energies, you actually get a saturating effect. You see, this equation predicts that the amount of motion should depend on RP, linearly. So if you go to a higher energy, you expect a larger amount of broadening. And that's true in the beginning. So here at low energy-- 10 keV, 60 keV-- we do see this increase. And it starts to saturate out. And that's because at high enough energy, the surface is no longer the dominant sink.
In fact, you get bulk recombination as the profile gets really deep. And remember back here when we were doing this simple derivation-- the back-of-the-envelope derivation on page 18-- remember, we ignored the diffusion flux that went into the bulk and recombined. For a simple back-of-the-envelope calculation, we said all the interstitials diffuse towards the surface and recombine there. When you get them deep enough in the bulk, that's not necessarily the right model. Again, just to show, this is a little back of the envelope. So in the real case, the motion-- the amount of TED-- will actually saturate at some energy. So take this RP dependence with a grain of salt. Same thing for the dose dependence. This is again the square root of DT-- the diffusion distance. Now, this time it's versus anneal time. It turns out, if you look at different doses-- indeed, if you do a long enough time here, which I'm showing here-- say, close to 1,000 seconds, or close to 10,000 seconds-- you can see a dose dependence. And in fact, this open bullet is for 10 to the 13th. The closed bullet is 4 times that. And indeed, you do see about 4 times the dopant motion. So it does scale with the dose, but only at long enough times. At short times the dose dependence is not so evident. And that's because, again, we did a very simple calculation of that differential equation. We solved only for the steady state period, which takes a certain time to develop. So this is more of a second-order effect. So this equation, again, you take it with a grain of salt. It's meant to give you a back-of-the-envelope estimate. So on Slide 25-- when you walk away today, this is the general picture you should take in your head of TED. What it is, is a plot of the enhancement in CI over CI star on a log scale as a function of time. And so what we say is we have some steady state period, and its length is called tau enhancement. It's the time during which the 311 clusters are decaying.
It lasts a certain period-- tau. And that period-- the duration-- depends on the dose, on how many interstitials I put in. And then after that period, it just rapidly exponentially decays. So the critical parameters are: what is the supersaturation height, or level, which depends on the temperature-- as we know, it goes up as temperature goes down. And what is the duration of that steady state condition, tau enhanced? So with those two pieces of information-- how high is it enhanced, and how long does it last-- you can do a lot of rough predicting of the amount of profile motion you're going to get due to the TED effect. OK, so let me just summarize what we've said about ion-implanted TED. Ion implantation is the dominant method for putting dopants in. We talked about range statistics that we can calculate using simple distributions. We talked about Pearson IV in amorphous targets-- how it works very well. Ion channeling has to be modeled by something more complex and complete, like Monte Carlo. Next time, I'll show you some ion channeling profiles and how they're fit by dual Pearson. We spent some time today talking about the plus-n model for residual implant damage. It tells you that you get roughly n excess interstitials per primary ion. These excess interstitials all get together. They cluster into these 311s very quickly-- in less than a tenth of a second. And they dissolve very slowly. And that dissolving process gives rise to TED. And we have a simple model for TED that can explain the time, the temperature, the dose, and, roughly, the energy dependence of TED. And so next time in class, we'll do a few calculation examples. If you can bring a calculator, we'll do some calculations together. If you have a calculator, it'll be easier. We'll actually do some back-of-the-envelope calculations on TED kinetics. And that clipboard is going around. Does everybody have that?
Please sign up for your project, and at least sign up for whether you want to do a written report or an oral report. OK, thanks.
MIT 6.774 Physics of Microfabrication: Front End Processing (Fall 2004)
Lecture 3: Crystal Growth, Wafer Fabrication, and Basic Properties of Si Wafers (continued)

JUDY HOYT: So we're moving along pretty quickly with the first part of the course here. I'm hoping you'll be reading Chapter 3 of the textbook at this point, which covers crystal growth, wafer fabrication, and some of the basic properties of silicon. I'm going to finish up Chapter 3-- the lectures on Chapter 3-- today. And then I'm going to start Chapter 4, so you can start that as well. Chapter 4 is relatively easy reading, not tremendously mathematical, although it's a very important topic. It's about how we clean wafers and how we get impurities into wafers. So, the last lecture, let me just remind you what we talked about. We talked about cubic crystal structures. We discussed wafer manufacturing and basic characteristics of Czochralski wafers. And we'll continue on with that a little bit today. We presented a very simple mathematical model-- a differential equation based on heat flux-- for modeling the Czochralski growth process. And, at the very end, I had three slides on the different methods of fabricating SOI, silicon-on-insulator. And, in fact, if you're interested in SOI and getting into it in more detail, that's a topic you might consider-- someone could do it for a final project. There's a lot of literature out there on how SOI is really made. I only gave you three slides. I'm sure the class would like to hear more about it if you want to do that as your research topic. Today I want to talk more specifically about the mathematics and the statistics of point defects in silicon, a little bit more detail about carbon and oxygen-- they are so critical to Czochralski silicon and its properties-- and then introduce Chapter 4. So let's go on to the second slide, slide 2.
We're going to just-- since we're going to talk about modeling point defects, let's just define them once again. We did this last time, but this is a very schematic ball-and-stick two-dimensional diagram of what a lattice might look like-- highly, highly simplified. And the vacancy is shown here in this region where a silicon atom is completely missing from the lattice. You just pulled it out. And there are broken bonds associated with that. An interstitial is a little bit more complicated. There's actually-- there's two different types of interstitials here. There's one which is shown right here. And this is a pure interstitial. It's actually an unbonded extra silicon atom, like you just stuffed an atom in there in an interstitial space. It's not bonded to anything. That's the pure interstitial. Some people, the defect experts, also refer to a type of defect which is an interstitial type called an interstitialcy, by analogy to a vacancy. And what it is, is an extra atom, but it's actually two atoms that are sharing a lattice site. So these two are together. And there is some bonding to neighbors, as opposed to a pure interstitial. A lot of people believe that the interstitialcy is actually the most likely configuration of the interstitial defect in silicon, because it takes a lower energy to create it. Just to share a lattice site isn't that hard compared to stuffing an unbonded atom in there. But the truth is, the distinction is really not important in process modeling. For the purpose of this class, for modeling processes, there's two types of defects. We think of the i-type, the interstitial type. Exactly how it's configured, we're not sure. We'll just use the symbol I, capital I, to refer to an excess silicon atom. And these point defects, these V and I, they turn out to play a really fundamental role in a lot of processes. They control diffusion. When we get to Chapters 7 and 8, where we're talking about diffusion, they're extremely important.
In oxidation, too, the point defects play a very important role, as they do in activating dopants. So there's a lot of importance. And that's why we're going to spend a little time today on understanding the statistics of vacancies and interstitials. So let's go on to slide 3. If you're interested, I've listed here five papers, all of them fairly old but classic. Shockley-- the Shockley, who also did the transistor studies-- had a paper in the late '50s on the first understanding of statistics for vacancies and their charge distribution. Shockley and Moll, also famous names. Watkins did some of the very first studies where people actually detected vacancies in the lattice using a technique called EPR, electron paramagnetic resonance. So, if you're interested in that, you can go back to some of the original papers. People actually have some evidence that these things do exist. Let's go on to slide 4. And we're going to talk about modeling these native point defects. Basically, the presence of native point defects, that is, vacancies and interstitials, minimizes the free energy of the crystal. It increases the configurational entropy. If you want a derivation of that, you can look at Mayer and Lau-- that's one of the textbooks I referred to earlier in the class. Their 1988 text has a little derivation on that. But, basically, it seems physically intuitive that we can write the following expression for the concentration of neutral vacancies and neutral interstitials. And the notation here is a little bit cumbersome: c meaning the concentration. The superscript star-- the asterisk-- means that we're in equilibrium. We're going to talk a lot in this class about when we're not in equilibrium, when we have injected excess vacancies by various processes-- ion implantation, or whatever. But, for now, we're going to stick to equilibrium. This subscript tells you the type of defect, vacancy or interstitial. And then, this little zero means neutral-- uncharged.
Well, as we'll see today, vacancies and interstitials can exist in charged states. But, for now, if we're just talking about the neutral, uncharged vacancies or interstitials, we can write them in this type of expression. They have a temperature dependence. They depend on a constant, ns, which is the number of lattice sites. We know that. That's 5 times 10 to the 22nd. And they depend exponentially on the formation entropy of the defect, which is this s sub f, and on the formation enthalpy of the defect over kT. So we expect to see this thermally activated behavior. So the concentration of these things is going to go up exponentially as I increase the temperature. In general, down here, the equation at the bottom is true. The concentration of neutral interstitials in equilibrium is not necessarily equal to that of vacancies. One may be easier or harder to form. So don't think that they have to be equal to each other, because there's a lot of different ways to form these two different types of defects. Let's go on to slide 5. In fact, let's talk about a couple of different ways we can take a perfect crystal and form these defects. The simplest way-- the way you would probably imagine, if I asked you, well, how do we create an interstitial and a vacancy? Well, you just pull a silicon atom-- rip it out of its lattice site and stick it somewhere as an extra atom. Then I have a vacancy where the atom was-- where the space was left behind-- and I have an interstitial. In fact, this pair right here, this vacancy-interstitial pair, is called a Frenkel pair. Incidentally, this can happen, but people believe it costs a fair amount of energy. That is, it's somewhat hard to do, because you have to break all four bonds, because you're in the bulk of the crystal. Remember, in silicon, every silicon atom is four-fold coordinated. It has four nearest neighbors with covalent bonding.
Obviously, the Frenkel process is going to create one interstitial for every vacancy. So if Frenkel were the only mechanism by which we could create point defects, then c sub i would be equal to c sub v in equilibrium. But it turns out that's not the case. It turns out surface processes are a dominant way of creating these point defects. For example, we can have generation and recombination of vacancies or interstitials at the surface. And they may create only one type. For example-- and this is, again, a schematic. This is meant to be now a cross-section of a lattice. So imagine up here, this is the top of my lattice. Above it is free space; below it is the wafer. And so, this is the top row of surface atoms. So I can take an atom off the surface. And it's not fully bonded, because, by nature, the surface is a point of discontinuity. There's only a couple of bonds for an atom at the surface. It doesn't take much energy to pull that out and put it in the bulk and create a silicon interstitial. So there's a lot of different ways. The surface generation and recombination is very common. The crystal has a number of different ways to achieve different concentrations of vacancies and interstitials, even in equilibrium. And when we change the temperature of the crystal, the concentration of the vacancies and the concentration of the interstitials will change by the creation of either Frenkel pairs or atoms moving in and out of the surface and interstitials diffusing in. So we'll go on to page 6, or slide 6. It turns out that no one's ever really measured exactly the equilibrium concentration of V and I by any direct process, particularly at silicon process temperatures. The truth is, the concentration is so small compared to physically reasonable macroscopic numbers, like the number of silicon atoms. The number of silicon atoms is 5 times 10 to the 22.
We're going to see that vacancy concentrations are in the 10 to the 12 range-- one part in 10 to the 10, much, much lower. And so, it's just very hard to measure them directly. But people have estimated them. And the way they've estimated them is a little bit of a circular argument, but it's going to come out in this course: by fitting impurity diffusion data. In other words, people hypothesize that the diffusivity of a dopant requires the existence of certain vacancies. And then, when they see diffusivities increasing and decreasing according to temperature or charge state, they say, oh, the concentration of vacancies must be going up by this much. So people measure diffusivities and infer concentrations of point defects. It's a little bit circular but, in any case, that's how a lot of people have estimated these things. So don't expect to find anywhere in the literature some perfect equation. There's a lot of controversy. But there's some agreement these days that these are rough numbers. So here's a rough equation for the concentration of neutral interstitials in equilibrium. It goes something like this. Again, it's exponentially activated. As I increase the temperature, I get more interstitials, exponentially. But if you put in room temperature here and calculate it out, it's close to zero. I mean, there just aren't very many of these things at room temperature. The concentration is quite small. At 1,000 degrees-- that's where we might be doing an oxidation; we put a wafer in a furnace and oxidize it at 1,000 degrees or something, or just anneal it-- it's something in the range of 10 to the 12 to 10 to the 14, still very small compared to 5 times 10 to the 22. Nevertheless, it can have a huge impact on the dopant diffusion. Typical doping levels are in the 10 to the 14 or 10 to the 15 up to 10 to the 20 range. And, again, the point defect concentrations are smaller than these.
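That "rough equation" can be illustrated with assumed numbers. The effective formation enthalpy used below (about 2.45 eV, with the entropy prefactor folded into the 5 times 10 to the 22 site density) is an assumption chosen only so that the result lands in the 10 to the 12 to 10 to the 14 range quoted for 1,000 degrees; as the lecture stresses, the real values are controversial.

```python
# Thermally activated equilibrium concentration of neutral interstitials:
#   C_I* = n_s * exp(S_f/k) * exp(-H_f/kT)
# Here the entropy factor is folded into n_s, and H_f ~ 2.45 eV is an
# ASSUMED effective value for illustration only.
import math

K_B = 8.617e-5   # Boltzmann constant, eV/K
N_SITES = 5e22   # silicon lattice sites per cm^3

def ci_equilibrium(temp_c, h_form=2.45):
    """Rough equilibrium neutral-interstitial concentration, cm^-3."""
    kt = K_B * (temp_c + 273.15)
    return N_SITES * math.exp(-h_form / kt)

print(f"{ci_equilibrium(1000):.2e}")  # on the order of 1e13 cm^-3
print(f"{ci_equilibrium(25):.2e}")    # essentially zero at room temperature
```

The two printed values make the lecture's point concrete: a tiny but non-negligible concentration at furnace temperatures, and effectively none at room temperature.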
This will have some important implications. What it means is that the doping, the dopant concentration, the concentration of arsenic or boron, is what sets the electron concentration and the hole concentration in the crystal, not the vacancies or the interstitials. Let's go on to slide number 7 and a little bit now about-- so that all was about neutral. So, as we were saying, as we increase the temperature, the concentration of neutral vacancies and interstitials goes up exponentially. But it turns out, these defects can have charge states as well. In fact, they can be singly charged, positive or negative, or doubly charged, double plus or double negative vacancies. They've been identified experimentally using things like this electron paramagnetic resonance. And particularly when people bombard the lattice with high-energy electrons and then they see what kind of vacancies they can see by these techniques, and they do see them in various charge states. Interstitials are believed to have similar charge states, but they really haven't been measured-- the charge states of the interstitials are more difficult to measure, probably because the interstitials diffuse throughout the lattice much faster than the vacancies. We're going to see the interstitial diffusion coefficients themselves for excess silicon atoms. They can go whizzing through the lattice. So they're even harder to see. So let's look at another, again, a very two-dimensional schematic way of looking at a vacancy. That was what I'm showing here at the bottom of this page. The silicon atoms are these round dots in black. And these are supposed to represent bonds, these ellipsoidal things, it's supposed to be a covalent bond. Each bond should have two electrons associated with it. Remember, silicon is in the fourth column of the periodic chart, so it has four valence electrons. So this atom here can be completely satisfied with four covalent bonds. All of its electrons valence electrons can be bonded. 
If I rip out a silicon atom here, the vacant site is going to distort the nearby lattice. It's going to have some change in the bonding configuration. And this causes a local change in the energy band structure. For those of you who've had solid state physics, you know about energy bands. It's going to cause extra states, actually, in the energy gap. And we're going to call these deep levels. And we'll show you how we represent them-- they're going to be introduced by the presence of these defects. These deep levels exist in the bandgap, and they're split off. They're not in the conduction band or in the valence band. So having a vacancy or an interstitial creates a localized deep level in the bandgap. So let's take a look at page 8 from your handouts, or slide 8. And this slide shows very schematically, or approximately, the location where people believe energy levels exist in the silicon bandgap. Now, hopefully, you've read through Chapter 1. If you haven't, right after this lecture you can run out and read Chapter 1. In that chapter, it talks a little bit about the silicon bandgap. This line up here is meant to represent the conduction band energy. Above this there are lots of free electrons. This line down here at the bottom represents the valence band energy, the top of the valence band. In between there's a forbidden gap. So free carriers don't exist with energies in that energy range. And we know this bandgap is something on the order of 1 electron volt-- 1.1 for silicon. These lines here-- the line marked v-double-minus, the line marked v-minus, v-double-plus, and v-plus-- represent the positions in the forbidden gap of these defects. These are localized states that are formed by the fact that we're taking silicon atoms out of the lattice-- we no longer have a perfect lattice-- or we're putting an extra interstitial silicon atom in the lattice.
And these are in various charge states. This level here in the mid-gap-- it's right in the middle of the bandgap-- is marked Ei. That's the intrinsic energy level. So that's, by definition, pretty much close to halfway, right at the mid-gap point between Ec and Ev. And this line right here, marked E sub f, is the Fermi level-- the Fermi energy. And, again, hopefully you've read Chapter 1. The Fermi energy is a concept that comes in there. It basically is the energy level at which the probability of finding an electron is 1/2. That's the exact definition. The main thing you need to know about the Fermi level is, if the Fermi level is at mid-gap, the semiconductor is intrinsic. There are very few electrons and holes. The only electrons and holes we have are those created by thermally breaking a bond and ionizing an electron from the valence band and bringing it up to the conduction band. So the intrinsic concentration is going to be relatively low. As I add n-type dopants, the Fermi level moves up. In fact, this picture is for where I've added a certain n-type dopant, and it moves up towards the conduction band. The higher it goes towards Ec, the more free electrons we get in the crystal-- the more free electrons I have in the conduction band. If the Fermi level moves down towards Ev, I have more holes. So you can think of it as the position of this level in the band gap: going up creates more electrons; as more holes are created, it goes down. It's just a way in energy space of keeping track of those densities. So let's just convince ourselves that it's possible for these vacancies to exist in charge states. And so, here's a very crude picture of a neutral vacancy. So what's happened here is we've taken the silicon black atom out in the center. And we've taken out all four of its valence electrons. They've been pulled out. So what happens is-- well, again, each one of these little ellipsoids is supposed to have two electrons in it.
And this is supposed to represent one dot, one electron from this atom. And this represents one electron from the atom down here. So there's a covalent bond here, some kind of a bond. There's a bond here. So this is all satisfied. So this area of the crystal is net neutral. There's a vacancy there, but there's no extra electron or absence of an electron. So we can imagine this, in your mind, as a neutral vacancy configuration. Again, it's very crude, but it gives you some pictorial idea. In the center, at the bottom, this picture represents a singly negatively charged vacancy. So what we do is we have a silicon vacancy. We ripped out a silicon atom, but we left behind an extra electron, or an extra electron found its way there, however you want to say it. And here it is. Here's an extra electron sitting here. And so, if you count up the electrons associated with this region, we have one extra. So this has a net single negative charge. So this can represent a v-minus pictorially. And, similarly, we can take out a silicon atom and pull an extra electron out. So not only the four valence electrons, but let's take even a fifth one out and remove it. And now this region has a net positive charge associated with it, because we have the charge of these silicon nuclei, and we've unbalanced them. So now we have a v-plus. So it's certainly possible, you can imagine, depending on how many electrons are sitting around this vacancy region, to have these different charge states. And people have detected the unpaired spins of these electrons-- the fact that there's an extra unpaired spin here and a missing unpaired spin there has been detected by this technique called EPR, electron paramagnetic resonance. So we believe that these different charge states can exist. Let's go on to page 9 now. We want to talk a little bit more about the statistics.
And, again, I'm showing the energy band diagram. This time, on page 9, we're showing the energy band diagram where the Fermi level is right at mid-gap. So the material is said to be intrinsic. And so, the only electron and hole pairs that are created are those created equally when we break bonds, and the Fermi level is at mid-gap. Let's say we're at 1,000 degrees Centigrade with the semiconductor. If you look in Chapter 1, there's a simple expression for the dependence of ni, the intrinsic carrier concentration, on temperature, and it's about 7 times 10 to the 18. So there are that many intrinsic free electrons and that many intrinsic free holes in the material just by virtue of the temperature. The temperature allows a lot of bond-breaking, a lot of electrons to be released from the valence band and energized up into the conduction band. So, at this temperature, for any doping level less than 7 times 10 to the 18, the Fermi level is at mid-gap. Now, we have to recall something about how donor and acceptor levels are ionized and whether they're occupied by electrons or not. And, again, if you've had solid state physics, you'll recall this. Otherwise, it's discussed in Chapter 1. But a shallow donor or acceptor is usually ionized. And what do I mean by a shallow donor? I didn't include a picture of that. I should have. Let me just sketch it here on the board. If this is the energy bandgap, the forbidden gap, so to speak, and those are the valence and conduction bands, a shallow donor is like arsenic. Remember, we had arsenic or phosphorus. It's a localized state also, but it's shallow because its energy level is very close to the conduction band, say within a few tens of millivolts, 0.03 electron volts. And so, what happens is that those donors can very easily donate electrons. They're easily ionized.
And they donate a free electron to the conduction band. That's how they dope the semiconductor. Each donor, if I put in 10 to the 18 of them, donates its electron to make a free electron, and it's very easy for that process to happen. So they're usually what we call ionized. That is, the electrons have been donated and they leave behind a net charge. If the level is deeper in the bandgap-- say, like v-minus, down here, this is where my v-minus level is, its energy-- it's much further away from the conduction band. Deep donors are only ionized when the Fermi level is below them. So the Fermi level would have to be below here. If the Fermi level is below here, as I'm showing here, then this is ionized, and it's put an electron up here. So we have to remember these rules for deep donors. We don't usually think about them for shallow donors, because those are all ionized-- they're so close to the conduction band. A deep acceptor, on the other hand, is ionized only when the Fermi level is above it. So these are rules from solid state physics and statistics. If you know them, it helps. Otherwise, you have to take them as rules for the purpose of this course. But if we're in intrinsic material and we look at all these levels, these are deep acceptors and deep donors. In this particular case, the Fermi level, which is right here at ei, is above the vacancy donor levels. So it's above these guys, so these will not be ionized. And same thing here: the Fermi level is below these vacancy acceptor levels-- it's down here-- so these levels won't be ionized either. So what this means is that, in intrinsic material at that temperature, the neutral vacancy, which does not depend on the Fermi level position, is going to be the dominant vacancy charge state. V0 is what dominates when you're in intrinsic material. As I now take this Fermi level and move it up, we'll see that these guys will dominate.
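The ionization rules just stated can be written as a tiny decision helper. This is only a sketch: the function names and the sample energy values are mine, with energies measured in eV above the valence band edge.

```python
# Deep-level ionization rules from solid state physics, as stated above:
# a deep donor is ionized only when the Fermi level sits below its level,
# and a deep acceptor is ionized only when the Fermi level sits above it.
# Function names and sample energies (eV above Ev) are illustrative.

def deep_donor_ionized(e_level, e_fermi):
    return e_fermi < e_level

def deep_acceptor_ionized(e_level, e_fermi):
    return e_fermi > e_level

# Intrinsic silicon at room temperature: EF at mid-gap of a ~1.1 eV gap.
EF = 0.55
print(deep_donor_ionized(0.35, EF))     # donor level below EF: not ionized
print(deep_acceptor_ionized(0.75, EF))  # acceptor level above EF: not ionized
```

With the Fermi level at mid-gap, neither deep level ionizes, which is exactly why the neutral vacancy dominates in intrinsic material.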
You'll get more and more of these charged vacancies. And if I move it down, we'll get more and more of these. But if we're intrinsic, then it's the neutral vacancy concentration that dominates. And we'll show some equations for how to actually calculate that. So let's move on to slide 10. Now I'm doing a different case. Notice, I've moved the Fermi level away from mid-gap. It's no longer at ei, it's up here. So I have doped this with arsenic or something. I've made it n-type. So I've moved the Fermi level closer to ec. The closer I move it to ec, the more n-type I make it, the more electrons we have. So I'm going to assume the material is extrinsic. What does that mean? That means the n-type doping concentration is greater than ni. That's the definition. So if I were to dope this with arsenic to, say, mid 10 to the 19, or 10 to the 20, I would be well above ni-- orders of magnitude above. And so it's extrinsic. The electron concentration is controlled by the donor concentration, not by the temperature. So, in this case, I've doped it, and the Fermi level, ef, is now up here above the v-minus level. So it means this level is occupied with an electron. It's acting like an acceptor. And, in fact, v-minus is going to be, for this position of the Fermi level, the dominant vacancy charge state. So, as before, when the Fermi level was at mid-gap, the dominant charge state was neutral. As I move it up, all of a sudden I've got lots of v-minus. If I move it up even further, above this v-double-minus level, I'll have lots of those. So as I move the Fermi level up and down, just imagine, with doping, I can control the concentrations of these types of charged vacancies, either v-minus or v-plus. If I move it down, in p-type material, all of a sudden I get lots of v-plus and v-double-plus.
So just by changing the doping, I can create more vacancies and more interstitials. It's not just temperature, because I'm changing the charge populations. So let's take a look at page 11, or slide 11, where we actually do this quantitatively. Now, I've been talking very qualitatively. I say, we move the Fermi level up, we get more cv-minus and cv-double-minus. Well, there are actually simple mathematical equations, which Shockley's paper first wrote down, for these charged point defects. And, in fact, they obey exactly the same statistics as are described in Chapter 1 of your textbook for shallow donors and shallow acceptors. In your textbook, in Chapter 1, there's a simple equation that gives the electron concentration: n is equal to some constant, which we happen to call n sub c, times an exponential. And we said, in Chapter 1, if you go back and read that, it depends exponentially on the distance between the conduction band energy level and the Fermi level: n = nc exp[-(ec - ef)/kT]. There's a simple derivation of this relationship in Chapter 1. So the electron concentration, as you move the Fermi level around a little bit in the bandgap, changes exponentially according to the distance between ec and e Fermi. You may see this rewritten-- if you take solid state physics, people sometimes rewrite it-- and, instead of referencing it to the conduction band energy, sometimes people reference it to mid-gap: n = ni exp[(ef - ei)/kT]. Either way, it's essentially the same relationship. It just says the electron concentration goes up exponentially depending on the distance between the Fermi level and either the conduction band or the mid-gap point. So this is the concentration of electrons. The concentration of these charged vacancies obeys a very, very similar relationship.
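The two equivalent forms just mentioned differ only in their reference energy, which a quick numerical check makes concrete. The constants below (Nc, Ec, Ei, EF) are illustrative placeholders, not measured silicon values.

```python
import math

K_B = 8.617e-5   # Boltzmann constant, eV/K
T = 1273.0       # ~1000 degrees C
kT = K_B * T

# Illustrative placeholder values, energies in eV:
Ec, Ei, Nc = 0.78, 0.39, 1.0e20
EF = 0.60

n_band_edge = Nc * math.exp(-(Ec - EF) / kT)   # n = Nc exp(-(Ec - EF)/kT)
ni = Nc * math.exp(-(Ec - Ei) / kT)            # same statistics define ni
n_mid_gap = ni * math.exp((EF - Ei) / kT)      # n = ni exp((EF - Ei)/kT)

# ni absorbs the shift from Ec-referencing to Ei-referencing, so the two
# forms of the electron concentration agree.
print(abs(n_band_edge - n_mid_gap) / n_band_edge < 1e-9)  # True
```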
So, in equilibrium, the concentration cv-plus of these singly positively charged vacancies depends on the concentration of neutrals. All of these have the neutral concentration as the pre-exponential-- the concentration of neutral vacancies-- times an exponential, in this case, of the distance between the ev-plus level and the Fermi level over kT: cv+ = cv0 exp[(ev+ - ef)/kT]. Similarly, cv-minus, the singly negatively charged vacancy concentration, depends on the neutral concentration times the exponential of e Fermi minus the ev-minus level: cv- = cv0 exp[(ef - ev-)/kT]. So we're just looking at this distance right here-- the Fermi level relative to the defect level-- and that tells you what the concentration is going to be. So these are just simple exponential-type equations. You can write down very similar and analogous equations by replacing all the v's by i's for interstitial point defects, assuming that they have energy levels in the bandgap. So let's just go on to slide 12. Here's a picture where I've actually filled in some numbers. People think they know these numbers. They're not known all that accurately. But it's believed that the doubly negatively charged v-double-minus deep level is about 0.11 eV below the conduction band. And, similarly, it's believed that this v-minus level is close to mid-gap. It's about 0.57 electron volts or so below the conduction band. People have done studies where they feel that they can measure these numbers. And so, as I move the Fermi level around-- again, as I move this up, I'm going to change the distance between the Fermi level and these deep energy level positions, and therefore change the concentrations of these charged defects. Now, people who've had solid state physics will be saying, oh, dear. Wait a minute. We have a circular problem here. If the vacancies are charged, then how do we calculate the Fermi level position? It can become a very complicated problem. It turns out, it's very simple.
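As a sketch, those two Shockley-style relations can be coded directly. The function names are mine; energies are in eV, and the sample numbers (cv0 ~ 5e13 per cubic centimeter and a 0.78 eV gap at ~1000 degrees C) come from the surrounding slides.

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def cv_acceptor(cv0, e_fermi, e_level, T):
    """Acceptor-type charge state: CV- = CV0 * exp((EF - E_level)/kT)."""
    return cv0 * math.exp((e_fermi - e_level) / (K_B * T))

def cv_donor(cv0, e_fermi, e_level, T):
    """Donor-type charge state: CV+ = CV0 * exp((E_level - EF)/kT)."""
    return cv0 * math.exp((e_level - e_fermi) / (K_B * T))

# Moving the Fermi level up boosts the acceptor-type (negative) vacancies
# and suppresses the donor-type (positive) ones, as described in the text.
T, cv0 = 1273.0, 5.0e13
level = 0.21  # eV above Ev, i.e. 0.57 eV below Ec in a 0.78 eV gap
low = cv_acceptor(cv0, 0.30, level, T)
high = cv_acceptor(cv0, 0.60, level, T)
print(high > low)  # True
```

The interstitial versions are the same functions with interstitial level positions substituted.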
In fact, remember, we said the concentration of these charged vacancies is always in the 10 to the 12, 10 to the 14 range, much smaller than the concentration of electrons due to the dopant atoms. Our doping is usually always over 10 to the 14. We typically dope 10 to the 15, 10 to the 16, orders of magnitude above that. So, in fact, what we do is we set the Fermi level position just by the dopant concentration, just by the amount of arsenic or the amount of boron. That fixes the Fermi level. And then we calculate where the vacancies are just based on those equations on the previous page. So there's a lot of simplifying. And, in fact, if we go to slide 13-- I just gave away the answer, but that's exactly what we're saying. The concentration of these neutral vacancies and interstitials is so much smaller than either ni at typical processing temperatures or typical doping concentrations that the amount of electrons and holes actually bound to these defects is always negligible compared to the total number of electrons or holes present in the silicon, whether those are there due to thermal activation, just due to ni, or due to the extrinsic doping. So here's how we simplify the charged defect calculations. First, you're given the donor or the acceptor concentrations in the crystal. That is a given, typically. And we calculate the Fermi level in the usual way from solid state physics. We just ignore the fact that the vacancies and interstitials exist. So we use these simple equations. In fact, here's the equation I wrote up on the board earlier: n equals ni times the exponential of the distance between the Fermi level ef and the intrinsic level ei, over kT-- n = ni exp[(ef - ei)/kT]-- and a similar expression for the hole density. So we use these, together with charge neutrality, to figure out the position of the Fermi level knowing the doping.
And then, once we have the Fermi level position, we can just calculate the concentrations of charged defects using that value of ef and the equations from Shockley for the exponential dependence of the vacancy concentrations. So it's actually not that hard. So let's go on to slide 14. Another thing to realize-- this is kind of obvious, but sometimes we forget it-- is that as the doping changes, as I dope the crystal more heavily, let's say, and, therefore, the Fermi level changes, not only do I change the charge states, but the total number of vacancies actually changes. It's not just that I'm taking a certain fixed number and changing how they're distributed among charge states. Thinking about it this way: this is the expression we wrote here, this equation, for the neutral vacancy population. It's only a function of temperature, not doping, not the Fermi level. So that's very simple. It's just an exponential dependence on temperature. But the total vacancy population is the sum of the neutral vacancies plus all the charged defects: cv-minus, cv-double-minus, cv-plus, and so on. If I increase the concentrations of all these charged defects, then the total concentration has to go up. So, in fact-- you may never have thought about this-- as I add arsenic to silicon, I make it more n-type, and I'm increasing the number of free electrons in the crystal. That's why we add dopants. But I'm also increasing the total vacancy concentration in the crystal, because I'm making it much more favorable to form vacancies-- all those extra electrons are available, so I can form extra cv-minus and cv-double-minus. So as I move the Fermi level up and down, I'm actually creating extra point defects.
That's important, because, if I move the Fermi level up, I create all these extra vacancies, and, all of a sudden, dopants that rely on those vacancies to diffuse can diffuse a lot faster. And, in fact, we'll see that in the chapter on diffusion. We'll see these so-called Fermi level effects. People move the Fermi level up and down. When it's up really high, all of a sudden, the [INAUDIBLE] of arsenic goes way up. And the explanation is related to the fact that you've just stuffed the crystal with a lot more vacancies. Yeah. STUDENT: We've lost the original [INAUDIBLE].. JUDY HOYT: Mm-hmm. STUDENT: And the on surface [INAUDIBLE] about. Is this those dependencies still? JUDY HOYT: Yeah. STUDENT: [INAUDIBLE] JUDY HOYT: We are talking about going back to this concentration right here? STUDENT: Or just in, I guess, the original slide-- page 4. JUDY HOYT: OK, let's go back all the way to slide 4. This concentration here, and how it depends-- yeah, you would have to change the formation enthalpy and entropy. But what we're talking about here is the total concentration of vacancies created by whatever processes. We're not exactly saying, in this explanation, how they were actually created. We're just going to assume that these relationships exist. If you wanted to know it in detail, you could probably break it out into different origins of vacancies and interstitials, but we don't take into account, in any of these equations, the actual mechanism by which they're formed. So when we wrote down this equation right here, we lumped all the different processes into a single activation energy. Again, this is an equilibrium description. Whether it's a surface process, or a bulk process by a Frenkel pair, this equation doesn't tell you how it was created. It's probably the sum of a lot of different processes. So it's an approximation.
Any other questions about vacancy and interstitial populations? It's kind of weird. If you've had solid state physics, it doesn't seem that strange. If not, you should definitely go back and read Chapter 1 to remind yourself of these statistics. Let's jump ahead to slide 15. Slide 15 shows a table from Chapter 3. I pulled this right out of your textbook. And what it is, is a table of these energy level positions as best as they're known today. The first thing you'll notice is that, for the vacancy levels, the numbers are all pinned down. For the interstitial levels, there's a bunch of double question marks and single question marks. So the truth is, we really don't know, in the case of interstitials, exactly where these levels are. No one's ever been able to pin the interstitials down to do electron paramagnetic resonance. They move around too much. So for the vacancy levels, we have some ideas. Even these, I would take to a certain extent with a grain of salt. But they give rough ideas. And the other thing you need to know-- so you know these distances now, so you can plug all those into the equations-- is, where are these energy levels referenced to? In fact, we're going to reference them to the band edge. So as I increase the temperature-- remember, the bandgap shrinks-- I'm going to say that this distance between the band edge and the vacancy level remains constant, even though the bandgap is shrinking. Again, that's an assumption. We're going to assume the defect levels track their respective band edges. So these v-minus levels track the conduction band, and the v-pluses-- if I go back to another slide where I have a picture-- jumping back for a moment to slide 12. These guys here, the v-minus levels, are going to track the conduction band.
So this distance, 0.11 electron volts and 0.57, is going to stay fixed even as the whole bandgap gets squeezed-- remember, when you increase temperature, the bandgap goes down, but these distances are going to stay constant. And, similarly, these are going to stay constant with respect to the valence band edge. It's an assumption. How justified it is, is not really clear. But that's an assumption that people make when they actually plug into the Shockley equations. So let's just do an example, since we've thrown a bunch of the Shockley equations out there. That's shown on slide 16 of the handouts. And I pulled this right out of the textbook, so you can go back and reread it from the text if you'd like. Here's a picture of a piece of silicon. This region up here in the upper right has been doped very heavily with arsenic. So it has a donor concentration of 5 times 10 to the 19 per cubic centimeter. And the rest of the crystal is doped very lightly p-type with boron, at 10 to the 15. And we take this pn junction, and we heat it up to 1,000 degrees in a furnace. And the question is, let's sit down and calculate, using the Shockley equations, the concentrations of all the different charge states of vacancies and all the different charge states of interstitials. And let's just see, in those two different regions, what the concentrations of these point defects are, here and here. Well, first of all, we go to 1,000 degrees, and you look up, in Chapter 1-- you notice the bandgap has shrunk. At room temperature, the energy bandgap, the distance from ec to ev, is 1.1 electron volts. As we go up in temperature, the bandgap actually shrinks. There's an expression for that in Chapter 1. In fact, it's shrunk down to 0.78 or so electron volts. As a result, of course, it's easier to ionize an electron, to create electron-hole pairs. So the intrinsic carrier concentration is now pretty high.
It's 7 times 10 to the 18 at this temperature. Does anyone remember, if you've taken 6.012, at room temperature, what is ni roughly? 10 to the 10, right. So it's gone up by 8 orders of magnitude, almost 9 orders of magnitude, just because I've increased temperature. So those of you who work in electronics: at room temperature ni is 10 to the 10, but get that out of your head here. ni is a function of temperature. And it's a pretty big number at these processing temperatures. So, in fact, in the p-type region, the semiconductor at this temperature is intrinsic, because ni is many orders of magnitude larger than the boron doping concentration. So in the p-type region, the Fermi level is at mid-gap. I've shown it here: ef equals ei. It's at this mid-gap position because the material is intrinsic. In the n region, the donor concentration is still pretty high-- 5 times 10 to the 19 is still much greater than ni. So it's extrinsic. And we use the usual equation, which we just showed on the prior pages, to get the Fermi level position. In fact, this is the equation: n equals ni times the exponential of (e Fermi minus ei) over kT. I know the temperature, and I know n sub i. And I know n is just going to be the donor concentration, 5 times 10 to the 19. So I have everything I need to find ef minus ei. And, in fact, if you plug it into a calculator, this ef minus ei distance here, from here to here, comes out roughly 0.21 electron volts. So I can place the Fermi level: in the p-type region it's at mid-gap, and, here on this diagram, this dashed line shows the Fermi level in the n-type region. And then we've drawn in the band edges here, here, and here, ec and ev. And each line here represents one of these energy levels, whose positions we know numerically. This is 0.11 electron volts for the v-double-minus. This is 0.57. So we can draw this diagram quantitatively, in the p-type region as well as in the n-type region.
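That calculator step can be sketched in a few lines. The numbers are the ones from the slide; treating the donors as fully ionized, so that n is just Nd, is the usual simplification.

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

T = 1273.0      # 1,000 degrees C in kelvin
Nd = 5.0e19     # arsenic doping, cm^-3
ni = 7.0e18     # intrinsic concentration at this temperature, cm^-3

# Invert n = ni * exp((EF - Ei)/kT), taking n = Nd (fully ionized donors):
ef_minus_ei = K_B * T * math.log(Nd / ni)
print(ef_minus_ei)  # ~0.21-0.22 eV, matching the ~0.2 eV quoted above
```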
And here we're showing mostly the vacancies, just for illustration purposes, and, here, mostly the interstitials. But you could write down the interstitial levels in the p-type region as well, just to show the picture. So if you sit down and plug all these numbers into the Shockley equations, this is what you come up with, on slide 17. And, again, this table is taken directly from your textbook. All it is, is tabulating, in the p-type and the n-type regions, the concentrations of all these interesting quantities. For example, let's just take the p-type region. The doping, we were told, is 1 times 10 to the 15. That was a given. ni is 7 times 10 to the 18. That's the same on both sides, because that's just due to the temperature. The neutral vacancy concentration, which is calculated from that simple exponential we gave earlier, depends only on temperature, and it's about 5 times 10 to the 13. And it's the same in the p and the n regions. Again, the neutral vacancy concentration doesn't depend on the Fermi level. Here's where the differences start coming up. In the p-type region, look at the concentration of v-minus. It's about 2 times 10 to the 14, several times larger than the neutral. So here the neutral and the v-minus are dominating, here in red. Everything below on the chart-- v-double-minus, v-plus-- these are all very small numbers. And that's because of the Fermi level position, basically. And, again, the concentration depends on the position of the Fermi level with respect to these vacancy levels. So here these guys dominate, the v-minus. In the n-type region, I've highlighted in red what dominates. Now, both v-minus and v-double-minus dominate in the heavily doped n-type region. And not only do they dominate-- look, they're so much bigger.
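As a sanity check on the table's p-side (intrinsic) entry, we can plug the lecture's numbers into the Shockley relation. The level position and the 0.78 eV gap come from the earlier slides; treating the Fermi level as exactly mid-gap is my simplification, so the result should only land near, not exactly on, the tabulated value.

```python
import math

K_B = 8.617e-5        # Boltzmann constant, eV/K
T = 1273.0
kT = K_B * T

Eg = 0.78             # bandgap at ~1000 C, eV
EF = Eg / 2           # intrinsic material: Fermi level at mid-gap
E_vminus = Eg - 0.57  # V- level 0.57 eV below Ec, so 0.21 eV above Ev
cv0 = 5.0e13          # neutral vacancy concentration, cm^-3

cv_minus = cv0 * math.exp((EF - E_vminus) / kT)
print(cv_minus)  # a few times 1e14, consistent with the table's ~2e14
```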
If you were to add up all these numbers, the total vacancy population is now well above 1 times 10 to the 15, much larger than it is in the intrinsic material, where it was in the 10 to the 14 range. So, again, that just shows that, in the n-type region of the crystal, there are a lot more vacancies total. So, in that region, if you have a dopant whose diffusivity depends on the vacancy population, its diffusivity is going to go up. Clearly, the different charge states dominate in the different regions. And I'm not going to introduce it quite yet, but when we get to Chapter 7 on diffusion and Chapter 8 on ion implantation, and even in the oxidation models, these concentrations of point defects will be very important. For now, you just get an intuitive feel. As I go n-type: OK, the v-minus and the v-double-minus go way up. As I go p-type-- well, this region is intrinsic, it's not actually p-type. If I made it heavily p-type, these numbers would go up, the v-plus and the v-double-plus concentrations. So, when we switch to slide 18, I'm going to leave the vacancies and interstitials. You'll get a chance to look at some of those in your homework. But that's an example from the text. I'm going to leave that for now. And, as we go along, for each process we talk about, we're going to come back to the importance of vacancies and interstitials. There are two other things about the crystal that are very important that I wanted to discuss, which are covered in Chapter 3, and those are oxygen and carbon. We said on Tuesday that the Czochralski growth process inherently introduces oxygen from the quartz crucible, and a certain amount of carbon. And we can't get around that. And these are typical numbers: oxygen about 10 to the 18, carbon concentration about 10 to the 16. And they both have some important effects.
For example, for oxygen, three effects are listed here. This is an equation you learn in 6.012, or in an electrical engineering device class. If you haven't taken one, just take the equation as given. It's the equation for vth, the threshold voltage, as a function of various physical parameters. The vth is very important to the circuit designer, and to the device person as well. It tells you when, as you apply that gate voltage, the inversion layer forms and you start getting conduction. So circuits behave very badly if your threshold voltage varies from device to device all over the place. Most of these terms depend on things you can control, like the doping and the wafer, and things like that. But this one term, this last term, q times qm over cox, where q sub m is the concentration of mobile ions-- be it, say, sodium, potassium, calcium, whatever-- that exist in the oxide or right at the interface between the oxide and the silicon. And this we write as the number of charges per square centimeter. So it could be the sodium concentration in sodium atoms per square centimeter. Well, you just plug some numbers in here. Put in an oxide thickness of 10 nanometers, which is reasonable for these days. Then I calculate a vth change, or a shift, of 0.1 volts by having qm be just about 6 times 10 to the 11-- not very much. That's only 10 ppm, 10 parts per million. But, already, that amount, 6 times 10 to the 11 sodium atoms, can shift the threshold voltage from one device to the next, if it varies by that much, by a tenth of a volt. And that's a big deal to either a device person or a circuit person. So that's why you'll see the concentrations of these things in the ITRS have to be 10 to the 10 or below-- because they want it to be small. They don't want any chance of a 100-millivolt threshold voltage shift. That's just way too much. So that's one reason, for people making logic devices.
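That plug-in step can be sketched as below. The oxide permittivity is the standard SiO2 value; note that this simple q times qm over cox form assumes all the mobile charge sits at the oxide-silicon interface, so it gives the right order of magnitude rather than reproducing the quoted 0.1 V exactly (charge nearer the gate shifts vth less).

```python
Q_E = 1.602e-19      # electron charge, C
EPS_OX = 3.45e-13    # SiO2 permittivity, F/cm (3.9 * 8.854e-14)

tox = 10e-7          # 10 nm oxide thickness, in cm
cox = EPS_OX / tox   # oxide capacitance per unit area, F/cm^2

qm = 6.0e11          # mobile ions per cm^2, the lecture's figure
delta_vth = Q_E * qm / cox
print(delta_vth)     # a few tenths of a volt from only ~10 ppm contamination
```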
People making memory devices are even more paranoid, even more scared of contamination. And here's an example, number two, on slide 25. This is, schematically, a dynamic random access memory. So this is a DRAM, like all of us have in our computers. What it consists of is a transistor that accesses charge stored on a storage capacitor. If charge is stored on there, then it's a certain logic state, a one; if there's no charge, it's a zero. Well, it turns out we have to refresh this periodically. We have a certain amount of charge stored on that node, but it leaks out over time. And, typically, the DRAM has to refresh that bit of information every few milliseconds. But if it's leaking out really fast, it has to refresh more often, and you suck down your battery power, and the chip doesn't work very well. So, it turns out, a refresh time of several milliseconds requires a certain generation lifetime, tg, which is a measure of the purity of the silicon. And tg goes as 1 over sigma-- a capture cross-section, which is given by this number-- times the thermal velocity, which is also a constant, times nt. n sub t here is the concentration, per cubic centimeter, of traps. So if I know sigma-- typically the cross-section for a typical deep level is this number, 10 to the minus 15-- and I know the thermal velocity, I can solve for nt. This requires that I have less than 10 to the 12 per cubic centimeter, or 0.02 parts per billion, of these traps in the semiconductor. So it's extremely important. If you have more than that, your charge is going to leak out faster, and your DRAM is not going to work. So it's very important to keep these impurity concentrations low for that reason. So what are these traps associated with? Well, it turns out that trap density scales with the density of certain bad actors. And we'll talk next time about who these bad actors are: impurities in silicon such as heavy metals-- gold, and transition metals like copper, iron, nickel.
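Here is the same back-of-the-envelope, assuming tg = 1/(sigma * vth * nt), with sigma = 1e-15 cm^2 from the slide and a typical thermal velocity of about 1e7 cm/s (my assumption; the slide just calls it a constant). A generation lifetime on the order of 0.1 ms then reproduces the 10 to the 12 per cubic centimeter and 0.02 ppb figures quoted above.

```python
SIGMA = 1.0e-15       # deep-level capture cross-section, cm^2
VTH = 1.0e7           # assumed thermal velocity, cm/s
SI_DENSITY = 5.0e22   # silicon atoms per cm^3

def max_trap_density(tg):
    """Largest trap density nt (cm^-3) allowing generation lifetime tg (s)."""
    return 1.0 / (SIGMA * VTH * tg)

nt = max_trap_density(1.0e-4)    # ~0.1 ms lifetime
ppb = nt / SI_DENSITY * 1.0e9    # as parts per billion of the lattice
print(nt, ppb)                   # ~1e12 cm^-3 and ~0.02 ppb
```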
A lot of these turn out to be traps, deep levels, that destroy the lifetime. But, anyway, just to give you a rule of thumb, you need to be in the ppb range or lower. In fact, it turns out the requirements are a little more severe than that simple equation I just did, because these elements also have a very bad tendency to accumulate in heavily doped regions, like in the pn junction or the source/drain. We'll talk about why they like to be in those regions. So that's the worst place for them. So that's why we require even lower concentrations of these metals, like iron. Back here on page 25, I said 10 to the 12 per cubic centimeter. In fact, if you look at the NTRS or ITRS, it'll ask for much less than that. So the truth is, it's impossible to keep our wafers that clean, no matter how careful we are in the lab or in the process fab. So what people do in manufacturing, anyway, is use a process like gettering to try to remove these unwanted iron, gold, and copper atoms from the regions of the wafers where the actual devices are located. This is a special type of process, usually done at the beginning of the process-- or maybe in the middle of the process-- where we do something to the wafer to try to attract these impurities to another location. So we'll talk about three levels of contamination control. First is the clean room: how do we keep the clean room clean? Second is how we clean the wafers. And third is how we do gettering. So, for today, I'm just going to do this level one control. Hopefully some of you have seen clean rooms. But, basically, what we do in a clean room is we try to keep the air free of particles. And we do this in practice with these tremendous high-efficiency filters that really trap all particles. So the air coming into the clean room comes from the top. These high-efficiency filters remove all of the particles, and the air goes out the sides. And it keeps washing the air continuously.
We have this constant flow of air taking particles and removing them down to the floor so that they don't get on your wafers. And, in fact, we actually quote the class of a clean room, where the room gets cleaner as the class goes down. So this is a plot of the total particles per cubic foot in the air of a clean room as a function of the size of the particle. And each line is parametrized according to a number. That's the class of the clean room. So, for example, here in MTL, the clean room is nominally rated, at least the integrated circuits lab, at about class 100. So that's this line right here. So a class 100 says, here we are at a micron-sized particle, you can read off roughly the number of particles per cubic foot, in the range of 50 or something like that. Gives you an idea of the number of particles per cubic foot. If you have more than that, then you move to a class of 1,000 or 10,000. So that's a dirtier room. The higher the class, the dirtier the room. A lower number, like a class one, that's the best-- that's the cleanest type of clean room. Here's an example I just-- here on slide 28-- of just some pictures of different clean rooms. And, basically, we clean the factory environment-- I mentioned these HEPA filters, these high-efficiency particle filters, that recirculate the air constantly. So it's being filtered all the time. We put people in little suits called bunny suits. Here's an example of a student in a university R&D lab wearing a bunny suit. So you try to cover yourself. People are the worst source-- the biggest source of particles. We shed skin constantly. We can't help it. It's part of our natural thing. Even if you don't have dandruff, it doesn't matter. You're shedding small particles constantly. So we try as much as possible to keep our clothes contained in these clean suits. This is in a university-style R&D lab. In manufacturing, here's an industrial fab. This is a picture I pulled off the Intel website a while ago.
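The lines on that plot follow, I believe, the old FED-STD-209 relation (an assumption on my part; the lecture just reads values off the chart): for class N, the allowed count of particles per cubic foot with diameter at least d microns is roughly N times (0.5/d) to the 2.2 power.

```python
# Sketch of the FED-STD-209-style cleanroom class relation. This formula is
# my assumption about the chart's parametrization, not quoted from the slide.

def particles_per_ft3(cleanroom_class, d_um):
    """Approximate allowed count of particles >= d_um microns per cubic foot."""
    return cleanroom_class * (0.5 / d_um) ** 2.2

# Class 100 at 1-micron particles: a couple dozen per cubic foot, the same
# ballpark as the roughly-50 value read off the chart in lecture.
n = particles_per_ft3(100, 1.0)
```

By definition, the class number itself is the count at the 0.5-micron reference size.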
Actually, we sometimes put the workers in spacesuits so that not even their face is exposed, so they can't spit particles, so they can't breathe particles onto the wafers. So here's a worker in a spacesuit. And he's breathing through a tube. We also, besides the air, have to use ultra-high-purity water. We have to filter the water so there are no impurities in the water: no ions, no sodium, no potassium, no particles. And all the chemicals we use are produced in special chemical factories where they reduce the amount of sodium, potassium, iron, and copper, all that, from the chemical. So everything is ultra-high purity; all the gases are purified. So everything that goes onto that wafer surface is purified. And we use a lot of protocols in manufacturing. Oh, by the way, these days, in 300-millimeter fabs-- so this picture is a 200-millimeter fab; 300-millimeter is the next generation-- they're going to actually even reduce the number of people walking around, even in spacesuits. And what they do is they use robots. Here's a robot. I took this off of a commercial website. This is a track near the ceiling of a 300-millimeter fab. And these little pods, these little square or cubic pods, hold wafers in them. And these pods travel on the track. They automatically go down to a piece of equipment, open up, and the wafers get loaded. So the wafers actually never get exposed to the clean room environment. They're kept in pods the entire time. They go from a pod into a vacuum system, or from a pod into another tool. And so they minimize the exposure this way. So we even have now a lot of robotics, just to clean up the process even more. On slide 29 I talk about-- we'll continue this in more detail next time-- but I talk about wafer cleaning. We do introduce onto the wafer a certain amount of impurities ourselves. In the photolithography process, remember, that organic material, photoresist, it turns out, is a contaminant.
But we have to use it because it happens to do-- it does photolithography. But we have to remove it at every step because we certainly don't want it on the wafer when we're doing any high-temperature processing. So after we do the resist strip, we typically then have to clean the wafer using a special clean that we'll talk about next time, the RCA clean. And then we do the process step. So the process of semiconductor fabrication consists of: pattern something, transfer the pattern, get all the resist off, do a clean, and then maybe go into a high-temperature step, and repeat the whole thing again. So you can have tens-- five, 10, who knows how many-- cleans that go on, depending on the number of mask levels and the number of high-temperature steps. So why are we trying to remove these things? As I already indicated, what we're trying to remove are these organic films, or trace organics. All of the organics are bad because they can cause leakage in the gate. And they can act like a mask. They prevent you from cleaning the underlying surface. So you need to get organics off. Typically, the first step of any cleaning is to try to remove organics. Particles, well, we already-- that's fairly obvious. They can cause defects. Particles very often carry metals in them, because you get particles from the wafer handling equipment. Most equipment's made of stainless steel, which is what? It's iron and nickel. So it's a very-- particles are bad because they tend to carry metals into the wafer. Alkali metals, I've mentioned: sodium, potassium, calcium. These move easily in oxides. They can shift the threshold voltage, like we saw in our calculation, and they can cause reliability problems. And we already mentioned the transition metals having bad properties. They reduce the carrier lifetime. They reduce the mobility, which is an important property. They're also bad because they can diffuse very easily into the wafer. And they can roughen the surface.
So these have a very, very bad reputation among CMOS people-- gold, copper, and nickel, iron. So we take major organic contaminants, like photoresist, and we typically remove them in a solution of sulfuric acid or an oxygen plasma prior to doing any real refined cleaning, like the RCA clean. And then we remove trace organics in the RCA clean, along with metals. So that's the philosophy. And we'll talk about that next time. This is just a picture of an RCA cleaning bench. And next time we'll go through how it works. It's a standard process. And, again, it removes trace organics, not heavy organics, trace heavy metals, and alkali ions. And there are very specific chemicals that are used, which we'll discuss next time too-- how that's done. So let me just summarize what we talked about-- a wide range of things. We talked about native point defects, these vacancies and interstitials. They can be either neutral or charged. The charged defects obey the same statistics as shallow donors and acceptors. So as we move the Fermi level and we dope the crystal n-type or p-type, I can change the concentration of these defects. The good thing, though, is they're still low enough in number that I don't need to include them when I'm calculating the Fermi level position. So that makes it easier. There are both v-type, or vacancy, defects and i-type, or interstitial, defects. And we'll talk about how they play a very important role in understanding the processes. And I introduced the idea of contamination control. That's in Chapter 4. And we'll discuss that in greater depth next time. OK, so that's all I have for today. And, on Tuesday, remember that your first problem set, your first homework set, is due. Thank you.
MIT 6.774 Physics of Microfabrication: Front End Processing, Fall 2004. Lecture 13: Ion Implantation and Annealing, Physics of Energy Loss, Damage, Introduction to TED.

JUDY HOYT: It's very simple, but doesn't fully capture all of the reality. The simplest analytic expression that captures profiles reasonably well is the Pearson IV, which has four moments, and those moments are generally tabulated in tables. We also presented the idea of Monte Carlo or numerical solutions-- for example, Monte Carlo simulation, which is quite accurate, and another thing called the Boltzmann transport equation solution. The other thing we mentioned is that there's this process called ion channeling. Ion channeling itself is quite a challenge when calculating profiles. Calculating the profiles pretty accurately in amorphous solids, or amorphous materials, is reasonably routine. But once you have to model ion channeling, it gets a little tricky. Today I want to cover some things that we didn't get a chance to talk about. We never even talked about, last time, the physics of the modeling of the ion implantation process, that is, the physical energy loss mechanisms of the ions as they traverse the solid. We want to talk about that. Once we've covered that, we want to talk about damage. How do we model the damage that takes place in the silicon substrate, and how do we anneal it? And then I'll give a very brief introduction to transient enhanced diffusion. The next lecture, next Tuesday, is going to be dedicated to talking about TED, but I'll at least briefly introduce the topic. OK. Let's go on to slide number 2. The right hand side of this slide is just a schematic picture, a cartoon, where you're shooting an ion into a silicon surface. And all these little open symbols here represent silicon atoms on some kind of a lattice. You see an ion, which is this dark bullet here. It comes in and it hits the target. And when it does, it may have some kind of a nuclear collision here.
It may collide with the target atom and be deflected. There'll be some energy loss there. Then the target atom itself is going to recoil and will have some energy imparted to it. But that deflection is going to knock the ion a little bit into a different direction, change its direction. And interestingly, you see it going down a path. And right here, after this collision right here, interestingly, the ion got knocked into a channel-- that's what this is supposed to represent. You see, its direction got changed by this nuclear collision, and now it's working its way down a little bit of a channel. Still losing energy, perhaps at a slightly different rate. And still, with each collision, creating a silicon recoil that's been knocked off its lattice site. So just a very schematic picture. So again, we're bombarding the wafer. The energy of this incoming dark-colored ion is typically anywhere-- well, some of the lowest energy implants being done today are one kilovolt, or maybe even slightly less. That's very low, but that is being done. And some of the highest energy implants-- for making, in bipolar, they make selectively implanted collector regions that are quite deep-- can work in the 1 MeV range. So there's a very wide range that people are playing in. Typically, implants are between something like 10 to 100 keV, but people do use the full range. Now look at the binding energy of a silicon atom on the lattice. That's only 15 electron volts. So you can imagine that we're coming in with a species that has thousands of times more energy than the binding energy. So obviously, it's quite possible to have billiard-ball-like collisions, where you knock these silicon atoms off of their lattice sites. So the ions collide, and they collide elastically. And as a result, we have ion deflections, which I just showed you here, where the ions change their direction. And when you collide, of course, you lose a certain amount of energy.
The ion loses some energy, and it displaces silicon atoms, which we end up calling recoils. In addition to this collision, this sort of pool-table type of collision of billiard balls, the ions can also suffer sort of an inelastic drag force from the target electrons. And so this leads to electronic stopping, where you lose ion energy and it actually heats the lattice, to a certain extent, because they're in this medium that has electrons in it. Eventually, this ion and all the ions come to rest after they've lost all of their energy in both collisional processes and in this drag force. And channeling, as I mentioned, we talked about last time. Along certain directions, ions can travel in the crystal with very few collisions and little drag. So they can go deeper than they would otherwise, and that's tough to model. So if we go to page 3, or slide 3, how do we model these range statistics? Well, what we do is we write down the total energy loss during an ion trajectory, and we write it as a linear sum. So we treat these two processes independently, nuclear losses and electronic losses. So we write the rate of energy loss as a function of distance as this equation, de by dx. It's a negative number, because you're losing energy. It's n, where n is the target atom density, atoms per cubic centimeter, times the sum of these two quantities, s sub n plus s sub e. Again, n is the target atom density. S sub n is a function of energy. It's the nuclear stopping power. And it has units of energy times area. So eV-centimeter squared is a typical unit. And s sub e is the electronic stopping power, again in eV centimeter squared. You can see the units work out. If s has units of eV centimeter squared and you multiply it by atoms per cubic centimeter, you get a certain amount of eV per centimeter, or energy loss per unit length going into the crystal.
The thing to remember is that these functions, these stopping powers, are in general going to be a function of the energy. That is, the rate at which you lose energy is a function of how fast you're going, of how much energy you actually have. If we know s sub n of e as a function, and we know s sub e, the electronic stopping, as a function of energy, then you can simply compute the range by doing this integral. So you integrate. The range would just be the integral from 0 to e naught-- whatever your incoming energy is, your ion-implanted energy, say 100 kilovolts-- of d e divided by the sum of the two stopping powers. And here again, the density is a constant number. It's just been pulled out. So you can find the range mathematically if you know the physics of these two stopping powers. So that's where a lot of the modeling has been done, in understanding the physics of s sub n and s sub e. And we'll first talk, on slide 4, about nuclear stopping. Above a certain energy-- it's about half a kilovolt, or 500 eV. Now be careful, because we're approaching this range for some of our very lowest energy implants; making very shallow source drain extensions, people are using quite low energies. But if you're above this, it's reasonably valid to model nuclear stopping as a classical two-body collision between a silicon atom that's sitting still and an incident ion that's coming in with some velocity. So it can be modeled, just like on a pool table, as two balls colliding, using the conservation of energy and the conservation of momentum principles that you learned in your basic physics class. Now, why does this have to be above a certain energy? Why is that the case?
Well, it turns out, if the energy is lower than this, or significantly lower, then what happens is, as this incident ion comes in, instead of looking like a billiard ball, a hard-sphere model, this ion has time to sort of hang around the target atom, and the interaction is not just like a hard-sphere collision. You have other types of interactions. You can have multiple interactions with the lattice, maybe, in the sense that it's going slow enough that as it passes by the nucleus, it has different types of interaction. So if your energy is low enough, these models break down. But the good thing about ion implantation is that most of the time you're well above that threshold. So, as you remember from your basic physics class, we have an incident ball coming in, hitting another ball that was originally at rest. This incident ion is going to be scattered at some angle. And you can figure that out once the interaction potential, v as a function of r-- r being the distance between the two bodies-- is known. Then you integrate that along the path of the ion, you can calculate the scattering angle for the collision, and you can do it once and then look in a lookup table. So you apply basic physics, conservation of energy and momentum, and you can get these numbers. Again, the interesting thing about ion implantation, of all the processes we talk about in this class or anyone does in IC fab, is that it's the only one where, from first-principles physics, you can actually predict something. You don't have too many free variables the way you do in some of these chemical processes. So what it boils down to, as far as nuclear scattering goes, is figuring out what is the appropriate interaction potential, the nuclear scattering potential?
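One standard consequence of the two-body elastic kinematics just invoked (textbook mechanics, not something specific to the slides) is the maximum energy an ion of mass m1 can hand to a stationary target of mass m2 in a head-on hit: Tmax = 4 m1 m2 / (m1 + m2)^2 times E.

```python
# Maximum energy-transfer fraction in a head-on elastic collision, from
# conservation of energy and momentum. Standard mechanics, stated here as
# a side note to the lecture's billiard-ball picture.

def max_transfer_fraction(m1, m2):
    """Fraction of the ion energy transferred in a head-on elastic hit."""
    return 4.0 * m1 * m2 / (m1 + m2) ** 2

si_on_si = max_transfer_fraction(28, 28)   # equal masses transfer everything
b_on_si = max_transfer_fraction(11, 28)    # boron on silicon, about 0.81
as_on_si = max_transfer_fraction(75, 28)   # arsenic on silicon, about 0.79
```

So even a light boron ion can hand a silicon atom most of its energy in a single favorable collision, far more than the roughly 15 eV needed to displace it.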
Well, if you think about these two bodies, an ion and a silicon atom with the ion approaching it, the first thing you might think about is, you know the nucleus is charged, and it has a certain charge, which is the charge on the electron q times its z number. OK. So you might imagine that the simplest type of potential interaction between these two charged nuclei is just the coulombic potential, which is just given by the z numbers multiplied together divided by r. So it's a 1 over r type of good, old-fashioned Coulomb potential. That's a little bit too simple, though, because the atom is not just a nucleus. An atom has an electron cloud around it. So the electrons around the target atom nucleus actually end up screening the core potential of the nucleus, to a certain extent, from the incoming ion. So this is a little bit oversimplified. So what people do is they put in a generalized function, f, which they multiply by the classical unscreened Coulomb potential, and then this becomes the screened potential, v as a function of r. F is some generalized function of r over a. Again, r is the distance between the two nuclei, and a is a Thomas-Fermi parameter, which is related to the Bohr radius. It's some function of the Bohr radius times a function of the z numbers. So a is some measure of, if you want to call it that, the size of the atom, in some way. And r is, of course, the distance between the two, the colliding ion and the silicon atom. So a very common potential that's often used is an exponential function. You assume that the electrons exponentially dampen out the Coulomb potential, and people use what's called a Thomas-Fermi potential. So this function here, shown in the third equation on slide 5, is the Thomas-Fermi potential for ion implantation. So it's got a 1 over r dependence times an exponential, e to the minus r over a, where again, a is this Thomas-Fermi screening parameter. So that's one potential.
That's one that was often used in modeling of ion implantation. So it turns out that, instead of using the exponential, if you use a screening function f that's just a over r, then the potential just goes like 1 over r squared. That's another way of doing the screening function. In that case, the nuclear stopping power can be approximated by a constant. So it turns out that this number-- and if you want to see the derivation, you can go to this article by Gibbons in the Proceedings of the IEEE, oh, back in 1968-- in fact, they derive that it can be a constant number that depends just on the z number of the ion, and the mass of the ion, and the z number and the mass of the substrate. So this is a simple equation you can use if you're stuck on a desert island. If you need to know the nuclear stopping power, you can just program this into your calculator. And I just did a simple calculation for phosphorus, which has a z number of 15 and an atomic mass of roughly twice that, or 31. Silicon has a z number of 14, a mass of 28 amu. You plug these numbers in and you do the calculation, and I get an s sub n number of about 550 kiloelectron volts per micron. So that's how much energy I lose for phosphorus going in, assuming it's a constant linear number: 550 keV per micron. Just gives you a rough idea. And that's, again, assuming the screening function is just an a over r type of screening, due to the electrons around the nucleus. And in fact, if you go on to slide 7 of your handout, I took this plot from Mayer and Lau's book. I referred to Mayer and Lau earlier in the course. In fact, handout one has the full reference for the textbook. And what he's plotting here is de by dx. So again, that's keV per micron. So it's a stopping power. Or, if you want, you can change both the energy unit and the length unit; it's equivalent to eV per nanometer.
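The desert-island calculation can be sketched in a few lines. The prefactor below is the one I believe appears in the common textbook form of the Gibbons result; treat it as approximate, and note that different references use slightly different constants, so the answer lands in the same several-hundred keV per micron ballpark as the 550 quoted in lecture rather than matching it exactly.

```python
# Sketch of the constant nuclear stopping power from a 1/r^2 screened
# potential (Gibbons-style). The 2.8e-15 prefactor is my assumption from
# the common textbook form, so the result is approximate.
import math

def s_n_constant(z1, m1, z2, m2):
    """Energy-independent nuclear stopping power estimate, eV*cm^2."""
    return (2.8e-15 * z1 * z2
            / math.sqrt(z1 ** (2 / 3) + z2 ** (2 / 3))
            * m1 / (m1 + m2))

N_SI = 5e22                                   # cm^-3, silicon atom density
s_n = s_n_constant(15, 31, 14, 28)            # phosphorus into silicon
dE_dx_keV_per_um = s_n * N_SI / 1e7           # eV/cm -> keV/micron
```

The unit conversion at the end is the same bookkeeping as on the slide: eV-cm^2 times atoms per cm^3 gives eV per cm, and dividing by 10^7 converts that to keV per micron.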
I prefer to think of keV per micron, but it's the same either way. And it's on a log-log plot, and the x-axis is the energy of the ion. And there are two different types of stopping powers shown here. Right now, all we've talked about is nuclear, but let's focus on this dashed line here for phosphorus. That starts up at 10 keV, and it's about 500 or so, a little over 500 keV per micron. And then, as you can see, in reality, the nuclear stopping power given by this dashed line is actually going down as you go to higher energies. But lo and behold, the number we just calculated on our simple calculator, again assuming a Thomas-Fermi screening function of a over r, comes out at low energies pretty close, asymptotically, to what people calculate with more sophisticated screening functions. Indeed, my calculation here of 550 keV per micron at 10 kilovolts is pretty darn close to that dashed line for phosphorus. So it's interesting. And you can check it out yourself for arsenic. Again, you know the z number and the mass; you can check out how accurately, at low energies, it asymptotically approaches. So what do we notice? Well, obviously, arsenic has a very high z number; it's a high-mass element, and its nuclear stopping power is quite large. It's on the order of 1,000 keV per micron, something like that. Phosphorus is less, and boron nuclear stopping is even lower, an order of magnitude lower than for arsenic. The other thing on this plot, besides these nuclear stopping powers-- the s sub n ones are nuclear-- is these straight lines that increase with energy. These are the electronic stopping powers, and we'll talk about that next. This is the type of stopping that's due to the drag force of the electrons in the substrate. So let's go on to slide 8 and talk exactly about that. Why is it called non-local? Well, the nuclear stopping power is local in the sense that you're having a collision. The ion is coming very close to the atom in the substrate.
You're having a deflection. So it's an actual physical collision, and they interact by this coulombic force. Electronic stopping, by contrast, can be viewed as non-local or local-- there are two ways of viewing it. You can imagine some kind of drag force caused by the fact that I have a charged ion coming in, going at some velocity-- As plus, arsenic plus, or boron plus-- in this sea of electrons. Where does the sea of electrons come from? Well, every atom in the crystal has electrons associated with it. And in fact, there are covalent bonds. There are four covalent bonds for every silicon atom, with eight electrons participating per atom. And these covalent electrons, essentially, can produce a retarding effect on the ion going through. So you can think of it as sort of a retarding field. Imagine the silicon, or the substrate, as some kind of dielectric medium. You have an ion coming through it at some velocity. This blue circle was meant to represent, say, the boron or the arsenic ion. And there's this retarding electric field. So the interesting thing, the thing you need to note, the main point is, this is a dissipative loss mechanism, but it doesn't change the direction. The electrons are way too light. They're very, very light. They can retard or slow down the ion, but they're not going to change its direction. There's no deflection as a result of this, unlike nuclear scattering. So that's one way of thinking of it, as a drag force in a dielectric medium. The other way is you can imagine a quote unquote collision-- now, just be careful in the use of the word collision-- with the electrons around the atoms, the ion transferring some of its momentum to those electrons. So you can imagine, here's this target atom. It's got this electron cloud around it. And the ion transfers momentum, and that actually results in locally slowing down this ion, basically reducing its velocity.
Again, there's no change in this mechanism, no change in the direction of the incoming ion when this process happens. So both of these don't change direction, and both mechanisms are related to, or involve, the speed or the velocity of the ion. OK. So if we go to the next slide, slide 9, to first order, people have found they can write the electronic stopping power as some constant times the velocity. So this drag mechanism, the keV loss per micron due to electronic stopping, increases directly proportionally to the velocity of the ion. And what is the velocity? Well, the velocity just goes like the square root of energy, right? So you can write electronic stopping as some constant k times the square root of e. And in fact, very roughly, you can approximate k by about 2 times 10 to the minus 14th square root eV centimeter squared, as shown up on the top of that slide. So this is an approximation, but to first order, it seems to work. I should note here, and I'm noting in the upper right corner, that a lot of the improvements in ion implant modeling over the last 5, 10 years, or however long, have actually come from a better, more accurate treatment of electronic stopping. Nuclear stopping is very much nuclear physics, and that's been known for a long time now. But this electronic drag force is a little bit more mysterious, a little bit more difficult to model. And this is where a lot of the improvements have come in the literature. And particularly, this affects light ions like boron, for which most of the stopping is actually electronic. Boron isn't very heavy. It doesn't experience that much nuclear stopping. Most of what slows down a boron ion implanted into silicon is electronic stopping. So boron profiles have become more accurately modeled over the last x number of years, because people have a little better models for electronic stopping. So if we look at this plot on slide 9, this is the total stopping power.
It plots both the nuclear and electronic. So again, the units are keV per micron on the vertical axis, and keV on the horizontal axis. And these are the same plots I just showed you; they're just a different color. Here's arsenic nuclear stopping, phosphorus nuclear, boron nuclear stopping, and here is the electronic stopping. And again, it's just basically proportional to the velocity. There's only a small dependence, and we don't even show it here, on the type of ion. So we're just drawing this as this one black line. So interestingly, what you see, an interesting energy to point out, is the energy at which these lines cross. And that's called the critical energy, e sub c. And at that point, the nuclear and electronic stopping are equal. Beyond that, the electronic stopping has actually taken over. It's much larger. Because again, the nuclear stopping power is going down as you go faster, or increases as you go slower. But the electronic is going in the opposite direction. So for boron, that's at 17 keV. Below that energy, basically, nuclear stopping will be important. Above that energy, which is a lot of our implants, boron is pretty much being stopped by electronic stopping. Phosphorus is about 150. So at any energy less than 150, you're going to be dominated by nuclear stopping. Above that, it will be electronic. And arsenic is almost always dominated, at least in the beginning of its path, by nuclear stopping. The interesting thing about this, though: think about an individual ion. As an ion comes in, of course, it comes in with a lot of energy, 10 keV, 100. But by the time it stops, it's got 0. So every ion has to traverse this plot. As I first come in, let's say I started coming in at 10 keV-- or rather, 100 keV-- and I'm right here. So as I'm coming in here for boron, you'd say at 100 keV, well, electronic stopping is dominant. That's true. But as the boron ion slows down, it walks down this curve toward this point.
And when it gets to very low energy, just before it stops, in fact, nuclear stopping always takes over. Because at low energies, you really can have a lot of billiard-ball-like collisions. So that's why at the end of the range, back at the depths of the implant near the end of range, there's a lot of nuclear stopping that goes on. And we'll talk about what impact that has for damage profiles. And here's just an example, on page 10, of a different type of plot. That was a log-log plot. I've plotted these nuclear and electronic now on linear axes, just so you get a feel for what they look like on linear. So this is de by dx. And this axis is, well, it's actually the square root of energy. That helps linearize it. This straight line, of course, then, is the electronic stopping. And this other line here, that peaks at some energy, E1, and then starts to decrease down, that's the nuclear stopping. So on a linear scale, you can actually see what it looks like. And again, I took this from Mayer and Lau's textbook. In general, the nuclear stopping dominates, as we said, at low energy, towards the end of the range. And that's the location in the substrate where the nuclear collisions are going to produce most of the damage. So we call that end-of-range damage. At very high energies up here, particles travel very quickly. They have less time to interact with the nucleus. So nuclear stopping is not as important. They have less interaction time, and so they tend to be dominated by the drag. Nuclear stopping is going down with energy, where electronic is increasing. OK, so that gives you an idea of some of the physics of the loss mechanisms. So let's go on to slide 11. So let's say I have these nuclear stopping powers as a function of energy, and I have the electronic. How do I get from that to a calculated profile? Well, we know we just simply need to do this integral if you want to compute the range.
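That range integral is easy to evaluate numerically. A minimal sketch with illustrative stand-in values (not fitted to any real ion-target pair): take S_n constant, as in the low-energy approximation above, and S_e = k times the square root of E. The closed form, obtained by substituting u = sqrt(E), serves as a check on the numerical result, and the same two stopping terms show why the nuclear share of the energy loss approaches one near the end of the range.

```python
# Numeric sketch of R = (1/N) * integral_0^E0 dE / (S_n(E) + S_e(E)),
# with S_n constant and S_e = k*sqrt(E). All parameter values below are
# illustrative assumptions, not fitted data.
import math

N = 5e22          # cm^-3, silicon atomic density
S_N = 1.0e-13     # eV*cm^2, assumed constant nuclear stopping power
K = 3.0e-16       # sqrt(eV)*cm^2, assumed electronic-stopping prefactor

def range_numeric(e0, steps=100_000):
    """Midpoint-rule evaluation of the range integral, result in cm."""
    de = e0 / steps
    total = 0.0
    for i in range(steps):
        e_mid = (i + 0.5) * de
        total += de / (S_N + K * math.sqrt(e_mid))
    return total / N

def range_analytic(e0):
    """Closed form via u = sqrt(E); the integrand becomes 2u/(S_n + k*u)."""
    u = math.sqrt(e0)
    return (2.0 / K ** 2) * (K * u - S_N * math.log(1.0 + K * u / S_N)) / N

r = range_numeric(1.0e5)    # range for a 100 keV implant, in cm

# Near the end of the range the nuclear share of -dE/dx approaches 1,
# which is the end-of-range damage argument: at 1 keV it is already >90%.
f_nuclear_low = S_N / (S_N + K * math.sqrt(1e3))
```

The numeric and analytic forms agree to several digits, which is a sanity check on both.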
Or you can, directly from Monte Carlo simulations, actually simulate those billiard-ball and electronic stopping processes. Many years ago, three folks, Lindhard, Scharff, and Schiott, actually took these equations, put in nuclear and electronic stopping powers, and computed the moments of the distributions, given s sub n and s sub e. And in fact, that so-called LSS theory generated these tables-- I think we showed these tables last time for common dopants in silicon-- tables of the first few moments. Maybe the first three moments: rp, this delta rp, or the standard deviation sigma, and the skewness gamma. They are tabulated, and they were calculated originally by LSS, or they've been fit to experimental data. But you can calculate these moments from first-principles physics estimates of s sub n as a function of energy and s sub e as a function of energy. OK. So that's basically how people do the calculations. What it boils down to, what the calculations boil down to, is how accurately do I know the nuclear stopping power as a function of energy, and the electronic stopping? If I know that, I can use a computer; you can regenerate the old LSS statistics, or you can actually use a computer to do Monte Carlo simulation, once you know those stopping powers. Given that, OK, so we can figure out where the ions end up in the lattice. Now we want to know, what kind of damage does this incoming ion, coming in at a certain energy, do to the lattice? So imagine I have 30-kiloelectronvolt arsenic coming into the silicon lattice. And if you look that up in the table in your textbook, you'll find it has a range of 25 nanometers, or about 250 angstroms. So 250 angstroms-- actually, you can figure out that that is equivalent to roughly 100 atomic planes. It's about 0.25 nanometers, or 2 and 1/2 angstroms, per plane, the interplanar spacing. So you can imagine this arsenic ion is coming in. It goes through about 100 atomic planes.
And how many, so in fact, you can think of this 30 keV arsenic ion as coming in like this. This squiggly line is meant to represent its path. Here's where it ends up. And you can think of a cylinder around that, sort of a cylinder that represents the region which is damaged, because it does a lot of nuclear stopping and you have a lot of recoils generated until it comes to stop. And in fact, there's a simplified formula, the Kinchin-Pease formula, that people use to figure out, roughly, the number of displaced particles. So that'd be the number of displaced silicon atoms created by an incoming ion. And basically, in simple terms, you can imagine it might be related to the energy of that ion, which is 30 kiloelectron volts, so that's 30,000 electron volts, divided by the displacement energy. Well, we use a factor of one half in front of that. But the displacement energy, we said, was 15 electron volts. So, just as an order of magnitude estimate, by comparing the incoming energy to the energy it takes to displace an atom from the lattice, we're talking about maybe 1,000 recoils of silicon created by this one arsenic atom. That's quite a bit. So you're essentially doing little ion implants inside the substrate every time an arsenic atom displaces silicon. It creates what they call silicon knock-ons. The silicons are knocked on, and they themselves have a certain energy, and they can do more damage. So it creates this sort of cylinder-like region of damage, just due to one ion. And in fact, that was a very schematic cartoon that we just drew with the computer. This is a little more sophisticated on slide 13. These are actually what's called molecular dynamics simulations, where people build up a model of the silicon lattice in the computer. And of course, this was done at Lawrence Livermore National Lab, which would have huge supercomputing capability. And they actually model the physics of an incoming atom or ion. Here's a 5 keV boron ion coming in.
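That order-of-magnitude estimate is easy to reproduce. This is just the displacement count N ~ E / (2 * E_d) described above, with the lecture's numbers: 30 keV incoming energy and a 15 eV displacement energy.

```python
def kinchin_pease(E_eV, E_d=15.0):
    """Displacement estimate: N ~ E / (2 * E_d), the factor-of-one-half form."""
    return E_eV / (2.0 * E_d)

# 30 keV arsenic with a 15 eV displacement energy -> about 1,000 recoils
recoils = kinchin_pease(30e3)
print(recoils)   # 1000.0
```

So a single 30 keV arsenic ion generates on the order of a thousand silicon knock-ons, which is why even a modest dose does so much lattice damage.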
This ion is implanted into the substrate. And they actually follow a whole bunch of ions and actually look at what happens to these displaced recoils. And each little snapshot picture here, I apologize, it didn't come out very well when we copied it, is a snapshot in time. So the first little cube is supposed to represent a region of the silicon after 0.1 picoseconds. So this ion has come in, and just at the time of 0.1 picoseconds, there's this little cloud here. You can see this little cloud. That cloud represents all the displaced silicon atoms. So they're actually following all these atoms displaced from their lattice sites. And then at five times that time, so at half a picosecond, look at how that cloud has grown. It's gone in deeper, and it's expanded sort of laterally. And then, interestingly, at 10 times that again, roughly, at 6 picoseconds, the cloud looks a little bit smaller. It's reached some size, not that much different from its first size, but why? Well, what's happened is that some of these silicon atoms that were knocked off have found vacant lattice sites. So you're creating all these silicon atoms that are displaced, but they find holes in the lattice where they can sit. And so this becomes sort of what the off-site silicon atoms look like after about 6 picoseconds. So people actually do try to simulate this from first principles physics as well. So that's a more modern kind of calculation. On slide 14, I'm showing you, maybe, a little bit more traditional or old fashioned calculation, what people used to do in the 1980s. In fact, this was taken from a book called the Handbook on Semiconductors, the volume on ion implantation, published back in 1980. But what it is, rather than following every atom, is they found a way to calculate the energy deposited into nuclear processes. Again, you know the nuclear stopping power as a function of energy.
So as we integrate those equations in depth, we can figure out, at any given depth, we know the energy of the ion, we can integrate that up and figure out how much nuclear stopping is taking place at that depth. And therefore, you can think of how much of the energy loss in keV per micron or eV per angstrom is there at a given depth. And here's a picture of that calculation or a plot, its so-called damage density. So it's the amount of energy that is lost at a given depth due to nuclear processes, and it has the units of eV per angstrom as a function of depth. And this is for a particular ion implant. This is boron 11 being implanted into silicon at 100 kiloelectron volts. And you can see the damage energy. The damage energy density has a certain profile. And this solid line represents a calculation where it includes the silicon recoils, that is the silicon ions that are generated. And the dashed line is when we have only the primary ion. So there's not much difference in this case. But here's an example of boron silicon. It's a relatively light ion. And the interesting thing that it's plotted here, is as a function of x over rp. So remember, rp is going to be roughly close to where the peak of the boron. So the boron would peak here at 1. So the range of the boron atom is actually much greater than the range of the average silicon recoil. That's because the boron is light, so it can't push the silicon that much deeper in. So actually the damage density for silicon recoils doesn't contribute that much. So you can see that these two, dashed and the solid, look pretty much the same. But interestingly, where does the damage peak? Pretty close to rp. In fact, it's just at maybe 80% of rp. And that's a good rule of thumb you can use if you want to know, all right, I'm going to ion implant something into a silicon substrate. Maybe I don't know how to calculate the damage energy density but I can know how to calculate the profile of that ion. Where is the maximum damage? 
Well, it's close to rp, but in fact, it's just shy of rp. So if you want to do a lot of damage, or usually you don't, or you want to minimize it, you can figure out, just below rp is where you're going to peak in terms of doing the most damage to the substrate. Here's an example on slide 15, a different situation. Now this is a heavy ion. The same kind of calculations published in Gibson's book, the Handbook on Semiconductors ion implantation volume. Again, damage energy density in eV per angstrom, again, versus x over rp. But look at this. This time, the solid line, now again, includes recoils and you get this sort of distribution. The dashed line is for the damage done just by the antimony itself. And you get a very different damage distribution. So what this says is for a heavy ion like antimony, antimony is big enough that it can impart a lot of energy to silicon ions, or the silicon recoils. And the silicon recoils transport a lot of that damage energy deeper into the substrate. So when you calculate damage and you have a heavy ion coming in, if you don't take into account the silicon recoils, you're going to get a somewhat inaccurate distribution of where most of the damage is done. So you really need to use the solid line and include the silicon recoils in your damage, because the silicon atoms then go on and damage the lattice further. They impart energy to other silicon atoms. This is for 100 kilovolt antimony. But again, look where the peak is. The peak in the damage still occurs at about close to 80% or so of the projected range of the antimony ion. So that's a handy rule of thumb. Now, what can we do with a damage energy profile like this, or damage density versus x, or x over rp? Well, if you know how much energy deposited in a given angstrom will tend to produce amorphization, you could actually use this plot to figure out which part of the lattice, at which depth, gets amorphized.
And so here's an example. What I'm doing here, I'm taking this calculation, which we showed for boron a couple of slides ago, and I'm assuming that there's a threshold for amorphization. Now this has been both calculated and people have tried to measure it. I'm assuming the threshold for amorphization at room temperature is 6 electron volts per angstrom. So people have kind of looked at this either theoretically or by looking at actual amorphous zones, and said that when you get above about 6 electron volts deposited into nuclear processes per angstrom of depth, the silicon has so much damage that it goes amorphous. So it's lost its crystal structure. In fact, if I draw a line here at 6 and I see where it cuts this profile, that would say that this substrate, if you're implanting boron at 100 kilovolts, I don't know what the dose was in this particular case, would be amorphized between about half of rp and up to about rp. So in that region between half rp and rp, between these two lines, it's predicted according to this model, and again, it's going to be very sensitive to your amorphization threshold, this region in here will be amorphized. Everything else will be heavily damaged. So you can see what boron tends to do. Because of its distribution, it tends to create a buried amorphous layer with heavily damaged single crystal silicon on either side. And I just want to say a little more about amorphization, shown here on slide 17. As we've talked about before, if you give it a high enough dose, if enough of the crystal is displaced, it becomes completely amorphous. It loses all of its crystal structure. There's no more long-range order. At this point, we have a random arrangement. And the damage accumulation is saturated, and you really can't talk about damage anymore. Once you've amorphized, there are no lattice sites, so you can't really knock somebody off a lattice site, because there is no lattice site. It's no longer a lattice.
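The graphical construction described here, drawing a line at the threshold and seeing where it cuts the damage profile, takes only a few lines of code. The damage-density curve below is an invented Gaussian-like shape peaking near 0.8 Rp, not the actual calculated boron profile; only the 6 eV-per-angstrom room-temperature threshold comes from the lecture.

```python
import numpy as np

threshold = 6.0                                   # eV per angstrom, room temp

x = np.linspace(0.0, 2.0, 1000)                   # depth in units of x / Rp
D = 9.0 * np.exp(-(x - 0.8)**2 / (2 * 0.25**2))   # hypothetical damage density

amorphous = x[D > threshold]                      # depths above the threshold
print(f"amorphized from x/Rp ~ {amorphous[0]:.2f} to ~ {amorphous[-1]:.2f}")
```

With these made-up numbers the band runs from roughly half Rp up to about Rp, a buried amorphous layer like the boron case described above.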
But it just gives you an idea. This is taken from your textbook. This is a cross sectional TEM, so these are transmission electron micrograph pictures of amorphous layer formation with increasing implant dose from left to right. So this is at very high energy. It's 300 kilovolt silicon being implanted into a silicon substrate. So the interesting thing is, the ion now is silicon. It has the same mass as the substrate. It's relatively light, though. So here at 1e15, what do I see? Well, here's the surface of the silicon substrate. And you see there's this band, this region, where there appears to be darkness. There seems to be a lot of damage, a lot of stuff going on here. Dark contrast, you can imagine, might be associated with some kind of dislocation loops or some kind of damage in the substrate. Here's a band here. Now at 1 and 1/2 times 10 to the 15th, there's a band which is buried and has lost its crystalline structure. If we were to zoom in here and do a selected area diffraction on this region, we'd find it's not a crystal. There would be no Laue diffraction spots. Whereas if you go down here, you would see crystalline material. Here's that amorphous layer at 2 times 10 to the 15th. It's increased in its width. It still hasn't reached the surface, though. And at 4 times 10 to the 15th, it's increased in depth, but again, it hasn't quite reached the surface. 5 times 10 to the 15th, almost. And finally, at 10 to the 16th, that's a very large dose, you've created an amorphous layer all the way from the surface down to some depth. I don't know what this is, half a micron or whatever, and completely amorphized from the surface down. And this is all at room temperature. It turns out, it's much easier to form an amorphous layer at low temperatures, 77 Kelvin. So if you're doing some of these materials experiments, you will sometimes see people use a special ion implanter that can hold the wafer at 77 Kelvin.
That means the damage is much more stable. Remember, we were talking about that simulation from Lawrence Livermore Labs. You saw that damage cloud and then a lot of recombination by diffusion and things. Well, if you're at low temperatures, you have a lot less recombination, so the damage tend to be more stable, especially with the light ion. So if you want to amorphize silicon with a light ion like boron or silicon, you pretty much, to do it reliably, you pretty much have to hold the wafer at low temperatures like 77 Kelvin. If you have a heavy ion that does a lot of damage, like arsenic or antimony, you can easily amorphize a layer from the surface down at room temperature. OK. So let's go on to slide, that's a picture of damage. Let's go on to slide 18 and talk about damage annealing. So we have to do annealing after we do an ion implant, because we've bashed up the crystal. So what do we want to do when we anneal? We want to remove the primary damage caused by the implant. You want to put all the dopants onto substitutional sites, that way they can be donors or acceptors. We try to make the crystal as perfect as it was when you first started. You never really get it quite as perfect. We'd also, in doing that, we want to restore the electron and hole mobilities, and the carrier lifetime, hopefully, to what it was. And you want to do all these things without really having much dopant diffusion. So it's a tough job. It's a tough job. And this is a model for damage annealing, relatively simple, but that was published back in 1991, that is very famous now, by Martin Giles, which is called the plus-one model for residual damage. Kind of a funny name. But what Martin did was he said the following things. He said that most recoiled silicon interstitials-- so again, we're going to call this guy who's recoiled, who ends up in an interstitial space, an i, or a silicon interstitial-- most of them will find a vacancy. 
And they'll recombine very, very rapidly; either during the implantation process or in the first few seconds of annealing, a lot of recombination takes place. In fact, he calculated the distribution of remaining recoils after ion implantation, without even any annealing. And what he found, there's a net excess of vacancies near the surface and a net excess of interstitials towards the bulk. But still, a lot of them recombine. So this is a calculation from Giles, and he's got concentration of interstitials and vacancies as a function of depth. This solid line, or this line on top here, that starts at around mid 10 to the 20 and then goes down, these are the total interstitial and vacancy concentrations he calculated. They're almost on top of each other. On this log scale, you can't tell the difference. So you create a tremendous number of interstitials, a tremendous number of vacancies, but most of them recombine. And in fact, what he's plotted here is the net interstitials. So that's the number of interstitials minus the number of vacancies at a given point. So that's how many are left over. So the net interstitials look like this. In fact, if you plot the net profiles here, in the near surface region you have net vacancies. So there's extra vacancies. Down in deep, you have net interstitials. But the total numbers of these net profiles, what's left when you do the subtraction of the vacancies from the interstitials, are much, much less than the total interstitials or vacancies that were actually created. So that's the point. To within three orders of magnitude, most of these guys recombine. So to first order, what he said is that you can imagine that all of the original damage recombines, and leaves behind only one interstitial created for every phosphorus atom, or for every ion that is implanted. So you can imagine. Let's say you are implanting a dose of phosphorus. In this particular example, my guess is 80 keV, 10 to the 13th.
He was saying that there's one interstitial created for every phosphorus coming in. So he would say there's 10 to the 13th per square centimeter silicon interstitials created. So it's incredibly simple. You just take the dose of the atoms that are implanted, of the ions that are implanted, and say that's the number of interstitials per square centimeter that I create. 10 to the 13th. It's called plus-one because, and intuitively what he's saying is, if your implant is totally activated, so every implanted phosphorus ion finds a substitutional site, well, where did the silicon on that site go? It had to be knocked off. That silicon is somewhere as an interstitial. OK? And that's all he said. He said, assuming your annealing is good, you know what you're doing, to first order, so this is to get an order of magnitude estimate, the number of interstitials I create is equal to the dose of what I implanted. So that's pretty simple. I implant 10 to the 15th, I have 10 to the 15th excess interstitials now in my crystal. And that's the so-called plus-one model. This just tells you a little bit more about how accurate that assumption might be. This is on slide number 19. I took this from your textbook. It's a little bit about damage evolution in time. So what it's a plot of is the annihilated interstitials, so the recombined ones, and vacancies per implanted ion. OK. And it's plotted here as a function of time. So these are basically Monte Carlo simulations of interstitial and vacancy recombination. And if you go out a short time, really, only excess interstitials remain. And these can end up forming clusters. But basically, this bulk and surface recombination take place on a very short time scale. So look at the bulk recombination. Very quickly, after only, I don't know, 10 to the minus 6 seconds, that's a microsecond, there's a lot of bulk recombination that's taken place. Surface recombination, for vacancies, takes a shorter time.
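The plus-one bookkeeping, set against a Kinchin-Pease count of how many interstitial-vacancy pairs were generated in the first place, shows the "to within three orders of magnitude" point numerically. The 80 keV, 10 to the 13th phosphorus numbers follow the example above; the 15 eV displacement energy is an assumed typical value.

```python
def total_pairs_per_cm2(dose, E_eV, E_d=15.0):
    """Interstitial-vacancy pairs generated: dose * E / (2 * E_d)."""
    return dose * E_eV / (2.0 * E_d)

def plus_one_per_cm2(dose):
    """Net interstitials surviving recombination: one per implanted ion."""
    return dose

dose, E = 1e13, 80e3                       # 80 keV phosphorus at 1e13 cm^-2
created   = total_pairs_per_cm2(dose, E)   # ~2.7e16 cm^-2 pairs generated
surviving = plus_one_per_cm2(dose)         # 1e13 cm^-2 left over
# created / surviving ~ 2700: nearly everything recombines
```

So roughly one pair in a few thousand survives as a net interstitial, which is exactly the recombination argument Giles was making.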
And then finally, surface interstitial recombination can take place within 100 to 1 second. So you have a lot of interstitial and vacancy recombination, that was the point of what Giles was saying, so that only the plus-one excess interstitials remain. Now, what happens to those 10 to the 15th? If I ion implant 10 to the 15th phosphorus atoms, I have 10 to the 15th excess interstitials. The name of the game is, what happens to all these interstitials? Well, it turns out they coalesce. They get together into little defects. And these little defects are called, in curly brackets, 311 defects. And we'll talk in detail next time about 311s. But the interesting thing about these little clusters, these 311s, is that they end up being stable for very long periods. So we have all these excess interstitials, they end up getting together and forming little defects, which then can stay around for 10 seconds, 100 seconds, maybe even minutes and hours, depending on the temperature. And it's these little defects, these 311s, that end up being responsible for the process of transient enhanced diffusion that we're going to model in the next lecture. In fact, on slide 20, just by way of introduction, these are pictures of 311 defects. The right hand side is an actual high resolution cross-section transmission electron micrograph. And I can tell it's high resolution, because you can see these little dots, and they form in planes. Well, these little dots, each dot represents two silicon atoms. And they're all lined up in planes, because you're looking at the crystal planes in the silicon lattice. And you see this little funny looking thing right here that's tilted at an angle. It goes from this point here, this dark band of contrast, to this point here. That is a ribbon-like region that is identified. That is the 311 defect. And that's the actual cross section TEM micrograph, on the right hand side.
The left hand side is meant to represent a color cartoon of what this thing actually looks like. The axis here, along the defect, is represented by this vector here; this vector turns out to be in the 311 direction. That's how these things got their name. They were tilted in the 311 direction. The long axis of the ribbon, this is a ribbon-like defect, the long axis points along the 110 direction in the crystal. So the long axis here, you can't really see. It'll be into the board or into the slide. These little round things here, these circles, represent interstitials, and they're little dimers. They come in pairs, and they line up along this direction. It could be 100 angstroms long. And it's got some sort of width to it, which you can see here. And it has a certain capture radius. Within a certain region, it will capture these interstitials and form this little cluster. These 311 defects, they form pretty quickly, in a matter of a second or less. But they anneal out on a timescale of minutes, at moderate temperatures. And as they anneal out, as this ribbon heals, it spits out silicon interstitials. And it's these silicon interstitials that are coming out of this 311 ribbon that lead to the transient enhanced diffusion effect, these excess silicon interstitials. It turns out that below a certain dose, or below a certain damage value, these 311 defects can dissolve completely. So you can dissolve them. You can completely get rid of them. And at that point, when they're completely gone, you have no more TED after that amount of time. You go back to normal diffusion. Above a certain damage level, actually, it's a little more complicated. They actually get together, they turn into stable dislocation loops, which are more difficult or sometimes impossible to remove. That's like end of range damage. But there's a certain region, a certain amount of damage, in which you can remove all the 311s. OK. So we're going to talk about
311s and their annealing kinetics in great detail next lecture, when we go through the whole kinetics of transient enhanced diffusion. In the meantime, I want to go on and continue to talk about other aspects of annealing. So we know we create all these interstitials. They get together in 311s. We know if we go to high enough dose, remember I showed you a picture, that we can amorphize the crystal. Well, let's say I do create an amorphous layer at the surface. How can I get rid of that amorphous layer and restore the crystal to perfect, or to some kind of crystallinity? Well, this is exactly what happens. I'm showing here cross-section TEM images. This layer is originally amorphous from the surface of the wafer down to some depth, and we're regrowing it. So we're annealing the wafer at 525. The initial implant was quite high. It's 200 kilovolts, 6 times 10 to the 15th of antimony. Again, antimony is a very heavy atom, so it amorphized from the surface down. And we're looking here, initially, at 0 minutes. I guess that kind of got cut off of the slide, but this should be here at 0 minutes, so just when you first start. After 10 minutes of annealing at 525, look, the interface between the amorphous layer and the crystal substrate has moved up. So the amorphous layer has actually grown back, or regrown, by a process called solid phase epitaxy. So there's no melting going on, but in a layer-by-layer method, the atoms in each layer are taking the template from the substrate and rearranging themselves back into a single crystal form. And this amorphous-crystal interface is progressing up from the depths towards the surface, you can see it as time goes on, in kind of a linear fashion, in an epitaxial growth. So we call it solid phase EPI as opposed to vapor phase. Vapor phase would be if I were injecting silane or something and growing EPI. Here I just have a solid layer that's regrowing at 525. When you're done-- or, well, this isn't quite done. At 20 minutes it's almost done.
Maybe half an hour-- you would see the entire amorphous region will have been regrown. It'll all be single crystal. And you'll be left with something though. There's a band of residual damage that occurs called end of range damage, or EOR. And these are defects that are just below the amorphous crystalline interface. So you get an idea of where they are. They're always just below that. And those are pretty hard to get out. Those generally don't ever go away. You have a certain amount of that, for such a high dose, that you have to live with. You have to find a way to deal with it. This slide on page 22 actually shows you some data from a book by Gibbons and Sigman, called Laser Annealing of Semiconductors. So it's about laser annealing. The book is. But this particular chapter is about solid-phase EPI or solid-phase regrowth, just in a furnace. And what they're plotting here is the regrowth rate. So this gives you the number of angstroms per minute as that amorphous layer traverses from down below up. The number of angstroms per minute that it goes, as a function of inverse temperature for silicon and under different conditions. So let's take a curve here. Let's take this one curve right here that is called 100 silicon undoped as a function of temperature. And you can see, for undoped silicon, say at 525, the regrowth rate is about 20 or 30 angstroms per minute. So as a constant rate, these people have measured this, there's a technique called Rutherford backscattering, ion channeling, you can actually measure this rate going up. And you can see how fast it goes. Gives you an idea of how that amorphous layer regrows. And that's at 525. So you can see, amorphous layer regrowth can happen at reasonably low temperatures. You don't have to go too hot. Interestingly, though, there are lots of different curves here. 
If you go to the 110 silicon, so if you buy a wafer that has a 110 surface instead, you've got a different atomic density on that 110 plane, and in fact, it regrows slower. The same activation energy, Ea, of 2.3 electron volts. That number is kind of intriguing. 2.2 is also roughly what people believe the bond breaking energy is. And so imagine that interface between the amorphous and the single crystal: you probably have to break some bonds there initially in order to realign the atoms. So SPE is pretty fast. It depends on orientation, and, what's really interesting, it depends very much on doping. So for example, here's silicon that's doped with arsenic. It can be quite a bit faster, 3 to 5 times faster, or phosphorus, or boron. Boron is the fastest, it looks like. If you accidentally put oxygen or nitrogen into your sample, look what happens to the regrowth rate. You can go down by almost an order of magnitude. So impurities have a big effect. And argon really frustrates the regrowth. And so it's important to know what's in your crystal; what you're ion implanting will have a big effect on the regrowth rate. Interestingly, when you do this, most of the dopant atoms, like arsenic, phosphorus, boron, are incorporated as that amorphous layer moves up. They're actually mostly incorporated on substitutional sites, even at low temperatures. So it turns out, if you can create an amorphous layer, this is one of the best ways of activating a dopant, to create an amorphous layer in silicon. Now this is the way silicon anneals. Other semiconductors, gallium arsenide for example, are just the opposite. If you amorphize it, God help you trying to get the dopants activated. It's not so easy. But silicon has this beautiful property that as that layer moves up, a lot of the dopant atoms get forced onto substitutional sites.
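Those regrowth curves are Arrhenius lines, v(T) = v0 * exp(-Ea / kT), with Ea = 2.3 eV as quoted. In this sketch the prefactor is back-calculated so that the undoped (100) rate comes out near 25 angstroms per minute at 525, the ballpark read off the plot above; that calibration point is an assumption, not a tabulated value.

```python
import numpy as np

k_B = 8.617e-5          # Boltzmann constant, eV per K
E_a = 2.3               # activation energy for (100) silicon SPE, eV

def regrowth_rate(T_C, v0):
    """Arrhenius regrowth velocity in angstroms per minute."""
    return v0 * np.exp(-E_a / (k_B * (T_C + 273.15)))

# calibrate the prefactor so v(525 C) ~ 25 A/min (assumed reading of the plot)
v0 = 25.0 / np.exp(-E_a / (k_B * (525.0 + 273.15)))

for T_C in (500, 525, 550, 600):
    print(f"{T_C} C: ~{regrowth_rate(T_C, v0):.0f} angstroms/min")
```

With a 2.3 eV activation energy the rate roughly triples for every 25 degree increase in this range, which is why quite modest furnace temperatures are enough for solid phase epitaxy.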
And then the donors and acceptors are already activated at low temperature, to a great degree. So that's a nice property, assuming you fully amorphize the crystal. OK, let's go on to slide 23. I mentioned that life is not perfect. Even if you amorphize, you always end up with this end of range damage. And here's a plot on the left. Let's say this is the surface of a silicon wafer here, going down in depth. And here's my concentration. And let's say I have implanted something, maybe it's arsenic, I don't know what. And it has this ion implanted profile. It looks like this. And there is, the maximum damage for this particular profile is just below the amorphization threshold. So this region from here to here, from the surface down to this dashed line, which is called the amorphous crystal interface, that's all amorphized. So that's going to regrow layer by layer and give you very good crystal quality. If I just go back a couple of slides here, again, it was amorphized to this depth. If you look at this depth when you're done, there is no residual damage there. It's below the initial amorphous crystal interface, by some distance, that you end up with the end of range damage. And it kind of makes sense, because at this point, down to this depth, you have deposited enough energy to amorphize the crystal. Below that, you've deposited a huge amount of energy, but not enough to amorphize. So you've really bashed it up, but it's not amorphous, so it can't regrow by solid phase epi. Instead, it creates dislocation loops, and that's what we call the end of range. So if I go back to slide number 23, just below the amorphous region is this end of range damage. And here is my ion implanted arsenic. OK. So. Now, let's see. I'm using this to make an np junction. So what I need to do, if I'm going to make an np junction, what I typically want to do is just to diffuse the n type dopant a little bit deeper than the end of range damage.
So we tend to try to get a little bit of diffusion after we've done an ion implant. Sometimes, that can be helpful. Why would that matter? Well, I've done an overlay here. This is a cross-section TEM micrograph. And here's my end of range damage, this little black dislocation bar. And imagine I diffuse this arsenic just a little deeper. So now I have an n plus region that goes just below the end of range damage, and then below that I have p. Well, this is n plus, this is p. The depletion region, if you remember your np or pn junction physics, always tends to extend into the lightly doped side, where the yellow is. So the yellow is depleted of free carriers. And you know, in the depletion region, that's where you get the most recombination of electrons and holes to create leakage in a pn junction. So the depletion region is the last place you want damage or defects, because in the depletion region, electron hole recombination is very efficient. So what we do, what people do, this is kind of cheating, but they cover up the end of range damage with a heavily doped region. So if you see people making source drain junctions, they often diffuse it just a little ways beyond the EOR, and then they get diodes that are pretty good. Reasonably good. If they don't diffuse it at all, they say, I'm going to have a perfect anneal, no diffusion, in fact, then the n plus junction might have been back here, and then the EOR, the end of range damage, might have been in the yellow region, in the depletion region, and you could get a much leakier diode. So there's a lot of optimization when people activate source drains, or something where they care about leakage. There's a lot of optimization in just exactly how much dopant diffusion do I allow, and how much time do I allow to try to get rid of these loops?
So it's kind of an interesting, damage annealing is an interesting combination between knowledge of p n junction physics, and electronic recombination mechanisms, and knowledge of the crystal structure and what kind of damage you've created. And you need to optimize that, because you'll never get rid of all damage, to a certain extent, for every single implant. Some implants are easier to anneal than others. OK. So that was in terms of, I spoke there quite a while about trying to get rid of damage, because I wanted to get good pn junctions. I want to get low leakage. OK. That's only one requirement, is low leakage in the crystal. I also want to activate the dopant. You want to ion implant 10 to the 15th arsenic atoms per square centimeter, you'd like to get 10 to the 15th electrons per square centimeter. You'd like to have all of that activated. Or boron. That's the whole reason you're putting it in. You don't care about boron in the lattice. It's the holes that it produces, or the electrons that arsenic produces. Well, it turns out, if you amorphize the substrate, again this only applies to silicon, gallium arsenide is just the opposite. But if you amorphize the silicon that we mentioned, that a solid phase epi is an ideal way of repairing the damage and also getting the dopants onto substitutional sites. If you don't amorphize, so you do a lower dose that does a lot of damage but doesn't create an amorphous layer, so you can't have SPE, it turns out that activation is a lot more complex, because you create defects that are somewhat stable over time and temperature. So here's just an example, of a plot of the annealing characteristics of arsenic and silicon I took out of your textbook. And the y-axis here is the fraction active. So I want to be up to one. If I'm at one, that means every single arsenic atom that I implanted contributes electron and they're all electrically active. And this is as a function of annealing temperature. 
And there are different doses shown here. So the red curve assumes I've amorphized the silicon. And look at this. Very interesting. At 550, 525, for a 1e15 implant, I can activate almost 80%. That's pretty good. And if I increase the temperature up to 800, it doesn't actually activate much more. So if I want to do a really low temperature anneal in silicon, I can do that with a 1e15 implant. All but about 20% of my atoms are active. But let's say I only implanted 1e14 instead. At that same temperature, only 10% are active. Very few. So if I have an intermediate dose-- 1e14 might be a more typical dose, say, for the source drain extensions-- that's a miserable dose, because it's high enough to do some damage, but it's low enough that it doesn't amorphize. And so to activate the extensions here, this 1e14 dose, you really need to get up to 750, 800 or whatever to get 80% activation. So we don't just think, oh, for any implant I just do a certain anneal and I'll get the same results. It really depends on the amount of damage I did, and whether I was below or above the amorphization threshold. Here, if you go to really low doses, it's a piece of cake. Again, here at 10 to the 12th it's not bad. Because you did so little damage, even at 600, 700, you can get pretty close to 80% to 100% activation. So the tough spot is this intermediate dose regime, which is exactly the dose regime where we're ion implanting source drain tips or source drain extensions. And those need to be as fully active as possible. So unfortunately, it does make our life a little more challenging. That was arsenic. That actually isn't that bad. Even more complicated is boron. And again, I took this plot from your textbook. It's the same kind of plot, now on a semi-log plot, so I've got a log scale on the y-axis and temperature of annealing on the x-axis. And different doses again.
So for boron, if you have a low dose up here, 8 times 10 to the 12th, again at 550, 600, I can get, oh, maybe 50% to 70% active. Not that great, but not terrible. Interestingly, in the very high dose regime, 2 times 10 to the 15th, if I anneal at very low temperatures, say 500, I get a reasonable activation-- well, it's not very good. It's only 1 in 10. But it actually goes down. It actually goes down in a certain temperature range between 550 and 700. So you're actually reverse annealing. You're taking the dopant atoms off site. And then finally, it takes off again when you get up above about 750 or 800. So to get full activation of boron in silicon, at almost any dose, you need to be in the high temperature regime, 900 or above. And that's a real problem. That's why you will hear people say, oh, boron is hard to activate. In general, it requires higher temperatures or some kind of more sophisticated annealing technique than arsenic. And that's because of the type of damage it does. It does not tend to amorphize, unless you go to really high doses. It's very hard to get an amorphous layer. And so you cannot take advantage as easily of the solid phase epi regrowth. Now, one thing you can do-- you say, OK, boron doesn't amorphize. What could I do? People do something called pre-amorphization. So before they implant a boron profile, they might implant a high dose of silicon, amorphize the crystal to a certain depth, then put the boron in by implant, and then regrow the whole thing by SPE. And you can cheat that way, in some sense-- it's an extra step, but then you can get a little better annealing characteristics. But this is assuming you didn't do any pre-amorphization. This is just, implant the boron at this dose, and try to anneal it. So what is this reverse annealing behavior? Well, it's probably-- oh, there's a typo here. It's thought. There's a T missing. It's thought to occur.
Because there's some kind of competition here between the native interstitial point defects, the things that you're creating-- remember, you create more point defects as you raise the temperature-- and the boron atoms themselves for the lattice sites. Because again, we did not amorphize the crystal. We just created a lot of damage. So there's a sort of range where you just don't want to be annealing. Here at this dose of 2 times 10 to the 14th, you want to take the boron up higher. The problem with going to higher temperatures, of course, is you get more diffusion. So let me go on to slide 26 and just say a few words. We're going to spend the entire next lecture on this, on something called transient enhanced diffusion, and this is sort of exemplified here. Here is an implant that was done-- it could be boron or arsenic. It doesn't really matter at this point. But let's say it's boron. And we see two different anneals. The blue curve, I did 1,000 degrees C for 10 seconds, and I got that diffusion profile after I implanted the boron. And now the red curve is 2 minutes at 800. And so, interestingly, you would think, well, I'm annealing it at 800. That's a much lower temperature. The boron diffusion coefficient is down by two orders of magnitude at 800. So even though I'm annealing it for a slightly longer time, it's certainly not 100 times longer or 200 times longer. So you would think that the 800 degrees C profile should be shallower than the 1,000. So TED, unfortunately, is this strange effect-- and we'll talk about how this can happen-- where even at lower temperatures you can get deeper junctions than you get at higher temperatures. So this really annoyed people and was quite confusing, and it needs to be explained by the implant damage and its effect on how the boron diffuses. And in fact, TED today is pretty much the dominant effect that determines junction depths and shallow profiles.
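The diffusivity comparison in this argument can be checked with a quick Arrhenius estimate. The boron prefactor and activation energy below are representative literature values assumed for illustration, not numbers from the lecture.

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def d_boron(t_celsius, d0=0.76, ea=3.46):
    """Intrinsic boron diffusivity in cm^2/s. d0 (cm^2/s) and ea (eV) are
    representative literature values, assumed here for illustration."""
    t_k = t_celsius + 273.15
    return d0 * math.exp(-ea / (K_B * t_k))

d_hot = d_boron(1000.0)
d_cool = d_boron(800.0)
print(f"D(1000 C)/D(800 C) = {d_hot / d_cool:.0f}")   # a few hundred

# Equilibrium Dt products for the two anneals discussed in the lecture:
dt_hot = d_hot * 10.0     # 1,000 C for 10 s
dt_cool = d_cool * 120.0  # 800 C for 2 min
print(f"sqrt(Dt) ratio, hot/cool = {math.sqrt(dt_hot / dt_cool):.1f}")
```

With equilibrium diffusivities, the 800 C, 2 minute anneal should give a noticeably shallower sqrt(Dt) than the 1,000 C, 10 second one, which is exactly why the deeper measured 800 C profile points to an enhanced, non-equilibrium diffusion mechanism.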
It's not so much ordinary diffusion or concentration dependent diffusion. Well, for arsenic, maybe, but for boron, TED is everything. You cannot possibly model a shallow junction unless you know how to model TED. And if I just go on to the next slide, just to show you-- if I had given you that problem and told you nothing about TED, you would have looked up in your book, say, at 800 degrees-- which is somewhere right around here; here's the boron diffusion coefficient, right at this point-- and at 1,000 degrees it's up here, and the difference between the two is more than two orders of magnitude. So it's almost a factor of 1,000. So clearly, if you're diffusing at 800, it should be a lot less, a lot shallower junction, even if you're doing it for two minutes versus ten seconds. Two minutes is 120 seconds, so divide 10 into that: I'm doing it 12 times longer. Fine. But the diffusion coefficient should be 1,000 times slower at 800, but it's not. In fact, it's 1,000 times faster. It's a lot faster. And that whole effect is called transient enhanced diffusion. It results from all the interstitials that were implanted, the plus-one interstitials giving rise to {311} defects and all that. And so the next lecture, we're going to spend trying to understand these profiles and their time dependence. So let me summarize on slide 28. Basically, so far in this lecture, we said that we can separate the energy loss processes completely independent of each other. To first order, we have nuclear processes, which are these billiard ball collisions, and we have an electronic drag force. The nuclear stopping dominates when you have a heavy ion over most of its path. For a light ion, nuclear stopping dominates only at the end of range, at very low energies. It's the nuclear stopping that damages the crystal by the creation of silicon recoils. So it knocks silicons off of their lattice sites and they knock other atoms off. And that creates what's called a collision cascade.
The damage profile peaks right near Rp. If you want to calculate that, maybe 80% of Rp gives you a rough idea of the depth of damage. For heavy ions, the damage is more stable. That is, it doesn't tend to anneal out in the ion implanter at room temperature. And there's a tendency for a heavy ion to form an amorphous layer from the surface all the way down to some depth. That amorphous layer can be regrown relatively easily in silicon. Not true in other semiconductors, however. It results in relatively efficient dopant activation. For light ions like boron and phosphorus, below a certain dose, it is difficult to produce an amorphous layer at room temperature. And as a result, the activation of the dopants is a lot more complex. It's very dose dependent, and it can be temperature dependent in an odd fashion. There is this plus-one model for residual damage that says that there's roughly one excess interstitial for every primary ion that I implanted. So it's a relatively easy order-of-magnitude estimate. And this is after the initial vacancy and interstitial recombination has taken place, which only takes a few fractions of a second. These excess interstitials, however, cluster into little {311} defects. Those defects dissolve at a relatively slow rate, on the order of ten seconds to minutes, and it's that {311} evaporation or dissolution that gives rise to the time dependence of TED. So by understanding {311} defects to a certain extent, we can understand the time dependence of TED and, in fact, we'll be able to come up with a model that can predict this kind of strange looking behavior. OK. So that's what I wanted to go through today. There were three handouts. Make sure you've got them. Today's lecture notes, homework number four, which is due on November 2, and the solutions to homework number two. The solutions have been on the web for a while.
I just forgot to hand out the paper copy. OK. So we'll see you next Tuesday. Hopefully we get to watch the World Series. |
MIT_6774_Physics_of_Microfabrication_Front_End_Processing_Fall_2004 | 18_Thin_Film_Deposition_and_Epitaxy_CVD_Examples_and_PVD.txt | JUDY HOYT: All right, I think we can go ahead and get started. I'm going to put this-- is anybody here-- I think everybody who's here has already signed up. I'm going to put the clipboard in the back. And a couple of people haven't signed up, but wanted to-- everyone who signed up, I've actually approved your topics. But there's a couple people who haven't filled in their topics. So obviously, I can't approve them until they're filled in. So I'll put this back here. There's two handouts for today, the lecture notes and the homework, your final homework. And I'm handing back some homework sets that people haven't picked up yet. They're in the back. OK, so let's just take a look at where we are schedule wise. So today is November 9, and we're on the second lecture here on chapter 9. So hopefully you're reading chapter 9 of your textbook. And homework number 5, I'm handing out today. And that's your final homework. So I think it's an interesting problem set. One thing I want to remind you is that there is a part B on the back, so don't miss part B on the back side of the homework. There's three problems. And particularly problem number 3, I think you'll find interesting. It's about transient enhanced diffusion. Part A, you'll be doing some hand calculations of TED and using some analytic formulas for ion implant and diffusion. And part B, which is on the back side, again-- don't miss that-- you'll do some SUPREM simulations and then compare it to the hand calculation of the TED. And the other thing for me to mention about part B, this is the first homework where you're not given the SUPREM code. SUPREM codes you've been running all along, have been given to you that have been written by the TA, and you just have to run them. 3B, you're actually asked to write the code. Now, what you can do, clearly, is you've been given a lot of codes. 
You can take the examples of what you've been given as your starting point-- you don't have to start from scratch. You can take other files and just modify them accordingly. And it's a relatively simple process. But I just wanted to make you aware that there is a part B on the back side. And I think problem number 3 should be an interesting one for you. And as I mentioned, that's the last homework for the course. The reason I make this last problem set due on November 18 is that I want to give you enough time after it, so you can get the problems out of the way and have enough time to be working on your final project. So that's either the written report or the oral presentation. So definitely, if you haven't done so, get started on your project soon. All right, so that's the mechanics of where we are. And let's go on with today's lecture handout. This is handout number 31. And this is the second lecture on chapter 9. Chapter 9 is about thin film deposition and epitaxy. Last time, we gave you an introduction. We developed a very simple model for atmospheric chemical vapor deposition. And remember, we had two rate-limiting regimes. We had a regime of growth where the growth rate was determined by the surface reaction rate. And it had a certain activation energy, dependent exponentially on temperature. And we had another regime where the growth rate was determined by mass transport. That was the rate-limiting step. We also talked about the fact that the mass transport regime gives you a very high growth rate. This tends to be at high temperatures, where the surface reaction is fast. Gives you a high growth rate. It's relatively independent of temperature, in contrast to the surface reaction controlled regime. But it's also sensitive to reactor geometry. So the growth rate and uniformity depend quite a bit on how the gas flows through the reactor.
We also introduced this LPCVD, which is an acronym that stands for low-pressure CVD. And we introduced the use of LPCVD in a hot-wall batch reactor, which is in just about every fab that you'll ever see. And it's used to deposit very commonly used films, such as polysilicon for the gate or other applications, deposited oxide, and deposited silicon nitride. So in the next lecture or two, what I want to discuss are some specific examples of CVD. So I want to discuss specifically aspects of the deposition of polycrystalline silicon, deposition of silicon dioxide, and silicon nitride. And what we'll also talk about towards the end of today's lecture is an introduction to the basics of physical vapor deposition. So we have chemical vapor deposition and physical. OK, let's go on to page number 2, or slide number 2 on your handout. This is just a chart I took right out of your textbook, in the section in chapter 9 on manufacturing methods for CVD. And what it gives you is just some common thin films that are deposited by CVD. I think these are the most common: epi silicon, polysilicon, silicon nitride, and SiO2. It talks about the type of equipment used. For example, we talked last time about how epitaxial silicon can be either atmospheric or low-pressure CVD. There's actually another type of epitaxial silicon growth that wasn't listed here because it's not commonly used in fabs that much. But it should be listed for completeness if you want to put it down. Ultra high-vacuum CVD, UHVCVD, is another method of depositing or growing epitaxial silicon. And that wasn't listed in the text. Some typical reactions that we talked about, for instance, the use of silane to grow epitaxial silicon at high temperature. The silane decomposes into silicon and evolves hydrogen. Under comments here is listed a typical temperature range. Here, if you're at atmospheric pressure CVD, you typically don't grow too much below 1,000, maybe 900 degrees centigrade.
Much more common these days, though, is reduced pressure deposition of epitaxial silicon. We talked about that. Here, the deposition temperatures are lowered significantly compared to atmospheric pressure. And people can deposit epitaxial silicon typically all the way down to 750, maybe even 700, in the reduced-pressure systems. Besides silane, we also talked about silicon tetrachloride being used, also trichlorosilane, shown here, and dichlorosilane, probably one of the most common gases-- so dichlorosilane for selective epi growth. Polysilicon, which we mentioned last time, is put down in a batch furnace, typically, by LPCVD. The gas reactions and the precursors are typically the same as those used for epitaxial silicon. I would say that silane is really the precursor that people use for polycrystalline silicon. Polysilicon is typically put down in a pretty tight temperature range, 575 to 650 degrees centigrade. The morphology of the film and its grain structure-- and we'll talk about that in this lecture-- depend on the deposition conditions, the temperature, and the doping quite a bit. So you adjust your temperature to achieve the type of grain structure that you need for your application. Silicon nitride can be put down by an LPCVD process as well as, as we talked about last time, by plasma-enhanced CVD. This is becoming more and more common. For low-pressure CVD, there are two different types of reactions that are commonly used. Silane is reacted with ammonia to create silicon nitride. The other one, the most common one, the one that's used here at MIT in LPCVD, is to use dichlorosilane, react that with ammonia, and obtain silicon nitride. This reaction typically goes, to get reasonable dep rates, at around 800 degrees C. So LPCVD nitride is one of those things that's at a temperature where, as we talked about in the last several lectures, transient-enhanced diffusion can be an issue.
So it's right in that intermediate temperature range. What else do people use? Well, an alternative people sometimes use is low-temperature plasma-enhanced CVD for a passivating layer. It doesn't have as good properties, or the same properties, as LPCVD nitride. And I'll talk a little bit more about that. The last one here, probably one of the most important, silicon dioxide. Remember, earlier in the course, we talked about growing silicon dioxide. Well, now we're talking about depositing it, which is quite different. We typically react silane as a precursor with oxygen to form SiO2, so that you don't have to have silicon exposed on the wafer. You don't consume, necessarily, much silicon on the wafer. This is listed as between 200 and 800. That's a pretty wide range. Really, the most common is between 200 and 500. In fact, LTO is a very common acronym you'll find in the fab, LTO, as we call it. Typically, LTO is put down between 350 and 450. That's really a more reasonable range. In order to get reasonable properties, to reduce the etch rate and things, people usually have to high-temperature anneal LTO. So they have to, what they call, densify it, because it's not a stoichiometric SiO2. In modern fabs, you will also find a very common process called TEOS. People refer to LTO sometimes as TEOS. And TEOS refers to this particular organic compound. And you notice it's a compound that includes silicon along with an organic group, a C2H5 group. You react that with ozone at a very low temperature, say, 400 or below, and you can get very high dep rates of SiO2. Now, the only thing about TEOS that you need to be aware of, what's different, is there's an organic element in the precursor, some of which can potentially be incorporated in the film. And so TEOS films may have more carbon incorporation or carbon contamination. And some of their properties will be a little different from LTO. But TEOS is very commonly used in fabs today.
OK, so that's just a little background on the different types of materials. Let's remind ourselves about low-pressure, here on slide 3, about low-pressure CVD. Atmospheric pressure systems have some major drawbacks. For example, if you operate them at high temperature, the gas flow makes a really big difference. And they have to be designed just right. You typically need a horizontal configuration, as we showed last time. And you only can really do a few wafers at a time in order to get any kind of real uniformity. So atmospheric pressure systems are really not that commonly used anymore. If you try to operate them at low T, you say, OK, I'll go to low temperature where there will be surface reaction rate-limited, but the deposition rate goes down quite a bit. So there is a solution, though, which is to operate the reactor at lower pressures. So instead of 760 torr, you might operate at 1 to 10 torr. Now, if you go back to the mass transfer-limited regime, remember, there was a transport coefficient, H sub G. So this tells you this is the limiting step in getting from the mainstream of the gas to diffuse through the boundary layer to the surface where it can react. So this mass transport coefficient HG was equal to a diffusivity through the boundary layer divided by the boundary layer thickness. So delta here is the boundary layer thickness. But you notice the diffusivity in the gas phase is proportional to 1 over the pressure, 1 over the total pressure. So this is the key leverage that you're going to use by reducing the pressure. So the diffusivity will go up by about 760 times. If I go from atmospheric pressure-- remember, atmospheric pressure is 760 torr-- if I go down to one torr, this is going to go up by 760, while the boundary layer itself is also going to increase but not quite as much, only seven times. So we can get about a factor of 100 in HG if I drop the pressure from atmospheric down to one torr. 
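The factor-of-100 arithmetic here follows directly from the scalings quoted: h_G = D_G/delta, the gas-phase diffusivity goes as 1/P, and the boundary layer thickens about seven times. A minimal sketch of that estimate, using only the numbers from the lecture:

```python
def hg_enhancement(p_atm_torr=760.0, p_low_torr=1.0, delta_growth=7.0):
    """Mass-transfer coefficient h_G = D_G / delta. Gas diffusivity scales
    as 1/P, so it rises 760x going from 760 torr to 1 torr, while the
    boundary layer delta thickens ~7x (the value quoted in the lecture)."""
    d_ratio = p_atm_torr / p_low_torr   # increase in D_G
    return d_ratio / delta_growth       # net increase in h_G

print(f"h_G enhancement ~ {hg_enhancement():.0f}x")   # roughly 100x
```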
And then in the one torr regime, the transport of reactants from the gas phase through the surface boundary layer is no longer rate-limiting. So it can really speed up that process. In fact, we can show this graph-- let's look at slide number 4, and you can see this graphically. This is a plot similar to what I showed last time. It's the growth rate on a log scale versus 1 over T. So we typically call this an Arrhenius plot. And for atmospheric pressure CVD, it's this lower line right here is the net growth rate. And at low temperatures here, your surface reaction controlled. But look what happens. You get to a certain temperature, and there's a knee in the curve. And it sort of flattens out. So at 760 torr, it flattens out. And you can only get a certain growth rate for atmospheric systems. Now, if I drop the pressure to one torr, again, I could get about a factor of 100 increase. Remember, this is a log scale. So going from here to here doesn't look like much. But this could be a factor of 10 to 100 in growth rate at one torr because why is this happening? Because the gas phase diffusion rate through the boundary layer is increasing as I drop that pressure. So can really up the growth rate by going to low pressure. And so that's how the two curves-- notice the two curves converge, though, pretty closely. When you get down to low enough temperatures that you really are surface reaction controlled, you can drop the pressure by a factor of 1,000. You only get a very small increase. You'll get some increase or some change, rather, in the growth rate but not so much because the surface reaction rate is rate-limiting at that temperature. So that's one of the big motivations for going to low pressure. Let's go to slide number 5, and we talked about this last time, but I want to give more detail now on the deposition of polycrystalline silicon. We introduced this rudimentary reactor. 
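The shape of that Arrhenius plot falls out of the simple series model from the last lecture, with the surface reaction rate k_s and the mass-transport coefficient h_G combining like resistors in series. The prefactor, activation energy, and h_G values below are illustrative assumptions chosen just to reproduce the qualitative behavior, not fitted numbers.

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def growth_rate(t_celsius, h_g, ks0=1e7, ea=1.6):
    """Relative CVD growth rate ~ ks*hG/(ks+hG): the slower of the surface
    reaction (ks, Arrhenius in T) and mass transport (hG) limits growth.
    All constants are illustrative, in arbitrary consistent units."""
    t_k = t_celsius + 273.15
    k_s = ks0 * math.exp(-ea / (K_B * t_k))
    return k_s * h_g / (k_s + h_g)

# Low T: surface-reaction limited, so raising h_G 100x barely matters.
# High T: mass-transport limited, so the same 100x change dominates.
for t in (600.0, 1100.0):
    v_atm = growth_rate(t, h_g=1.0)    # atmospheric-pressure h_G
    v_lp = growth_rate(t, h_g=100.0)   # ~100x higher h_G at low pressure
    print(f"{t:.0f} C: low-pressure gain = {v_lp / v_atm:.2f}x")
```

Running this shows the two curves converging at low temperature and separating by an order of magnitude or more at high temperature, which is exactly the knee behavior on slide 4.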
This type of setup is found pretty much in every fab, maybe in a slightly different configuration. But it's essentially a hot-wall furnace, and it's resistively heated. The wafers are usually put in quartz boats or quartzware. And they usually stand up together very close to each other. And it's a batch process. So you can get 25, 50, maybe 75 wafers in a batch, quite a few wafers. You have some gas sources, consisting of silane and nitrogen and maybe dopants if you want, that flow into this quartz tube, into the hot zone in there. The low pressure is maintained by a vacuum pump. So this is not an open tube furnace. The mouth of the furnace where the wafers come in and out actually is sealed with an O-ring so you can pull a vacuum. In typical depositions, polysilicon is usually deposited on oxide, maybe, or on oxide with silicon windows. It's a nonselective process. So it goes over the entire wafer on all structures. The diluent gas-- so what is usually flowed? Sometimes people flow nothing. Sometimes it's just silane flowing in the tool, at a relatively low flow. Sometimes people will use a carrier gas, such as nitrogen. Notice that hydrogen is not used, typically. And there was a question, I believe, about this in the last lecture. Hydrogen is not used because it reduces the growth rate. Again, this is the reaction from silane going to solid silicon. And it involves hydrogen. If you put hydrogen into the reactor, it tends to push the reaction backwards and slow down the growth rate. Typical pressure for a poly dep is about 200 millitorr, maybe up to a torr. A torr is a little bit high. The low pressure here, in the hundreds of millitorr range, improves the film thickness uniformity. And again, the temperature range is pretty tight, 575 to 650. And an important number to keep in your head, an important temperature for polysilicon, is 580. That's right about the amorphous to crystalline phase transition.
So if you go below 580, the film is pretty much amorphous, primarily. Although it will contain some small polycrystalline regions-- or crystallites, which sometimes people call seeds. So 580 is the magic number. You go down below that, you get amorphous material as deposited. You go above that, and you get polycrystalline material. So if we go on to slide 6, in fact, there are some pictures of polycrystalline silicon. Although this is a figure in your text in chapter 9, figure 9-32, it's actually originally taken from Ted Kamins' book on polycrystalline silicon. So I've given, up here on the upper left, the original reference for that work. And these are cross-section TEM micrographs of polysilicon put down by low pressure CVD under four different conditions. And all these films were deposited at 625. So here's just an example of a cross-section TEM of an undoped film. It's a little hard to see. That didn't reproduce that well. But you see the very columnar grain structure. These dark bands correspond to regions where the electron beam is diffracted by the different crystalline orientations and by grain boundaries. So an undoped film deposited at 625 in cross-section looks something like this, a very columnar grain structure with relatively small grain size. As deposited, if it's heavily phosphorus doped, again at 625, so still a very low temperature but very heavily phosphorus doped, the grains are more equiaxed, with larger grain size. So phosphorus is one of those dopants that tends to cause secondary grain growth. It causes the grains to grow quite large. So you get a very different film morphology if you deposit it in situ doped, say, with phosphine. The third one down is the undoped film. So if you take film A here and heat treat it, so you anneal it afterwards at 1,000 degrees, you can see the grains are still columnar. But they've grown, so the average grain size looks larger. It's a little bit hard to tell the average grain size in cross-section.
But you get that impression. But look what also has happened to the surface. As the grains grow, there's a certain amount of induced surface roughness. So the surface is much rougher than it was in the as deposited case. It's typical, of course, to anneal a polysilicon film during the source drain annealing or whatever to activate dopants. So this is more what it might look like in a gate material. And the last one is one that was in situ doped. So it's film B, but it was also annealed up at 1,000 degrees C. So look at the big difference. Both of these films deposited 625, and both annealed at 1,000 C. The real big difference is this film is phosphorus doped, very large grains. In fact, if you take a thin film course in material science, Carl Thompson's courses or whatever, you'll study the thermodynamics of this much more carefully. But basically, it turns out that the grains tend to grow laterally to about the same size as the thickness of the film. So the thickness of this film is about half micron, and the average grain size ends up being, in these large grain films, about half micron. And so it's phosphorus, the presence of the dopant really accelerates the grain growth. And you get a very different morphology film. OK, so let's go on to slide 7. Why do we care about all that? Why did we go through all that? Well, the grain structure, as it turns out, is critical in determining the properties of diffusion of dopants through polycrystalline silicon and also in determining the final resistivity of the film. Now, I don't know if anyone signed up for it, but on the clipboard, one of the topics that I had open to do a final report on is diffusion and/or dopant activation in polycrystalline silicon. I'm not going to talk about it in great detail in the course. Diffusion through polycrystalline silicon is quite different than what we've talked about diffusion in single crystal silicon because in polysilicon the dopants can diffuse along the grain boundaries. 
Look at this. This is an example up here, labeled B, called the columnar grains. A lot of the dopants, like arsenic, diffuse very rapidly down grain boundaries. They can also diffuse more slowly in the grains. So you can imagine that the diffusion of dopants in this cartoon labeled B here on the upper left is going to be quite different from the one labeled A. In A you have a more random grain structure. Looks more like a brick wall. So the diffusivity is going to be very different in this structure. To get the dopants from the top down to the bottom, it's going to be quite different. Why do we care about that? Well, polysilicon is the gate material in CMOS, until we get to metal gates. Of course, metal gates are coming very quickly. But for the last 25 or 30 years, polysilicon has been the gate material. Typically, how it's doped is it's put down in an undoped state, and then it's ion implanted in the near-surface region. And then you do a high-temperature anneal, typically during the source drain anneal or maybe a separate one, in which you try to get these dopants down from the surface all the way down to the interface with the oxide to form the gate. If you don't get all the dopants down there, you're in deep trouble. You're in deep trouble because polysilicon then no longer acts metallic if it's not heavily doped. In fact, it will deplete under gate bias, and you'll end up with something called gate depletion. And it will not act as a good metallic-like electrode. So how you put the polysilicon down is very critical, because it determines how the dopants diffuse through it, how well they activate. So the temperature that you use is very important. And here's just an example of-- now, this is a plan view. So in this case, you're looking down on the top of the film. Again, it's a transmission electron micrograph, but plan view. This is about 400 nanometers thick. And these are all phosphorus-doped films.
Again, I'm showing the enhancement in the growth rate of the grains with phosphorus. These are all annealed at 1,000. This one has an average doping of about 1 E20. In micrograph B, the sample has about 2 and 1/2 E20. And micrograph C is about 7 and 1/2 times 10 to the 20. You can see how large-- again, you have large grains on the order of half a micron to a micron here in grain size. So phosphorus doping can really accelerate the grain growth. How about on slide 8? How about film resistivity? Besides the fact that-- so I mentioned to you we need to get the dopants that are implanted at the top of the poly down to the gate oxide interface. Otherwise, we're not going to have a good polysilicon gate electrode. We're going to have gate depletion, and your MOSFET won't work. But there are other requirements. We also care about the sheet resistance or the resistivity of the poly itself, particularly for some high-frequency applications. So this is a plot, also taken from Ted Kamins' book, of resistivity, now in ohm-centimeters, as a function of dopant concentration in polysilicon. These films were all ion implanted with these dopants and then annealed at 1,000 degrees C. So it gives you a rough idea of what happens if you were to do a reasonable job of ion implanting and annealing. And you can see the boron and the arsenic and phosphorus, they all kind of come down here. And so as you increase the doping concentration, the resistivity drops. But all of a sudden, it sort of plateaus out. So above about 1 E20 or 2 E20, arsenic- and boron-doped poly sort of flatten out here at a certain resistivity, at a couple of thousand microohm centimeters. So this is around 2,000 microohm centimeters, which is still a pretty high resistivity. If you use phosphorus, you're in a little better shape. You can continue getting the resistivity down, and it plateaus somewhere above 5 times 10 to the 20. It plateaus around 400 microohm centimeters.
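To connect a plateau resistivity to something you'd actually measure on a wafer, the sheet resistance is just resistivity over film thickness. The 0.4 micron thickness below is a hypothetical gate-film value chosen for illustration.

```python
def sheet_resistance_ohm_sq(rho_uohm_cm, t_um):
    """Sheet resistance R_s = rho / t, in ohms per square.
    Inputs: resistivity in microohm-cm, thickness in microns.
    The example values are illustrative, not measured data."""
    rho_ohm_cm = rho_uohm_cm * 1e-6
    t_cm = t_um * 1e-4
    return rho_ohm_cm / t_cm

# Heavily phosphorus-doped poly near its ~400 microohm-cm plateau,
# as a hypothetical 0.4 um thick gate film:
print(sheet_resistance_ohm_sq(400.0, 0.4))   # ~10 ohms/square
```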
So if you need to get the lowest resistivity, phosphorus is a good dopant to use. And it activates pretty well. Now, the conduction mechanism, as you can imagine -- if you go through Ted Kamins' book, the conduction of current in polycrystalline silicon is going to be very different from that in single crystal. For one thing, if you go back one slide, you can imagine I've got a current passing through either this film or this film. There are all these grain boundaries that the carriers end up going through. The grain boundaries are regions of very imperfect crystalline quality. You can get scattering off of grain boundaries. The scattering rate can depend on the amount of dopant that has segregated to the grain boundary. So there's a whole lot of information that you really need to understand if you want to understand the resistivity and diffusion of dopants in polycrystalline silicon. We don't have time in this class, but I just wanted to make you aware. And Ted Kamins' book is a good reference. He's got a couple of chapters on that topic if you're interested. OK, so that's polycrystalline silicon, and again, it's commonly deposited in pretty much every fab just by simple low-pressure CVD. On slide 9 now, I want to go on to plasma-enhanced CVD and just show you some examples of those reactors and how they work. And again, the acronym here is PECVD. The idea -- why do I want to use a plasma? Well, we want to give the system some kind of nonthermal energy, which is the plasma energy, in order to enhance the process so it will go faster at lower temperatures. So we get an enhanced dep rate at low temperatures. And this is a typical reactor, if you want a canonical, simple plasma reactor. It's got an RF input here. So there are two electrodes, one at the top and one at the bottom. The wafers sit on one of the electrodes.
There is a plasma here in between, and this plasma consists of electrons, ionized molecules, neutral species -- neutral molecules, neutral and ionized fragments of broken-up molecules, excited molecules, and free radicals. So there are a lot of things present in the plasma that are not there if I'm just flowing silane. If I'm flowing silane and the plasma is off, what do I have in the reactor? I have silane, OK? But if I'm flowing silane and the plasma is on, I can break up this molecule into all different subcomponents, some of which are highly reactive. And therefore, the deposition rate at the surface will be dramatically impacted by the presence of this plasma. So here's sort of a canonical plasma system, and there is a heater here -- because you need to do this at a finite temperature -- that would heat the wafers. OK, so the free radicals I mentioned, if you go on to slide 10, are really critical. Free radicals are electrically neutral species that have incomplete bonding. They're not naturally found in nature in this free state. They're created because they're in a plasma, and you have energy stripping atoms off these molecules. And therefore, they are extremely reactive. And so the deposition rate can be dramatically enhanced. So for example, here are examples of free radicals: SiO -- typically, to be electrically neutral, this would be SiO2 -- or SiH3 instead of SiH4. So you have an extra unsatisfied bond here on the silicon, so it's going to be very reactive. Fluorine: typically, fluorine is present as F2, but you can break it up with a plasma to form a lot of free-radical atomic fluorine, which is very reactive. So the net result of this breaking up -- the fragmentation, the free radicals, and the ion bombardment -- is that the surface processes occur at much lower temperatures. So for dielectrics, you can do PECVD typically between 200 and 350 degrees C and get very high deposition rates.
This is particularly important if you're doing a backend process. So backend meaning you have some metal on the wafer. You can't take it above 350, for example. Well, if you want to deposit LTO, you have to go to 400 to get any kind of reasonable dep rate. It's going to be difficult. So people can use, in that case, plasma-enhanced CVD to get the dep rate up even if you can't use high temperatures. So that's a very common application of PECVD in the back end. If we go to slide 11, this is a picture of an example of a PECVD deposition tool. There's one here in the Microsystems Technology Laboratories in Building 39 here at MIT. It's made by a company called Novellus, and it's called the Concept One. And on the bottom of the slide, I put down the URL where you can go get more information about that particular piece of equipment and how it works. This is a photograph of it on the left, and it's set up right now processing 6-inch wafers. And look at the interesting thing about it: look at the dep rate for oxide. At low temperature, 300 or 350, it can put oxide down at a micron per minute. That's an absolutely phenomenal growth rate when you think about it. Yeah, question. AUDIENCE: [INAUDIBLE]. JUDY HOYT: Well, if you need only a couple thousand angstroms, you don't want to use that. But you can slow down the dep rate as well. But the kind of people who will be using it at this dep rate might be somebody who wants to put down 10 microns. Maybe they're making a MEMS device, so they need 10 microns. To do that by any thermal technique, you'd be there running the reactor for days. And you'd be going through cylinder after cylinder of silane. And it would be totally ridiculous. So for people who need the thick films, this is the way to go. You can back off on the dep rate. This is the highest dep rate. You can go slower than that, and people do. It does more than oxide. It can deposit nitride, oxynitride, and other films. It's an RF plasma.
It has an interesting feature: a multiple-station processing sequence. It cycles each wafer through different deposition chambers or stations to get an averaging effect so that you get better uniformity. In any given tool, you often find, if it has multiple stations -- say, it has five slots -- that slot one and slot five never have exactly the same dep rate, and it's always a problem. So you put in five wafers, and wafers 1 and 5 come out differently. This is a very brute-force, obvious way to deal with it. What they do instead is take one wafer and cycle it through all five slots. And so one wafer sees a little bit of all five and kind of averages out the nonuniformity. Now, with robotic control, you can do that kind of thing. So anyway, this is the Concept One, typically used more for backend processing. You don't see too many people using this in the frontend. And why is that? Well, the backend has to use it because typically you need very low temperatures. But if you go to slide 12, the characteristics of plasma-enhanced CVD-deposited films, like oxide or silicon nitride, can be quite different from those that are deposited at higher temperatures by LPCVD. For example, very different amounts of stress in PECVD and LPCVD films. Hydrogen content in particular -- make a note -- PECVD films tend to be loaded with hydrogen, a lot. It could be like 10% hydrogen. They're just stuffed with hydrogen. And that's because of the precursors: if you go back and look at some of the precursors you might be using, like silane, a lot of those hydrogens get ripped off during the plasma process, and you're growing so fast they can get buried in the film. So hydrogen incorporation, just by stuffing it into the film, is kind of common in any low-temperature plasma process. Also, the temperature is low enough that hydrogen gets in the film and just sticks. It doesn't evolve.
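The multi-station averaging idea can be sketched numerically. Assuming hypothetical per-station dep rates that differ slot to slot (the actual Concept One rates are not given in the lecture), compare leaving each wafer in one slot versus cycling every wafer through all five:

```python
# Why cycling each wafer through all stations improves uniformity.
# Per-station dep rates (nm/min) are hypothetical illustrative values.
rates = [95.0, 98.0, 100.0, 102.0, 105.0]
total_time = 10.0  # minutes of deposition per wafer

# Fixed-slot: each wafer spends all its time at one station.
fixed = [r * total_time for r in rates]

# Cycling: each wafer spends total_time/5 at every station, so every
# wafer accumulates the same averaged thickness.
cycled = sum(r * total_time / len(rates) for r in rates)

print(max(fixed) - min(fixed))  # 100.0 nm wafer-to-wafer spread
print(cycled)                   # 1000.0 nm, identical for every wafer
```

The station-to-station spread never goes away, but cycling converts a wafer-to-wafer thickness variation into a uniform average.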
At higher temperatures, like LPCVD, the sticking coefficient of hydrogen on the surface is very low. So you don't get so much hydrogen. Hydrogen would be less than a percent, maybe half a percent or less, in an LPCVD oxide film or nitride film. Whereas in a plasma-enhanced film it could be 10%, which can have a dramatic effect on the properties. And the stoichiometry of the film will not necessarily be exact -- the oxide is not necessarily exactly SiO2. It could be SiOx, with x slightly different from 2, again, because you're not growing it thermally. You're depositing it. So these differences between PECVD and LPCVD can affect properties like the film adhesion and the presence of pinholes. Plasma-enhanced films tend to have a little more tendency to have pinholes because, again, as was mentioned, they're going down really fast. Surface roughness can be different. A very important thing is the dielectric constant. Again, the presence of different impurities will be quite different for PECVD. The optical properties -- the index of refraction -- can be dramatically different because of stoichiometric differences or because of hydrogen. Hydrogen can induce vibrational modes, which can absorb light. So dielectrics put down by PECVD can have much different light absorption characteristics. And finally, the important thing is the etch rate. The etch rate of a PECVD-deposited film will be quite different from that of an LPCVD film, which will be quite different from that of a thermally grown film. So depending on how you put it down, you need to know what the etch rate of the oxide is in HF, or whatever you need to test for. OK, so that's sort of what I wanted to say at this point about chemical vapor deposition. If we go on to slide 13, I want to move on to another topic, which is called physical vapor deposition; that's discussed in your text. As the name implies, it's pretty obvious: PVD uses physical processes.
It's not just chemistry: you get films from the gas phase onto the wafer by physical means. The most common and the most classic physical process -- and maybe some of you are familiar with this -- is thermal evaporation. This is pretty much a totally physical process. There's not a whole lot of chemistry. You're not flowing in gases or doing any chemistry. Here's an example of a bell jar, a large vacuum chamber. You heat the source material. There's a source material here on a heater that is made molten, so it's liquefied. And it literally evaporates off. It has a finite vapor pressure. It evaporates into the vacuum. The vacuum pressure is low enough -- less than 10 to the minus 5 torr -- that the mean-free path is quite long. So it's literally like line-of-sight spray painting, if you will, where the paint is some atomic species hitting the surface of these wafers. And so this is a very common physical process. Not a whole lot of chemistry is involved in thermal evaporation. So for thin-film evaporation, what are some of the properties? Well, it's mostly what we call line-of-sight deposition because the pressure is low, so there aren't a whole lot of collisions. Let's say I'm evaporating aluminum. As this aluminum comes off, it has a certain flux in all directions. And pretty much, the aluminum atoms find their way straight to the wafer, depending, of course, on the pressure. If the pressure is low enough, they can get straight to the wafer with very few collisions. The deposition rate is really pretty much determined by the emitted flux and the geometry of the source. So it's much less complicated in some ways to calculate or to model than it is in CVD. In CVD -- well, we had that model of dep rate as a function of temperature, but it's very hard to get an absolute dep rate model. You always have to do fitting in CVD. This physical deposition is quite a bit simpler. This evaporation source -- what is it?
Well, usually it's a little cup called a crucible that's heated, that holds the hot, molten liquid. And there are two ways of modeling it. Sometimes, if it's small enough, depending on the geometry, it can be modeled as a point source, just like a point in space that's emitting aluminum atoms, or whatever you're trying to evaporate, in all directions. Sometimes the cup is large enough that you have to actually model it as a finite source, a small-area surface source. This is typically more applicable to evaporation systems. And I should mention, the heater here is just shown as this little box underneath it. There are different types of heaters. One is a very, very simple resistor. It's just a piece of tungsten wire that's wrapped up into a filament like you would have in a light bulb. And you pass a current through this wire, and it gets really hot. And that will heat the aluminum or whatever it is you're trying to evaporate. You can only get so hot with resistive heaters, though. So there's another way of heating up the material in the crucible, and that's to take an electron beam, shoot it into this solid, deposit energy from the electron beam, and liquefy the aluminum or whatever. So that's called e-beam evaporation, or electron beam evaporation. Two different methods of heating -- in either case, you're creating molten material that then flashes off and creates an atomic flux. And that flux hits the surface of the wafer and is deposited. So if we go on to slide 15, this is a picture I took directly out of your text. And if you sit down with it and spend some time, you'll realize this is primarily, really, just geometry. So there are two cases, case A on the left, case B on the right. Case A is for what we call a point source, and case B is for a small planar surface source of finite extent. So if we just look for a minute at the point source, this is where the source is located.
Imagine it's some perpendicular distance h away. There is a wafer holder on which you can put wafers. And let's say you have your wafer out here on this wafer holder at a distance l from the center. This makes a right triangle. So the lines h, l, and r together make a right triangle. For any of these deposition processes, what really counts is the flux perpendicular to the surface. So you usually need to project the flux onto the surface, and that's where you end up with a cosine theta kind of dependence of the deposition rate. So anyway, we have this point source. It has a certain flux that it's emitting, and that is the rate of evaporation, R evap, divided by the solid angle omega times r squared, where r is the distance. So if I take my point source and I just move it further away, the flux at that surface goes down like 1 over r squared. So you can adjust the flux by taking the wafers and moving them further from or closer to the point source, with the 1 over r squared. This omega is the solid angle over which the source emits. It's equal to 4 pi if it's emitting in all directions. And this deposition rate term here on the left is the equation for the velocity, the deposition rate, where n is the density of the material being deposited. So here's a very simple formula to calculate the deposition rate in your reactor. It's the evaporation rate -- which you'll have to calculate; it depends on the temperature and all that -- divided by the solid angle, divided by the density, over r squared. And there's a cosine theta, theta sub k, here. So theta k is this angle right here. So if I take a wafer from right here and I move it along this line, the deposition rate is going to go down according to this cosine function. A small planar surface source is slightly different. There are two angles involved here.
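The point-source formula described above can be written out directly: v = R_evap * cos(theta_k) / (omega * n * r^2), with omega = 4*pi for emission in all directions. A minimal sketch, where the evaporation rate, film density, and distances are assumed illustrative values rather than numbers from the lecture:

```python
import math

# Point-source evaporation dep rate, following the slide-15 geometry:
# v = R_evap * cos(theta_k) / (4*pi * n * r^2). Values are illustrative.

def dep_rate_point(R_evap, n_density, h, l):
    """R_evap: atoms/s emitted by the source; n_density: atoms/cm^3 in
    the film; h: perpendicular source-to-holder distance (cm);
    l: lateral offset of the wafer from directly above the source (cm)."""
    r2 = h * h + l * l                 # r^2 from the right triangle
    cos_theta = h / math.sqrt(r2)      # projection onto the surface normal
    return R_evap * cos_theta / (4 * math.pi * n_density * r2)

# Aluminum film (~6e22 atoms/cm^3), source 40 cm from the holder:
n_al = 6.0e22
v0 = dep_rate_point(1e19, n_al, 40.0, 0.0)   # directly above the source
v1 = dep_rate_point(1e19, n_al, 40.0, 40.0)  # offset l = h
print(v1 / v0)  # ~0.354: the rate falls like (1 + (l/h)**2)**-1.5
```

Both the 1/r^2 falloff and the cos(theta_k) projection are visible in the ratio: moving out to l = h costs you almost two thirds of the rate.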
There is this angle here, theta i. That's the angle between the normal of the planar source and the direction to the point of deposition. So this planar source is like my hand: it makes some angle theta i with the direction of evaporation. And there's also the cosine theta k. So the key point is that the outward flux from a point source p is independent of theta i -- a point source emits the same around a sphere in all directions. But the outward flux from a small-area source, like from my hand, depends on cosine theta i to the n, where n is going to be a variable you end up fitting. Let me give you an example to make that a little bit clearer on slide 16. This is a plot of exactly that structure. Here, I have a substrate, and I'm looking at the flux -- the surface deposition rate -- as a function of l over h. So again, l is the distance that you go out from being directly under the source, and h is the distance of the substrate holder from the source. And r, l, and h make this right triangle. So as a function of l over h, you see the dep rate can be calculated just according to these two equations. So when l equals h here, you can calculate the relative dep rate compared to the dep rate directly perpendicular from the source. It's down to something like 30% of what the dep rate would be right at this point. So you get an idea of what the uniformity will be along this surface. And there are different curves. The dashed line here is for the surface source, and the solid line is for the point source. So they have slightly different dependences on those parameters. So if we go on to slide 17, as we mentioned, the outward flux from a point source p is independent of this theta i, whereas from a small-area source, it goes like cosine theta i.
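The two slide-16 curves follow from the geometry alone. Normalizing to the rate directly under the source: a point source falls like cos(theta)/r^2, which works out to (1 + (l/h)^2)^(-3/2), while a small planar surface source picks up an extra cos(theta_i) factor, giving (1 + (l/h)^2)^(-2). A short sketch of those two expressions:

```python
# Relative dep rate vs. l/h for the two source models on slide 16,
# normalized to the rate directly under the source (l = 0).

def rel_rate_point(l_over_h):
    # cos(theta)/r^2 with cos(theta) = h/r, r^2 = h^2 + l^2
    return (1 + l_over_h**2) ** -1.5

def rel_rate_surface(l_over_h):
    # extra cos(theta_i) factor for an ideal (n = 1) planar source
    return (1 + l_over_h**2) ** -2

for x in (0.0, 0.5, 1.0):
    print(x, round(rel_rate_point(x), 3), round(rel_rate_surface(x), 3))
# At l = h: point source ~0.354, surface source 0.25, i.e. roughly the
# "down to ~30%" figure quoted in the lecture.
```

This is why wafer placement matters so much: across a flat holder, the film at l = h is only a quarter to a third as thick as at the center.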
And in fact, I think it becomes more intuitive if you just look at this diagram on slide 17. Look at diagram A. I have a point source right at this point. And I'm assuming that source is isotropic -- it's the same in all directions -- emission from a point source. So each line, the length of a line, is meant to represent the flux. So the flux is the same in all directions. So this is a spherically uniform, isotropic source. That would be like a point source. If you look at B here, this is what they call ideal cosine emission from a small planar surface source. So imagine right here at this point in space I have a planar surface source. The flux is largest straight above because that's the longest arrow. And the flux then goes down. As you go from here to here, it goes down like cosine theta, where n equals 1. And also, in C, there is nonideal, more anisotropic emission from a different type of small planar source. This is cosine theta to the n, in general, where n is some number. And so you can see what this means: you're getting much higher flux going straight above the source than you are going at an angle with respect to the source. So it really depends on how the source is set up. Why do we care about this? Well, we care because, depending on the type of source, we have to figure out how to place the wafers in the evaporator so you get the most uniform thickness across any given wafer or from wafer to wafer. So there are two common ways to place wafers in evaporators. If you have a point source, something that behaves like an ideal point source, you typically put it right in the center of a sphere. So you would put the evaporation source of platinum or aluminum right here in the center. Then you get a uniform flux in all directions, and you put the wafers along the circumference facing that point source. So you use a spherical wafer holder.
If you have a small surface source, you usually put the source on the inside of the sphere, right here. And you put the wafers, again, on the surface of the sphere. So putting it here tends to compensate for the cosine theta to the n, because look what happens. This is my source right here. I have the highest flux, according to this diagram, straight in front of me. That wafer is the furthest away. So I have a high flux, but it's the furthest distance. That 1 over r squared dependence will tend to compensate for the higher flux. And so you get the same dep rate on this wafer as you do on this wafer. This wafer has a lower flux, right? It's going down by cosine theta. But on the other hand, it's also closer. So the 1 over r squared dependence will cancel that out. So this is typically where you want to put a surface source in an evaporator. OK, a little bit more on evaporation, slide 18. This is an interesting plot. This shows the vapor pressure in torr as a function of temperature for a number of different elements. Remember, going back to my flux equation or my deposition rate on slide 15 for a minute, the flux was directly proportional to the evaporation rate over something times r squared. So I need to know what the evaporation rate is in order to calculate the dep rate on the wafer. So again, going back to slide 18, I have a formula here that lets you calculate the evaporation rate, depending on this vapor pressure, p sub e, at the evaporation temperature. So the evaporation rate is naturally proportional to the vapor pressure at the evaporation temperature. So what you can see is you just need to know, depending on the element -- whether you're trying to evaporate indium, say -- well, indium you can do at pretty low temperatures. Look at its vapor pressure. It skyrockets above about 550 centigrade.
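The evaporation-rate formula referred to above is, in standard kinetic-theory form, the Langmuir flux: the rate per unit source area is proportional to the vapor pressure, Phi = P_e / sqrt(2*pi*m*k*T). A hedged sketch -- the specific pressure, mass, and temperature below are assumed illustrative inputs, not values from the lecture's chart:

```python
import math

# Langmuir evaporation flux: Phi = P_e / sqrt(2*pi*m*k*T), in
# molecules per m^2 per s. Note the linear dependence on vapor
# pressure, which is what the slide-18 formula expresses.

K_B = 1.380649e-23  # Boltzmann constant, J/K
AMU = 1.66054e-27   # atomic mass unit, kg

def evaporation_flux(p_vapor_pa, mass_amu, temp_k):
    m = mass_amu * AMU
    return p_vapor_pa / math.sqrt(2 * math.pi * m * K_B * temp_k)

# Illustrative case: a source whose vapor pressure is about 1.33 Pa
# (1e-2 torr) at its operating point, evaporating aluminum (27 amu)
# at 1500 K:
flux = evaporation_flux(1.33, 27.0, 1500.0)
print(f"{flux:.2e} molecules/m^2/s")
```

Since the flux scales linearly with P_e, and P_e climbs exponentially with temperature, a modest temperature increase buys orders of magnitude in dep rate, which is exactly what the slide-18 curves show.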
If you're trying to evaporate some refractory metal, for instance molybdenum or tungsten, good luck with a resistance heater. You'd have to use an electron beam. You have to go to very high temperatures to get any kind of reasonable vapor pressure. But as long as you can design a heater, an electron beam or a resistance heater, then you can evaporate it. That's the name of the game with evaporation. You can evaporate just about any element. So in a thin-films lab, you can do just about anything. The dep rate of some elements is going to be very slow because the evaporation rate is low. Their vapor pressure may be low. One important point, though, is that it's very difficult to evaporate alloys or compounds. As you can see, depending on the materials -- between aluminum and nickel or chromium and nickel or whatever -- at a given temperature, their vapor pressures vary by orders of magnitude. And so if you put a little bit of one element and a little bit of another element in your crucible, they're going to come off at very different rates. So trying to evaporate a compound is tricky. You probably need two crucibles with their temperatures controlled electronically very carefully. So that gets a little tricky. Probably the most important reason why evaporation is not used in fabrication, in semiconductor manufacturing, anymore is that the step coverage is very poor. It's pretty much line-of-sight. So if you're trying to fill a trench, for example, in A here, and let's say you want to deposit the material uniformly all along the trench, because of shadowing effects, as this aluminum or whatever comes down, it's pretty much just going to deposit right here. It's not going to hit the sidewalls. So if you need conformal deposition, you're not going to be able to get it by evaporation. You're going to need to use sputtering.
There are other reasons it's rarely used, not just the line-of-sight. The evaporator itself is very hot. When you have a hot metal body like that, any impurities on the metal body can end up being put onto your wafer. So it's a little bit tricky to control impurities. And the electron beam itself, when it strikes things -- remember, if you have a highly energetic electron striking something, you can create x-rays. You can create photons. That's how x-rays are actually created. So there is inadvertent creation of energetic things, like x-rays, which can then hit the wafer and interact with it, potentially causing oxide damage and things like that. So evaporation is used in MEMS or maybe in research these days. But you don't find it too much in CMOS manufacturing. What you do find in CMOS manufacturing fabs today is sputter deposition. So we're going to spend more time talking about this in the PVD area. What is sputter dep? Well, this uses a plasma. It's different from plasma-enhanced CVD. It's not so much a chemical process. You use a plasma, but this time, you're physically using the plasma to hit a target and sputter off, or dislodge, atoms. And they will then find their way onto the wafer to form the film. That's what sputtering is all about. Sputtering uses higher pressures than evaporation. Evaporation, remember, I said you need to have 10 to the minus 5, probably 10 to the minus 6, torr. That way you can evaporate and the atom just doesn't hit anything on its way to your wafer. Here we're talking about higher pressures, 1 to 100 millitorr. So that's 10 to the minus 3 to 10 to the minus 1 torr, something in that range. Sputtering is typically better at depositing an alloy because the sputter rates vary a little bit with the material, with the element, but not so much. This is an example on slide 19, a very elementary DC sputter deposition. I have some sputtering gas that I put in the inlet.
And argon is very commonly used. And why do we use argon? It's inert, right? It's a noble gas. It's big. It's heavy. Given enough kinetic energy, it can sputter something off your target, your electrode, whatever material you're trying to get off. So argon is commonly used. You create this argon glow discharge, this plasma. The argon is positive, so it's accelerated here towards this electrode. It knocks off the aluminum or whatever, which finds its way onto the wafer. So this is a common DC sputtering system, which you would use to deposit metals. So let's look a little more carefully on slide 20 at what the plasma consists of. Again, now I've turned the plasma on its side. Here, the negative electrode has a voltage v sub c; it's the cathode, as it's called. The cathode is the target. So this material would be aluminum or platinum or whatever it is you're trying to sputter. On the other side, you have the anode, which in a DC system would be grounded. And you put your wafers on the anode. The plasma itself is pretty much electrically neutral overall. So it has equal numbers of positive argon ions -- the argon ends up being positively ionized -- and electrons, as well as some neutral argon atoms. And if you take a plasma physics class, it'll explain this in more detail. But for now, you can take it as a given that if you plot the voltage in the plasma as a function of distance from the cathode, most of the voltage drop from here to here -- the plasma has a positive bias here -- most of it occurs over the cathode sheath. So most of it occurs right here, near the cathode. So that means that the argon can be accelerated in this voltage drop going from the center of the plasma. Here, it'll be accelerated towards the cathode, towards the aluminum target or whatever it is. And then it'll gain enough kinetic energy that it can knock off the aluminum.
So on slide 21, these positive argon ions are accelerated by this voltage drop across the cathode sheath. And they hit the target, and they sputter off, say, aluminum. These aluminum atoms then travel through the plasma, and they deposit on the wafers that are sitting on the anode. So what rate-limits you here is the sputtering rate, and it depends on the sputtering yield, which we call Y: the number of atoms of aluminum, or the number of atoms of the target, that come off per incident argon ion. That's the definition of the sputter yield. So if you have a very high sputter yield, for every argon ion that comes in you'll get an aluminum off. That would be a high sputter yield. Looking at slide 22, the sputter yield Y is a function of the energy. So it'll be dependent on the bias in the plasma, right, the voltage drop. And it'll also be dependent on the mass of the ion -- argon is very commonly used; it's very massive -- and on the target material. It'll also be a function of the incident angle of the argon coming in. But the sputter yield doesn't vary that much, not nearly as much as the vapor pressure. Vapor pressure varies by 5 or 10 orders of magnitude among the elements. That's not the case with sputter yield. So sputtering is pretty good for depositing alloys. This is just a schematic on slide 22 of what's going on. Let's say I have an aluminum target here, and here's my wafer on the surface. And this is the cathode sheath, the dark space. The argon ions that are in the plasma are accelerated towards this negative electrode. They have enough energy that they can knock an aluminum off. It finds its way down to the wafer, where it may move around a little bit if the wafer is hot. But it pretty much will stick and deposit. There are other species, of course, around. There are electrons in the glow discharge.
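The sputter-yield definition translates directly into a removal rate: Y atoms come off per incident ion, and the ion arrival rate is just the ion current divided by the electron charge. A minimal sketch with an illustrative current and yield (these numbers are assumptions, not values from the lecture):

```python
# Rough target-removal rate from the sputter yield Y, defined as
# target atoms sputtered off per incident argon ion.

Q_E = 1.602e-19  # C, charge carried by each singly ionized argon

def atoms_per_second(ion_current_a, yield_y):
    """Target atoms removed per second = Y * (ion current / e)."""
    return yield_y * ion_current_a / Q_E

# 1 A of argon ion current onto a target with Y ~ 1 (a typical order
# of magnitude for metals at a few hundred eV):
print(f"{atoms_per_second(1.0, 1.0):.2e} atoms/s")  # ~6.2e18 atoms/s
```

Because Y varies only modestly from element to element (unlike vapor pressure), the components of an alloy target come off at comparable rates, which is the point the lecture makes about alloy deposition.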
And depending on what's going on, there may be some other impurities as well, depending on the cleanliness of the vacuum. We'll go to slide 23. This just gives you a schematic, which is supposed to point out that a sputtering target itself is generally quite large. If you go to buy a target for aluminum, it can be big, a couple of feet on a side. It will be much larger than the wafer. And because it's so large compared to the wafer, it gives you a wide range of arrival angles, in contrast to a point source. So on the left, you can imagine if I had a point source, the arrival angle here is somewhat limited. But if I have a source that's really much bigger than the wafer, I get a range of arrival angles. And at any given point, I can get pretty good uniformity and reasonably low shadowing effects. When we talked about thermal evaporation, if you have a step that you're evaporating into, or a hole, you can get shadowing effects. Sputtering doesn't suffer nearly as much from those kinds of shadowing effects. It's not really considered line-of-sight. OK, so let's look at slide 24. I just copied this page from the book A User's Guide to Vacuum Technology by O'Hanlon. I just wanted to remind people of an important concept when you're talking about these vacuum processes, like evaporation and sputtering: the concept of a mean-free path -- just to remind you what this is if you haven't seen it before. This mean-free path, lambda, of a particle, from kinetic gas theory, is directly proportional to the temperature. And it's inversely proportional to the pressure. So for example, in a typical sputtering system, the pressure, let's say, is maybe 5 millitorr. So this gives us a mean-free path on the order of a centimeter. So you can think of these argon atoms or ions, or the aluminum, moving around in that 5-millitorr system having a collision about every centimeter.
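The centimeter-scale mean free path quoted above can be checked from the kinetic-theory formula, lambda = k*T / (sqrt(2) * pi * d^2 * P), which has exactly the T-proportional, 1/P dependence the lecture notes. The molecular diameter used below is an assumed value of roughly the right size for argon:

```python
import math

# Kinetic-theory mean free path: lambda = k*T / (sqrt(2)*pi*d^2*P).
# Proportional to T, inversely proportional to pressure P.

K_B = 1.380649e-23   # Boltzmann constant, J/K
TORR_TO_PA = 133.322

def mean_free_path_cm(pressure_torr, temp_k=300.0, diam_m=3.7e-10):
    """diam_m is an assumed effective molecular diameter (~argon)."""
    p_pa = pressure_torr * TORR_TO_PA
    lam_m = K_B * temp_k / (math.sqrt(2) * math.pi * diam_m**2 * p_pa)
    return lam_m * 100.0

print(round(mean_free_path_cm(5e-3), 2))  # ~1 cm at 5 mtorr (sputtering)
print(f"{mean_free_path_cm(1e-6):.0f}")   # thousands of cm at 1e-6 torr
```

At sputtering pressures an atom collides every centimeter or so; at evaporation pressures (10^-6 torr) the mean free path is tens of meters, far longer than the chamber, which is why evaporation is line-of-sight and sputtering is not.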
So that's a lot less than the physical path length, because a sputtering system might be this big. You have your target up here, your wafers down here. It has to go a lot of centimeters. So that atom sort of jiggles around, and it has a lot of collisions before it finally comes to rest on the surface. Contrast that to evaporation, which could be just as long or a longer distance. But evaporation is line-of-sight. Bingo -- it goes right from here to here. I'm at 10 to the minus 6 torr there, with no collisions. So this mean-free path is also what determines the conformality of the deposition in sputtering and how it's different from evaporation. Looking at slide number 25 here, I'm contrasting two different cases. On the left, case A is for isotropic flux arrival. Now, I have my surface of a wafer, and I've got a flux of things coming in from the sputtering -- not from evaporation -- coming in with a certain angular distribution in the sputtering tool. And so if n equals 1, I'll get a cosine theta arrival-angle distribution depending on the angle of this source with respect to the normal. If I have an anisotropic arrival flux, n will go up. n can be much greater than 1, maybe 10 or something like that. And if n is very large, it means most of the flux is coming straight down. Not much of it is coming from wide angles. So typically, for sputtering, we also use a cosine theta to the n. And it's the normal component of the flux -- the component in the normal direction -- that strikes the surface and determines the dep rate. So the size and the type of source, the geometry of the overall sputtering system, the pressure and, therefore, the collisions in the gas phase -- all of those are important in determining the arrival-angle distribution, in determining what this flux looks like as a function of theta. So that says a little bit about DC sputtering.
Now, you might say, well, that's fine if the target electrode is a conductor, like platinum or aluminum or whatever. What if I want to sputter a dielectric material? It doesn't conduct DC current. You're going to have some problems. Well, what people do then is use an RF power source, because you can transmit power through insulators in an AC sense by using RF. So it looks somewhat similar to a DC system, except now you have an RF generator and a matching network. And you put your target or your electrode up here. And you notice-- we'll talk about why this is-- it's a smaller area than the electrode that holds the wafers. Again, you still bring in argon, and it's accelerated towards this target. And you can sputter dielectrics. SiO2 can be sputtered in this way. Slide 27 shows an example of the steady state voltage distribution in an RF sputtering system. And so it's a plot-- on the y-axis here is voltage, as a function of distance. And here on the left is meant to be the target. This could be SiO2 if you're sputtering that. And on the right is the electrode where you put the wafer. And there are two different plots here. Well, first of all, because of the slower mobility of the ions-- the ions are more massive than the electrons-- the plasma always tends to bias positively with respect to the two electrodes. So the plasma tends to be positively biased. The question is, what does the voltage drop look like as a function of the areas of these two electrodes? So it turns out, when the areas of the electrodes are not equal, the field has to be higher at the smaller electrode. So let's say, if this is the smaller electrode, the field is higher because there's a higher current density. So to maintain overall current continuity, it turns out that this voltage drop here, V1, will be related to V2 inversely through the areas. So if I have equal area electrodes, I have the solid line. So there's sort of a drop.
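The inverse relation between sheath voltage drop and electrode area can be sketched numerically. Note the exponent here is an assumption I am making explicit: the classic theoretical scaling is a fourth power of the area ratio, while measured systems usually show a weaker dependence, so it is left as a parameter.

```python
def sheath_voltage_ratio(area_target, area_wafer, exponent=4):
    """V_target / V_wafer for unequal RF electrode areas.

    The classic theoretical scaling gives exponent 4; real systems are
    usually closer to 1-2, so the exponent is left adjustable.
    """
    return (area_wafer / area_target) ** exponent

# Illustrative: target electrode half the area of the wafer electrode.
print(sheath_voltage_ratio(1.0, 2.0, exponent=4))    # theoretical scaling
print(sheath_voltage_ratio(1.0, 2.0, exponent=1.5))  # a more experiment-like exponent
```

Either way, the smaller electrode takes the larger share of the drop, which is exactly why the target is made smaller than the wafer electrode.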
The voltage drop on the left and the right is about the same. If I make the electrode where the wafer sits much larger-- by making the target electrode smaller, in other words-- there's a larger voltage drop near the target. And so all the sputtering tends to occur near the target, not at your wafer. So just by imbalancing the size of the electrode where you put the wafer and the electrode where you do the sputtering, you can cause most of the voltage drop to be on the side with the smaller electrode. And that way you ensure that you get sputtering, not of your wafer, but sputtering of the target material. And going on to slide 28, the other thing you can do is have a separate RF bias, which allows you to control the voltage drop on this side. So you can do a cleaning step. At the very beginning of sputtering, you often want to sputter a little bit off the wafer surface-- let's say you have a little bit of deposition on it. Or you can also do a bias sputter deposition. So you can actually have the wafer be at some finite bias, and this ends up giving you more conformal deposition, because you can get a more highly directional sputtering flux. So there are a couple of different methods in sputtering where you can control the arrival angle with bias sputter deposition. OK, so the final technique that I want to talk about today for PVD is this ionized sputter deposition, or high-density plasma sputtering. So far, let's just make it clear, what we've been talking about is the argon has been ionized. It's been accelerated towards the target. But the atoms coming off are pretty much neutral-- the aluminum coming off, or the platinum, or whatever. But in some systems, you can arrange it so that the depositing atoms themselves are ionized, not just the sputtering material. And so what you do is you place-- in addition to the target and the substrate that are biased with some kind of RF, you can put in an RF coil.
So now we have two RF sources. And you can put an RF coil around the plasma, and that's going to induce collisions in the plasma, which is going to tend to ionize the aluminum. So the aluminum comes off neutral, but you can ionize it with this inductively coupled antenna, essentially. So now I have a different situation. I'm using the argon ions to be accelerated towards the target, get the aluminum off. And then I ionize the aluminum as it comes down. Why would I want to do that? Well, if I have ions coming down now, and if I bias the substrate, I can get a much narrower arrival angle distribution, because they're being accelerated towards the substrate. Before, they were just sort of coming down, with collisions every mean free path, and the aluminum was just hitting the surface. But now you can actually accelerate them towards the substrate. Maybe you can use this to fill a deep contact hole. Here's an example. On slide 29, you have a deep hole. And look at the-- schematically, on the right, in B, the arrival angle distribution is very narrow. They're pretty much all coming straight down, compared to here, where they're coming in at different directions. So in this high-density plasma or inductively coupled sputtering-- people sometimes call this ICP sputtering-- you have another knob that you can turn, literally. So what are some common PVD films, and how are they put down? That's listed on this chart I took from your text, here on slide 30. Aluminum is put down in a number of different ways, but sputter deposition is really the most common. The standard deposition of aluminum is to hold the wafers near to room temperature, pretty close to room temperature. There are some special sputter deposition systems where we'll put it down at about 400 to 500 degrees C-- in other words, hot aluminum. And this is if you need to reflow the aluminum for better step coverage. You don't see aluminum deposited by chemical vapor deposition. That's extremely rare.
It's usually sputtered. Another very common material you'll find in fabs is Ti or Ti-tungsten. This is also sputter deposited. Again, CVD can be very difficult, so people will use sputtering. Tungsten-- tungsten is used to fill plugs. It's often put down by low-pressure CVD using WF6, tungsten hexafluoride. Ti-silicide can be put down in a couple of different ways. It's usually sputtered-- the titanium is usually sputtered initially, and then it's reacted. And we'll talk a little bit more in one of the last couple of lectures on how silicides are formed. Ti-nitride is also sputtered, but usually in a nitrogen plasma. Instead of using an inert gas like argon, they use nitrogen, which then can not only knock the titanium off, but then react with it and go down as Ti-nitride. So those are some common films you'll find in the fabs. So let me just summarize. LPCVD uses low pressures. And why do we do that? We want to increase the diffusion rate through the boundary layer so you can get more uniform deposition across the wafer, even when the wafers are very closely packed. So the nice thing about LPCVD tubes is you don't have to worry about how the gas flows. You just have to get a very uniform temperature. Polysilicon is the most commonly deposited material by LPCVD. It's the gate in CMOS. We've barely scratched the surface of poly. I think it's a reasonably good topic if you want to do it for one of your reports. You need to optimize the temperature and the pressure and the exact conditions to get the right grain structure, so you can get diffusion of dopants properly and good electrical activation in the grains. Plasma enhancement can be used to enhance the deposition rate at low temperatures, especially for dielectrics. We talked about the Novellus Concept One, which is an example. There are many, many examples of PECVD systems that are used in fabs.
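The low-pressure rationale for LPCVD can be put in rough numbers: in kinetic gas theory the gas-phase diffusivity scales roughly as one over pressure, so a tube at a fraction of a torr moves reactant through the boundary layer far faster than an atmospheric reactor. The specific 0.5 torr operating point below is my own illustrative assumption, not a lecture value.

```python
def diffusivity_boost(P_ref_torr, P_torr):
    """Rough kinetic-theory scaling: binary gas diffusivity goes as 1/pressure."""
    return P_ref_torr / P_torr

# Illustrative: dropping from atmosphere (760 torr) to an assumed ~0.5 torr LPCVD tube.
boost = diffusivity_boost(760.0, 0.5)
print(f"gas-phase diffusivity up by roughly {boost:.0f}x at LPCVD pressure")
```

With transport that much faster, the deposition becomes reaction-limited rather than flow-limited, which is why only temperature uniformity matters in the tube.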
Typically, you use plasmas when you have a limited thermal budget, or if you happen to be doing MEMS, or you have something where you really need thick layers and doing it thermally just is not practical. The most commonly used PVD technique is sputtering, either DC for metals or RF for dielectrics. In contrast to evaporation, you can get very good step coverage and filling of high aspect ratio holes. So far, we've talked very qualitatively. What I want to go through next time is some actual models that people have developed, methodologies for modeling these processes, both CVD and particularly sputtering for PVD. So that's all I have. If you came in late, let me remind you, homework number five went out. And there is a problem on the back, so don't miss part 3B. And if you haven't signed up, make sure-- today is the last day to sign up for a topic. So I've got the clipboard there in the back.
MIT 6.774 Physics of Microfabrication: Front End Processing, Fall 2004. Lecture 21: Etching, Poly Gate Etching, Stringers, Modeling of Etching. JUD HOYT: We have some administrative things today. First of all, there are two handouts-- 34 and 35. We've got the lecture notes. And I'm handing out the solutions to homework number 4. And homework number 4 is going back, so you can pick it up. It's in the orange folder in the back there. And some reminders or announcements-- these will appear on the website today. So just to remind you, your written reports are due here on Thursday, December 9, in class. I have come up with a schedule for the presenters, for the people doing oral presentations; that's shown here. We have two lecture periods that we've dedicated to oral presentations-- the last two lectures of this term. So on December 7, we have four speakers. And these are the four speakers I have chosen for December 7. And the time of your actual presentation should be about 16 minutes. And then beyond that, we'll leave three minutes for a questions and answers period. So it's a short Q&A. And that's for each talk. So I'll make each talk about 20 minutes. And we'll have four people in one period. So that should work out fine. And the same thing on December 9. These are the four speakers that I have chosen for December 9, and I've pretty much approved all the topics. If you're not sure your topic is approved, go on the website, and I have an indication of whether it's been approved or changed. For those of you who are doing the oral presentations, you're graded on two things-- same as the written, but slightly different. You're graded on the presentation itself, which includes your oral presentation as well as your slides. So you need to provide handouts, the same way that I give out handouts every lecture-- not as long, because you're only speaking for 16 minutes.
But if you need any help making Xerox copies-- if you don't have access to a copier-- you don't need to pay to make those Xerox copies. My assistant, if you send her the PDF file, or preferably the PowerPoint file, or if you bring her a hard copy of your presentation, she'll make enough Xerox copies for the rest of the class. You need to give her some time, though, to do that. So make sure you get it to her the day before, so she has time. And her hours are 8:00 AM to 2:00 PM. So make sure you get it to her the day before if you want help making Xerox copies. And the second component of your grade, of course, is on the technical content. So one part of your grade is on the presentation. And that includes the quality of your graphics and the quality of your handouts. And the second part is on the actual technical content, the depth, and seeing that you read an adequate number of references and that you understand the material and that you can present it. Same thing for written reports-- the only difference is you won't be graded on your speaking, because you won't be speaking. But for the written report, you'll be graded both on technical content as well as on the presentation. How nicely did you lay out the report? How well is it sectioned? Are the references clear and easy to read? Do you have good graphs? Or are the graphs impossible to read? Things like that-- so both aspects will be part of it. And all of this information will be posted on the web. But this gives you an idea of when you need to prepare yourself. So yeah, this is where we are right now, on November 23. The Thanksgiving holiday is coming up on Thursday. We won't have a class Thursday. Today, we're finishing up chapter 10 on etching. And then we'll move on. We have two more formal lectures that I'll be giving. There'll be one on silicides and contacts and novel gate materials, and then one on growth and processing of strained silicon and silicon germanium on December 2.
And then you folks will start speaking on the 7th and finish up on the 9th. And your written reports, again, are due in class on that final Thursday, December 9. OK. Anybody have any questions or anything about the final report or the schedule? Send me an email if you have any questions about it. OK. So let's go on to today's notes. You notice we're going through each chapter a little more rapidly now as we finish up. This will be the last formal chapter of reading that you'll have in this class. There'll be a little bit on silicides you'll have to read. And then for the last lecture, there is nothing written up on it, so you just come to lecture. So hopefully, you're finishing up reading chapter 10 on etching. What I'm showing here on the first slide of handout number 34 is kind of a summary of what we talked about last time, as far as plasma etching and the various mechanisms. There are certain characteristics which we said were important-- for example, the pressure of the chamber in which you're etching and the energy of the species. And these end up affecting things like selectivity and isotropy. At the top of the list of the types of etching are the most physical etching processes, like sputter etching or ion beam milling. It's pretty much purely physical. There's not a lot of chemical reaction going on. If you go to the very bottom type of etching, which is wet chemical etching, it's entirely wet. There is no physical bombardment. And everything in between has a certain component of both chemical processes and physical processes going on. And these arrows point in the direction in which the quantity increases. So anisotropy-- meaning the etch is more vertical than it is horizontal-- increases as you go from wet chemical etching, which is very isotropic, up to sputter etching, which is very anisotropic. So anisotropy increases in this direction. Selectivity is very poor, or very low, for sputter etching. You can sputter anything.
And the sputter rates are not very different among the elements. Selectivity increases going down. So reactive ion etching is less selective, plasma etching is more selective, and wet chemical etching tends to be the most selective. Selectivity increases in this direction. In general, the energy of the process increases going from the bottom to the top. So there's more energetics involved in sputter etching. The particles can come down with hundreds of electron volts and knock off atoms on the surface. So these processes tend to be more energetic. And the pressure of the system tends to be lowest here, and then increases as you go from this type, sputter etching, to, say, plasma etching. So that's just what we talked about last time by way of mechanisms. Today, I wanted to give some specific examples, and I want to emphasize gate etching issues. And I'll talk a little bit about modeling, the mechanics of how people model etch processes. On slide 2, this is a chart-- or it's half of a chart-- that I took directly from chapter 10 in your text. And what it shows is a series of common materials, some typical etchants that are used, and then some comments about each of these etchants. For example, a very common way of etching polysilicon in a generic way is to use SF6 or CF4. What are the characteristics of that type of etch? Well, it tends to be isotropic or near isotropic, and not very good selectivity over SiO2. So once you hit the SiO2, you continue etching. So this is a general-purpose etch you might use if you're trying to strip some poly off the back of the wafer or something-- probably not something you might use if you need anisotropy. A more anisotropic etch, but also not very selective to SiO2, would be a mixture of CF4 and hydrogen, or CHF3, some of these freon types. You can mix CF4 with oxygen. I think we showed last time an example of the chemistry of this.
It's generally reasonably isotropic, but it's more selective to SiO2. Again, a lot of times you're etching poly on top of oxide or some dielectric. And so the selectivity of the polysilicon etch to the SiO2 is very important. The most commonly used etchant gases today for polysilicon-- and we'll talk about this when we talk about gate etching-- are HBr and chlorine, or some combination of HBr, chlorine, and oxygen. These tend to be very anisotropic, so you can get very well-defined sidewalls. And most of them are reasonably selective compared to SiO2. With HBr and O2 combinations, you can get up to 100 to 1 selectivity, depending on the etcher. If you want to etch single crystal silicon, you use exactly the same etchants you used for polycrystalline silicon. In general, there's not a big difference between the two. If you need to etch oxide, you can use SF6. Interestingly, any of the fluorinated species will help you with etching oxide. Notice that SF6 was also listed as etching poly. So that means it's not a very selective etch. And anisotropic etches include a variety of the freons, like C2F6 and C3F8. They are somewhat selective over silicon. When do we etch oxide? In a MOS process, for example, we're often etching oxide when we're opening up contact holes. So we have an interlayer dielectric of some sort, maybe 1,000 or 2,000 angstroms thick, and we need to get through it with reasonably good control of the sidewalls. We also need to stop on the silicon. You don't want to etch away your source/drains. So anisotropy and selectivity are also very important. Silicon nitride-- there are some isotropic etches, like CF4 and O2. An anisotropic one is CF4 and H2-- this one is selective over silicon, so it'll stop on silicon, but not on SiO2. And sometimes you have stacks consisting of oxide and nitride, and you need to etch one of the two selectively. So it gets a little tricky. You need to pick the etchant and the reactor carefully.
What are some other materials? So those are the more common ones. Maybe a little less common, but things that you do have to etch often in plasmas, are, for example, aluminum wiring. Or aluminum contacts need to be etched. It's very common to use chlorinated species, like chlorine or CHCl3, or a mixture of chlorine and nitrogen. Chlorine itself can be somewhat isotropic, and some of these other gases are more anisotropic. Tungsten can be etched in SF6 or chlorine. So you notice a lot of the metals-- these are all metals-- are attacked by chlorine, which isn't too unusual if you know a common wet metal etchant is hydrochloric acid, HCl. So again, for etching titanium or Ti-nitride, these are very common etchants-- chlorine or mixtures of chlorine with hydrocarbons. Photoresist. An important thing-- you don't need to etch or pattern it so much as you typically want to strip it. So the etching step would be an oxygen plasma, which reacts with the hydrocarbons and removes the photoresist. And the nice thing about an O2 plasma is it's very selective. It doesn't etch too much of anything-- it doesn't etch nitride, doesn't etch oxide. It will etch a little bit of silicon by oxidation. That is, it'll oxidize a little bit of the surface silicon. And then if you happen to do an HF dip, that'll get removed. So it has a small amount of silicon removal by an oxidation process. So again, on page three, that's the second half of the chart I took directly from chapter 10 in your textbook, of some common etchants. Now, going here to slide 4, I wanted to go through some etching challenges in MOS front-end processing from the last five years or so. And I'll talk about a couple of them. One is, for example, we've talked about in the past DRAM, or dynamic random access memory. There are different ways of storing the charge in that memory. You need to make a capacitor of some sort on which you store charge. And that is your bit.
Whether there's charge there or not tells you whether you have a one or a zero. For this capacitor you might think, well, let's just put a capacitor on the substrate. We all know how to make a MOS capacitor. But the problem is, to make very high density DRAMs, you can't just make a capacitor flat on the silicon substrate. You can, but then it takes up too much area. So people have all different types of approaches. People are thinking of ways of stacking the capacitor on top of the transistor. That's one method, the stacked capacitor topology. Another method is to take the capacitor and bury it down in the substrate somewhere, so you can get a smaller area for your DRAM cell. You can pack more memory on a given chip, and then you'll have much higher performance. So this is an example where you might need to etch down into a silicon substrate to make a trench prior to making a capacitor. This is an example where you have etched a trench that looks like this, lined it with oxide, and then filled it up with polysilicon to make one of the plates of the electrode. The other plate of the electrode could be the surrounding silicon material. And I took this out of an IBM Journal of Research and Development, just off the web, if you want to look at that in more detail. So we need to make very high aspect ratio etches for these trenches. And we also need to care about the damage that we do to the silicon, because it's going to be part of a capacitor. So we want it to have smooth surfaces and such, so that we don't break down the oxide and we get good electrical performance of the capacitor. In fact, let's take a look here at page five. I took this from the 2003 ITRS, table 73. This is just for DRAM trench capacitors. What are some of the technology requirements we're looking at in the long term here, going from 2010 all the way out to 2018 or 2020? What do people expect they might need? Well, there are some indications here of characteristics.
They have things like the trench structure, how it will be shaped-- maybe more like an upside-down bottle-- the circumference of the trench, the trench surface roughening factor, the effective oxide thickness that people will be using in that trench capacitor. Just to give you an idea, in nanometers here, we're talking about several nanometers, maybe two nanometers, scaling even thinner and thinner. The trench depth in microns-- this would be for a 35-femtofarad capacitor, just as an example. How deep will we have to etch? Reasonably deep-- somewhere between 5 and 7 microns is what people are talking about doing. Aspect ratio. So remember, we talked about aspect ratio as being an important attribute of our capabilities for etching. Well, here it's the trench depth divided by its width. We're talking about very high aspect ratios that are increasing, and they're also quite large. In 2010, which is a ways off, we're reaching aspect ratios on the order of 100, and beyond that, going even higher. And you notice beyond 100 the cells are all marked red, indicating that in that range, people don't really know yet how to etch such a deep aspect ratio, 100 to 1. So new technology is going to have to be developed in order to do that. They have some indications here also of what the upper electrode might be made of. It seems to be metal. For the dielectric material, people are talking about using high-k dielectrics. Again, you're trying to store a fair amount of charge in a small space. You can do that with a higher dielectric constant. The bottom electrode could be a combination of silicon and metals as well. So that just gives you an idea of what kind of trenches one needs to etch. And this increase in the aspect ratio is pretty amazing in terms of what technology requirements are going to be imposed on advanced etching techniques. OK, so that's one example.
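The numbers in this ITRS discussion hang together in a simple geometric estimate. The sketch below treats the trench as a cylinder whose sidewall forms the capacitor; the 0.3 micron circumference and the cylindrical shape are my own illustrative assumptions, while the 35 fF and 2 nm EOT values echo the ones quoted above.

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def trench_depth_um(C_farad, eot_m, circumference_um):
    """Trench depth whose sidewall area yields capacitance C at a given EOT.

    EOT is oxide-equivalent thickness, so k = 3.9 is used regardless of the
    actual (possibly high-k) dielectric.
    """
    area_m2 = C_farad * eot_m / (EPS0 * 3.9)          # C = eps0*k*A/t -> A = C*t/(eps0*k)
    return area_m2 / (circumference_um * 1e-6) * 1e6  # depth = A / circumference, in microns

# Assumed illustrative geometry: 35 fF cell, 2 nm EOT, ~0.3 um trench circumference.
depth_um = trench_depth_um(35e-15, 2e-9, 0.3)
width_um = 0.3 / math.pi  # cylinder diameter from the circumference
print(f"depth ~ {depth_um:.1f} um, aspect ratio ~ {depth_um / width_um:.0f}")
```

This lands right in the 5-to-7-micron depth range quoted above, and for a trench only a tenth of a micron wide the aspect ratio is of order 100, which is why those cells turn red in the roadmap.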
If I move on to slide 6, we not only have to etch silicon in the single crystal material, but we also have to etch silicon when we're etching the gate. And in fact, etching the gate is becoming more and more complicated. This is an example on slide 6 of etching what we call a compound gate, or a gate stack that already consists of several materials. And I apologize, this is not a very good SEM. I took this off the website, again, from that IBM article. On the very top, there is photoresist that has been patterned. Then there is a layer of tungsten silicide that has been etched. Then there's a layer below it of polycrystalline silicon that's been etched down to, say, the oxide. It's down to the gate oxide. And when presented with something like this, you have to think about using two different types of etches. After you pattern your photoresist and you develop it and you're ready to etch, underneath you have two different types of layers. You need to etch a silicide layer in this example, which was preexisting as a uniform layer. It has to be etched. And then you need to stop on the silicon, and then switch gases to etch the polysilicon underneath. Now, this gets a little tricky. So you have two different etch gases. That doesn't sound too bad. You have an etch gas to etch the tungsten silicide and another series of gases you use to etch the polysilicon. One problem you can come across, for example: tungsten silicide may not be super pure. It may have some oxide incorporated into it. And that oxide won't etch if the etchant is highly selective to oxide. So if you're using a single etch gas that's very selective to oxide-- say, what you might want to do for the poly-- that same etch gas might not do very well in the tungsten silicide.
And what happens when you use an etching gas that is very selective to oxide and you don't do a breakthrough? If you have any native oxide on top of that structure and you don't break through it, you'll end up with something called RIE grass. And it's called grass because it looks like grass in the SEM. On the next slide, slide 7, I'll show you a picture of grass. This is an example of an extreme case of RIE grass. You have a flat substrate down to which you have etched. And sometimes it's called micromasking. What happened was, an etchant was used to etch this gate, whatever this material is-- an etchant that had a very high selectivity to oxide. And there were apparently small oxide inclusions at the surface of the wafer when the etch was started that didn't get fully removed. And so those small oxide areas will act like little micro masks. They were not intentionally put there, but there's just a little bit of oxide here and there. So a breakthrough etch wasn't used. As a result, if the etch gas you're using is highly selective to oxide, it'll never etch it. And instead, you end up with little posts, which look like little blades of grass. The solution to this-- well, you could say, don't use an etch that's so selective to oxide. But if you want to stop on SiO2, you wouldn't want to do that. The solution is that the very first part of your etch step should be a breakthrough etch, where you make sure you break through all the native oxide, and then you continue on down. So it gets a little tricky when you start etching multiple materials, or when you have to think about the fact that there may be materials on the wafer you don't know about. Native silicon dioxide may be there, even if you didn't put it there. It grows there automatically on the wafer. So you have to remember to get it off the surface before you use a highly selective etch.
On slide 8, what I've done is list some of what I call etching challenges in MOS front-end processing, as they apply to gate etching and logic devices in particular. On the left-hand side is a TEM micrograph. I think I showed this at the very beginning and several times throughout the course, just to give you an idea of what a device five years ago or so might have looked like. This is a fairly large technology by today's standards-- I think it's a quarter-micron channel-length technology or something like that. Here is a polycrystalline silicon gate. There's a thin layer underneath, which is the gate oxide; it separates the gate from the channel. And on top there is a layer which is silicide. Now, that was put there afterwards, in this case. So the gate was first etched. And then metal was put down and reacted with the gate and the source/drain in a silicide, or salicide, process. We'll talk about that next lecture, when we talk about silicides. So you didn't have to etch through this. This was reacted later on. But what do we care about? Well, in this picture, first we need to control the gate length, or the critical dimension, sometimes called the CD. The gate length is defined from this point to this point. So this width, or length, so to speak, of the polysilicon needs to be controlled exactly on all areas of the chip and all areas of the wafer. The sidewall profile. What do I mean by that? Well, this profile, going from the very top down here, has to be controlled. You usually want it vertical. You don't usually want it splayed out like this. You don't want it retrograded like this. You don't want it straight with a little slit at the bottom. You want it perfectly vertical, and stop. That sounds kind of trivial, but it's not easy, for various reasons. Selectivity. As we're etching this polysilicon, we don't want to etch away the resist that would be masking it on top, or whatever mask material we're using, if we're using a hard mask of SiO2.
So selectivity of the etch to the masking material is very important. That's one type of selectivity. Then there's selectivity to the gate dielectric. Down here is a dielectric material, probably a high-k, probably 1 to 2 nanometers thick. We need to stop instantly, immediately, as soon as we hit it, automatically, and not go through it, and not over-etch too much. So this is a real issue, getting chemical selectivity at the same time as getting a good sidewall profile. I just mentioned the fact that native oxide is often present on the polysilicon. We have to use a breakthrough etch as the initial step of etching; otherwise we'll have all kinds of grass problems. There can be an impact of different dopants. If the gate is doped prior to etching, which it can be if you put down in situ doped polysilicon, the etch rate for boron or heavily phosphorus-doped poly may be different-- they may have different etch characteristics than undoped poly. So you need to take that into account. And we also mentioned at the very beginning of the last lecture, gate dielectric damage due to the antenna effect. What is that? Well, you have ions coming down, being collected by the gate. Where do they go? They don't go through the field oxide, because that's too thick. They get shuttled down the gate, and the current tends to go through the gate oxide. So you have to be careful that you don't put too much current through your gate oxide and damage it. So those are some basic issues when it comes to poly gate etching. Here's an example on slide 9 of a gate etcher. It's a little bit older technology now, but at the time it was designed by a company called Lam to be dedicated to gate etching only. It's called the TCP 9400. It's a relatively low pressure, low bias, high-density plasma tool, and it has a mechanical clamp chuck. Remember, we talked about the importance of controlling the temperature of the etch. To cool the wafer, it's got helium on the backside.
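The demand to "stop instantly" on a 1-to-2-nanometer dielectric can be made concrete with a selectivity estimate: whatever over-etch the poly step needs, the dielectric loses that amount divided by the selectivity. The specific numbers below are illustrative assumptions, not lecture values.

```python
def dielectric_loss_nm(overetch_nm, selectivity):
    """Gate-dielectric thickness consumed during over-etch.

    selectivity is the poly-to-dielectric etch rate ratio (e.g. 100 for 100:1).
    """
    return overetch_nm / selectivity

# Illustrative: a 30% over-etch of a 100 nm poly gate, with 100:1 selectivity.
loss = dielectric_loss_nm(0.30 * 100.0, 100.0)
print(f"dielectric consumed ~ {loss:.2f} nm")
```

Even at 100:1 selectivity the over-etch eats about 0.3 nm, a sizable bite out of a 1-to-2 nm gate dielectric, which is why the over-etch chemistry needs even higher selectivity than the main etch.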
This is an example of a tool that's for front-end processing only, so no metals are allowed to go into it. And you can get a good etch rate for what's called the main etch. It's pretty fast. In fact, higher than you probably need in most etches today. The height of the poly today is probably only 1,000 angstroms. And this goes at 320 nanometers per minute. So quite efficient etching for poly gates. And it can give you the kind of profile shape and selectivity that you need. Going to slide 10, this is just an example of a recipe if we were to use that Lam TCP 9400 gate etcher. I took this off the Stanford website. This is an actual recipe that they have published, showing the individual steps that you would need to use, just to give you an idea of some of the complexity of this etch process. Now, each step-- they're numbered here 1 through 9-- corresponds to a different step in the time sequence for the etch. So in the very first step, labeled number 1, you're adjusting the gap, which is a parameter inside the tool. The pressure is being stabilized at 13 millitorr automatically by the computer. And you're setting up to flow some of your initial gases. You're flowing C2F6. And once you've stabilized after 20 seconds, you go on to the next step. OK, so the next step is step number 2. You're clamping the wafer mechanically to the chuck to get good control of the temperature. And you have certain characteristics in that step. So that lasts about 20 seconds. Step 3 is the first etch. Remember we mentioned that when you're using a gas that's highly selective, you need to first start out with a breakthrough etch to get through the native oxide. Here you're etching for about ten seconds. You etch just based on time. You're flowing C2F6-- I guess that's the only thing you're flowing. And you've got RF power on the top and bottom, 250 and 40 watts. And your pressure is 13 millitorr.
So this should break through any native oxide in about ten seconds. And then the next step is 4. You need to stabilize the gas flows before the main etch. The main etch is the etch that's going to etch the polysilicon. So you turn on different gases. You turn off the C2F6. You turn on the chlorine-- a mixture of chlorine and oxygen. That's what we call the main etch; it's going to do most of the work in getting you through the polycrystalline silicon. So you're just stabilizing at this point. You notice there's no power on the top and bottom electrodes. So the plasma is not on. You're not lighting a plasma. You're just stabilizing the flows through the chamber. And that takes a while. In this particular case, the stabilization step is about 20 seconds. It just gives you time to get the old gases purged out from the prior etch, which was the breakthrough etch, and get the new ones purged in, and establish your pressure of 10 millitorr. Then you actually do the main etch. So here you turn the plasma on by turning the power onto the top and bottom electrodes. So the plasma is now lit, and you're actually etching here for a time of, in this particular example, 60 seconds. So you etch by time. It's a 60-second etch. And then you move to the last stabilization. Now you're stabilizing to get ready for what's called the over etch. So there's a breakthrough etch to get through the native oxide. There's a main etch, which does most of the work. Now, the over etch is the amount of time you need, remember, where you've pretty much hit the bottom, but you may have stringers or other things. So the over etch typically has to have even greater selectivity than the main etch. And so in the over etch in this example, you notice they removed the chlorine. The chlorine flow, which was 40 sccm, has been removed. And you just have HBr and O2. This gives you a much higher selectivity.
Maybe not as good of an etch rate, but it does have very high selectivity, so that you will not break through that oxide. And then you do the over etch in this step, where you light the plasma for 30 seconds using just a mixture of HBr and O2. The plasma goes off. After those 30 seconds, you unclamp the wafer. And then you end and you're ready to unload. So it's all computer controlled. In this example of just etching polysilicon, you have three types of etches-- a breakthrough, a main, and an over etch-- and stabilization steps in between. And this is an example. And then you come out and you look at your etch rate and your sidewall profile and you see how it looks. Most etches, as we said, end up leaving a sidewall polymer that's either hydrocarbon based, or it might be glass based, like an oxide-based material. That is what's responsible-- remember, it's the inhibitor layer that keeps you from etching sideways. That's what gives you such perfectly vertical sidewalls. You must have that inhibitor layer. The question is, what do you do with that inhibitor layer? Now that it's on your wafer, every single gate has some goop, whatever it might be-- sidewall junk, polymer, whatever you want to call it. You need to do a post-etch cleanup process after that, because that inhibitor layer is there. If you go into a TEM, or maybe even an SEM, you can see it. What you'll see is your polysilicon gate, and you'll see some material maybe that looks like this, some maybe amorphous-looking material. Very thin. It might only be 20 angstroms, something like that. The thickness may vary from top to bottom. Whatever. But that's the inhibitor layer. That is what gave you that sidewall. But typically it's removed by either an HF dip or HF combined with piranha. If it's some kind of hydrocarbon, you need the extra piranha. These are typically not things you want to leave on there.
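To make the sequencing of that recipe concrete, here is a rough sketch of it as data, along with the thickness-over-rate arithmetic for the main etch. The step times and gases are the ones quoted in the lecture; the structure, the field names, and the 20-second duration I put on the final stabilization step (not stated in the lecture) are my own assumptions, not the tool's actual recipe format.

```python
# Sketch of the gate-etch sequence discussed above (not the tool's real format).
recipe = [
    {"step": "stabilize",    "gases": ["C2F6"],      "time_s": 20, "plasma": False},
    {"step": "clamp wafer",  "gases": ["C2F6"],      "time_s": 20, "plasma": False},
    {"step": "breakthrough", "gases": ["C2F6"],      "time_s": 10, "plasma": True},
    {"step": "stabilize",    "gases": ["Cl2", "O2"], "time_s": 20, "plasma": False},
    {"step": "main etch",    "gases": ["Cl2", "O2"], "time_s": 60, "plasma": True},
    {"step": "stabilize",    "gases": ["HBr", "O2"], "time_s": 20, "plasma": False},  # duration assumed
    {"step": "over etch",    "gases": ["HBr", "O2"], "time_s": 30, "plasma": True},
]

def main_etch_time_s(thickness_nm, rate_nm_per_min):
    """Time needed to clear a film of the given thickness at the given rate."""
    return thickness_nm / rate_nm_per_min * 60.0

total = sum(s["time_s"] for s in recipe)  # 180 s of processing in this sketch
t_clear = main_etch_time_s(100, 320)      # ~18.75 s to clear 1,000 angstroms of poly
```

Summing the step times gives a feel for how much of the run is stabilization overhead rather than actual etching, and why a 320 nm/min rate is faster than a 1,000-angstrom poly layer really requires.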
These residues can introduce contaminants in subsequent steps. So you have to do a cleanup process. OK, so that's just an example of using a dedicated gate etcher, how you would etch a polysilicon gate. Let's just talk about a couple of the characteristics people often see. And if you do gate etching, this is what you will be up against. Here's an example of what's called notching at the bottom of the polysilicon gate. It's probably a little bit hard to see. This is not the greatest micrograph. These are SEM cross-sections. The line width from here to here is nominally 0.2 microns, from this point to that point. So it's a polysilicon line. And this is using sort of a standard flow rate. And here's where they modified the flow rate, making it a little bit higher. And if you notice, the gate looks sort of straight, and then it goes down, and it kind of goes in at the bottom like a little notch. This is considered to be undesirable in many cases. You typically want the gate sidewall to go straight down. Why would you not want the gate to be notched? Well, there are some examples of what it might do. Let's say your gate did go in like this. So you have it vertical. And then it kind of-- I'm going to make it worse. I'll make the notch look really bad. So it might look like that down here, and something like that. Well, the problem with notching like that is when you go to do your ion implant-- I'm going to implant the source and drain. Depending on what angle and how bad the notching is, your ion implant might end up looking like this. So there might be a little region on either side where the source and drain extensions don't actually get implanted and where there is no gate. Now you say, well, I'll just do a high enough temperature anneal, and then this arsenic will move. And it'll probably overlap. You'll probably be OK. But what if it doesn't? You can't count on "probably."
And if that notch is not well controlled, you can end up with all kinds of high series resistance in that region. So people typically don't want a notch, because it ends up affecting what happens to the ion implant, because remember, this poly that you're forming, this gate, is the mask for your next step, which is going to be implanting the source and drain extensions. So this is an example of how they got rid of the notch. This notch is not as bad or doesn't look as dramatic as the one I drew on the board. They got rid of it by just changing a few things. They changed the pressure and the flow rate. And they're able to get a more vertical type of sidewall. So if you have to do polysilicon gates, you'll find yourself twiddling a lot of the knobs-- the pressures, the flow rates, and things like that-- until you get a shape that you're happy with. Here on slide 12 is another article I took out of the literature. It's an older paper. It's from Applied Physics Letters. And the point they're making here has to do with something called gas residence time in the plasma etcher. This residence time concept is also important if you're growing thin films. So it also is important in reactors and things. What it is, is the amount of time, here in seconds or fractions of a second, that a volume of gas effectively occupies the plasma etcher itself. And you can see that the residence time increases-- it goes up with each one of these lines as they go to higher and higher pressure-- and it decreases as you go to lower pressures. And as I increase the flow rate, it goes down. So you're flowing through faster. Now, why would we care about the gas residence time? Well, we do care about it, because the amount of time the fresh gas is resident has to do with the amount of time the gas is available to actually do etching. And if you have a very long residence time, byproducts can develop. After all, these reactions are chemical reactions.
They evolve byproducts. Byproducts can then sit and develop there for a period of fractions of a second. And those byproducts can then end up affecting the etch, or the sidewall polymer formation, or things like that. So residence time ends up being an important parameter. The way people control it is, well, they either up the total gas flow rate, or they change the pressure in some way in the tool. Here's just an example from that same paper of the silicon etch rate as a function of flow rate. So here is etch rate in nanometers per minute. And this is a particular silicon etch. What they're using is chlorine, very common. And this was the so-called present etching technique they had, using a high gas flow rate, versus a conventional etching technique. So when you change the residence time, that is going to change the etch rate. In this case, a shorter gas residence time increases the etch rate. That is, increasing the flow means the amount of time any volume of gas is in the reactor is shorter. And why does that increase the etch rate? Because it increases the availability of neutral etchant species. So you have the unreacted etchant coming in faster. And it also reduces the concentration of the reaction byproducts, because again, these are all chemical reactions. If you want a chemical reaction to go faster, you put in the reactants faster, and you remove the byproducts rapidly, as well. So not only does it affect etch rate, but changing things like pressure also affects the shape of what you get. Here's an example on slide 14 from that same article of three different pressures. Here we're at 10 millitorr, 1 millitorr, and 0.1 millitorr, trying to define a polysilicon gate. So we're going down here in each case by an order of magnitude in pressure. Using lower pressure at high flows, you notice we get more anisotropic etching and less notching, at least when using this particular chlorine polysilicon etch.
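The residence-time idea can be put into numbers with a simple ideal-gas estimate: the time a gas parcel spends in the chamber is the amount of gas present at the process pressure, divided by the feed rate (which is quoted at standard pressure, since flows are in sccm). This is my own back-of-the-envelope sketch, with a made-up chamber volume; it is not a formula taken from the paper.

```python
def residence_time_s(pressure_mtorr, volume_liters, flow_sccm):
    """Approximate gas residence time in a plasma chamber.

    Gas resident at the process pressure, divided by gas fed per second
    (sccm is referenced to 760 torr). Assumes ideal gas and that the
    chamber gas is at the sccm reference temperature -- a simplification.
    """
    p_torr = pressure_mtorr / 1000.0
    v_cm3 = volume_liters * 1000.0
    feed_cm3_per_s = flow_sccm / 60.0  # standard cm^3 per second
    return (p_torr * v_cm3) / (760.0 * feed_cm3_per_s)

# A hypothetical 10-liter chamber at 10 millitorr with 100 sccm total flow:
tau = residence_time_s(10, 10, 100)  # ~0.08 s, i.e. a fraction of a second
```

This reproduces the trends in the lecture: raising the pressure raises the residence time, and raising the flow rate lowers it.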
So here at 10 millitorr, you get a fairly isotropic etch. It's not totally isotropic, but you notice it's etching sideways as well as vertically. And how can I tell? Because this is my photoresist, and it's undercut on the left and right side. So the line is a lot narrower than what had been designed. There's a little bit better anisotropy at a lower pressure, at 1 millitorr, but now we've developed a little notch at the bottom, which, again, is not considered to be good. Finally, we have the lowest residence time, or the lowest pressure, here at 0.1 millitorr. This paper demonstrates very vertical sidewalls and no undercutting. So this would be considered a very good quality etch. You may be able to make one device with this etch-- I'm not saying this etch won't work-- but from a manufacturing point of view, how do I control my line width, my critical dimension? It's not well controlled at all. So this would not be considered a manufacturable type of process. There's another consideration. So we've talked about why we care about the etch rate and the shape of the sidewall, because that affects how we're going to implant and do other things. A huge consideration anytime you are etching features-- this is not just true of a polysilicon gate, but I'm giving the example for a polysilicon gate, so gate etch is the example-- is the topography. And so what's the problem? Well, highly anisotropic etching-- etching that only goes down, only vertical, that doesn't etch sideways-- is required to achieve good control, as we just said, over the gate length and the shape. But unless you have a long over etch that you do at the end of the step, you can get stringers left behind. And let's just show an example here. So on the left, I'm showing where we've etched through polysilicon. There was no topography on the original part of the wafer. It was just flat. So I have photoresist and I've etched polysilicon anisotropically.
Everything looks very good. Now, what's happened on the right? Well, on the right-hand side, I had some initial topography. So I had a step that looked like this. As a result of having that step, the polysilicon overlayer, which is shown by this sort of stippled region, went over everything. So the polysilicon in this region is actually thicker than it is in the flat portion. So by the time I cleared the gate and just hit this oxide right here, I have left behind a stringer, because I haven't etched long enough, because the polysilicon was thicker in this region. And remember, I'm only etching vertically. It's not going sideways. So this stringer is considered to be undesirable, because it's now an extra gate electrode, or it's an extra thing you didn't want in your circuit. How do I get rid of it? Well, you say, just keep on etching. Go do an over etch. So even though I've finished etching the poly gate and I'm done in this region, I'm not done over here. So you keep it in and you keep banging on the top of that stringer until it finally goes down to zero. So you need a very long over etch whenever you have surface topography, in order to get rid of stringers. What's the disadvantage of a long over etch? Well, you might break through your oxide, because you're still beating on that oxide. So unless you have really good selectivity, as you're beating the stringer down and getting rid of the stringer, you could be destroying the oxide. What's the other bad thing? Well, your over etch may not be perfectly anisotropic. It may be etching sideways a little bit on the gate. So unless it's perfectly anisotropic, you'll have that problem. So there's always a balance with over etch. This is why you will find people say, if you want to get the finest patterns, your wafers should be completely flat. You should have no topography, because if you have a completely flat wafer, you never have to worry about stringers. You etch down.
You stop on the layer below. And you stop. Period. Your over etch in that case is only dictated by the non-uniformity in the film across the wafer, as we talked about before regarding etch rate uniformity. Now we have another requirement on over etch. I have to not only take into account non-uniformity across the wafer, but if I have topography and I have an anisotropic etch, I have to get rid of stringers. So you will see that with the topography on modern devices, as they get smaller and smaller, everything has to be smoother and smoother. That's why people went away from LOCOS. LOCOS creates this big hump on the wafer. People don't like that. We do shallow trench, which creates a perfectly flat wafer. That way I can pattern a very, very fine line and I don't have to worry about stringers. Slide 16 is from a different text. It has another example of stringer formation. And maybe this cross-section might be easier for you to see. So here's an example where I have a field oxide. The field oxide has a certain step height to it, shown here. And of course, when you deposit the polysilicon, it gets deposited everywhere on the wafer. So this cross-hatched region is the poly. Then I want to pattern it. And this is the photoresist after it's been developed and patterned. OK. Now, in step B here, I put it into the etcher and I use the photoresist as a mask to etch the poly. So this looks good. I have very nice sidewalls and I stop on the oxide. But what happens? Well, in this region right here, look at the thickness of this gate material where it goes over the step. The thickness from this point at the top to the very bottom is almost twice what the thickness is in the flat region. So as a result, once I've cleared the flats, I still have this little notch of material that's unetched that's sitting there, because my etch only etches vertically. If the etch had etched sideways, then you wouldn't have to worry about that notch.
So an isotropic etch doesn't tend to give you stringers. Stringer formation is not such a big deal there. But if it's anisotropic, it is. So what you do is you continue in step C with an over etch. You continue to beat down on this, and you introduce an extra 50% of an over etch to get rid of the stringers. If you leave the stringers in place, you'll have shorts on your devices, because then you have this poly going all the way around the field. You can short the source drain to the gate, and you can get all kinds of problems. Now, this is drawn as a cartoon, by an illustrator, so it's very easy. It doesn't show any detrimental effect of the over etch. In practice, over etching to remove stringers almost always has a little detrimental effect. And so the name of the game is trying to control your topography so you minimize your over etch, so you don't destroy the structure by over etching too much. So let's look at slide 17. This is an example from that paper, that article from IBM, on the effects of topography on gate conductor yield with respect to shorting. As I mentioned, if you end up with stringers, the gate can be shorted to the source and drain. So here's an example. They etched here 21 wafers or so-- 20 wafers-- and measured the percent yield. So if you have a high yield, it means you don't have gate to source drain shorts. If you have a low yield, it means you have shorts between the gate and the source drain. These wafers had no topography. So they etched the gates on perfectly flat wafers. And the yield was very high. It was like 90% for these five wafers. Now they introduced-- in the field oxide or somewhere-- a 50-nanometer step. You can see the yield goes down a little bit. Not too bad, though. So they have more shorts. And here, if you introduce a 100-nanometer step on this particular gate etch, you get a yield of only 30%. So 70% of the devices are shorted. So this means you're getting a lot of stringers forming.
So you can look for stringers visually in the microscope, with a TEM or an SEM, or you can look at them with special test structures where you look at your shorting and your yield. So this is an example where you would say, for this particular gate etch process, that the 100-nanometer step is too much. You need to reduce your step height and not produce such an abrupt step. Otherwise you need to increase your over etch time or do something. So stringers are really a big issue, and it's always a trade-off between their removal and messing up the structure. Here on slide 18, I've shown some ways we can calculate over-etch requirements based on the topography one might have during gate etching. So here's an example in this particular figure, where I'm showing you a field oxide. Remember we said the field oxide might have a certain height, which we're calling T sub FOX, the thickness of the field oxide. And it has a certain angle that it's been etched at. So it's not perfectly straight up and down. It has a certain angle, theta, that it makes with respect to the substrate. So that's your field oxide. And this black region on top is meant to represent the polysilicon. So remember, polysilicon is always deposited everywhere across the chip. So the black region is the thickness of the poly. Now, you notice that on the sloped portion, the vertical thickness of the poly from here to here is given by T poly divided by cosine theta. So it's thicker here than it is here, just by geometry. So the thicker portion of the poly here means that, in order to clear all the poly and avoid forming a stringer down here, you need to increase the etch time. And in fact, this formula here gives you, for this particular topography, an example calculation of the over etch in percent that you would need. And it's 1 over cosine theta, that quantity, minus 1, times 100%.
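That slide-18 formula is easy to check numerically. Here is a small sketch of it (the function name is mine):

```python
import math

def overetch_percent(sidewall_angle_deg):
    """Required over etch, in percent, to clear the thicker vertical extent
    of a conformal film on a slope at the given angle with respect to the
    substrate: (1/cos(theta) - 1) * 100%."""
    theta = math.radians(sidewall_angle_deg)
    return (1.0 / math.cos(theta) - 1.0) * 100.0

# A 45-degree field-oxide sidewall calls for roughly a 40% over etch:
extra = overetch_percent(45)  # ~41.4%
```

As the step is ramped to a shallower angle, the penalty shrinks quickly, which is exactly why people taper their topography when they can't eliminate it outright.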
So if you have a sidewall angle of 45 degrees-- this particular example says 45 degrees-- just because of this, we need about a 40% poly over etch time to avoid stringers. So what do people do? Well, either you have no topography-- you keep it very flat, you bury your field oxide, and that's called a shallow trench-- or, if you are going to have a little bit of topography, you ramp the angles and you make very shallow ramps. You don't have anything really abrupt. So depending on this angle, you have to adjust your over etch time. Otherwise you'll end up with stringers. Sometimes, by the way-- here on slide 19-- I've been calling these things stringers. "Stringer" is the derogatory term. A stringer usually means something that was left behind that you didn't want there. Sometimes a stringer is something we want. In fact, when we want it, we give it a positive name. We call it a sidewall spacer. So we often use anisotropic etches without an over etch to get a sidewall spacer intentionally. Here's an example of the intentional production of a stringer. We have a substrate. We put one film on. It could be polysilicon that we have now patterned with a very sharp step. We put a second film on, where we adjust the height-- or the thickness, I should say-- of the second film to be about, oh, just the right thickness so that after I've done a purely vertical or purely anisotropic etch, when I'm done, I have a sidewall spacer forming. So this material is left behind. The nice thing about it is that it's self-aligned. I didn't have to put another mask down and do photolithography to pattern this. All you have to do is have a step in your film, number one. You put another film down. You clear the whole thing with a vertical etch. And bingo. Right on the sidewall, self-aligned to it, is a spacer layer. In fact, that's exactly how spacer layers are formed in modern MOSFETs. The poly gate is etched vertically. A certain thickness of nitride or oxide is put down.
It's etched anisotropically, and you end up with a sidewall spacer. So stringers, so to speak, are not necessarily bad. Usually the word stringer means something left behind that you don't want. But if you do want it, you call it a sidewall spacer. And the width of the spacer, which is from this point here to that point there-- that's sort of the lateral dimension-- is a function of how long you etched and how much over etch you did, as well as the thickness of film two relative to the thickness of film one in this example. So you adjust all of those to create different width spacers. That's a very critical process in modern technology. Now, this slide-- actually, I apologize-- it's on page 20. The next three slides are not in your handout-- actually, this one is; it's the next two after that that are not. And if you need the up-to-date handouts, they're all posted on the web. So you can get the PDFs yourself. I wanted to list here on slide 20 a couple of new issues for gate etching and patterning that have come up in the last, say, three to five years, that you're going to hear more and more about. In the year 2000, this is what people were concerned about. They were concerned about etching poly, making it vertical, maintaining a critical dimension of, say, 0.18 nanometers-- 0.18 microns, rather-- and etching through only polysilicon, putting a silicide strap on top after the gate etch had been completed. So that was what I should call the more traditional or conventional technology. What are people doing these days? Well, people these days are not just doing this. For one thing, the gates are a lot narrower. Look at today's gates. Instead of doing something like 200 nanometers, we're down to 70 or 80 nanometers. So the dimensions have shrunk quite a bit. And people are doing what they call fully silicided gates.
So instead of reacting the polysilicon just at the very top of the gate, people are actually putting down enough metal, and reacting it at the right temperature, that the metal reacts all the way to the very bottom. In this case, on the bottom, it would be nickel silicide. This was formed by depositing nickel and reacting it at 450 degrees all the way to the bottom. So there's no polysilicon left, because these days, people don't want polysilicon gates. Polysilicon is a great material to etch. It's highly compatible chemically with oxide. But it has gate depletion problems. It doesn't have enough carriers in it. What's the highest you can dope polysilicon? Well, maybe 1 to 5 times 10 to the 20. So that limits your resistivity, and it gives you gate depletion. People want to use metals. What's the concentration of electrons in a metal? Well, 10 to the 22 or 10 to the 23. It's huge. It's two to three orders of magnitude higher. So we need more electrons or more holes-- more carriers-- in our gates these days. The way we do it is we take poly and convert it completely to silicide. This is called a fully silicided process. This is only in research. There are a lot of issues with doing this. Here's an example where they tried to do the siliciding with cobalt, in the upper SEM micrograph here. It didn't work too well. As you can see, the cobalt went down in, but it didn't make it to the bottom. In fact, what happened is the silicon got sucked up. Next time we'll talk about silicide reactions. The silicon was actually moving, and you ended up with a big void. So you had no gate in this example. This particular paper I took from a few months ago, at the VLSI conference in Hawaii, and it was showing that nickel silicide works much better for fully siliciding than does cobalt silicide, which leaves behind voids. So there's an example. Not only are we etching polysilicon gates these days; we may end up siliciding them completely.
And that's what people are considering. There's another type of process, which I'm picturing here on the lower left, called a damascene replacement gate process. And here's an example of a gate where I have titanium nitride. Well, it's got tungsten on the top-- that's the bright material. And this sort of gray material is a thin layer of titanium nitride. And the gate dielectric is hafnium dioxide, which is down here. This is on a silicon germanium type of high mobility transistor. That was also published about six months ago. It doesn't look anything like a polysilicon gate. So not only are we fully siliciding the gates, but sometimes we're actually replacing the polysilicon with another material, with a metal. And titanium nitride is a very popular one. So gate etching is evolving from not only having to etch poly and stop on dielectric; now we may have to etch metals. So lots of new technologies are being developed over the last two or three years, and will continue to be developed, to push the technology forward. These next two slides-- I apologize-- are not in your handouts, 21 and 22. But they are on the website, so you can download the PDF file. How did they make-- I just wanted to show you this lower left. How does one make a gate that looks like this, just to give you an idea? Well, this is the way you do it. Again, this particular example I took from IEDM of 2002. People use the ordinary process to make a polysilicon gate. So you put down poly. You etch it. You make your sidewall spacers. So that's just like what we've learned about in this course. And here is a very short channel MOSFET. Here's a very long channel MOSFET. Now, at the end of defining your MOSFETs, you put down this gray material, and you put it everywhere, very thick. So you put down a dielectric. And then you CMP it down. So you have dielectric that's been CMP'd and flattened. It's been flattened down just so the polishing stops right around this height.
Now what you do is the polysilicon is chemically etched and removed. So you etch it out with an etchant that digs out the poly. And if you want, you can even dig out the gate oxide and replace it with a high-k. So this is what's called a replacement gate process. So actually, you use poly: you etch your poly, you put down sidewall spacers, you go through the high-temperature anneal. Now, why would anyone want to do this? You put down poly, you pattern it, and then later on you're going to remove it. Sounds kind of crazy. Why didn't you just put the metal down to begin with? But does anybody have any ideas on why you might want to use poly and then stick the metal in at the very end? Well, there are a couple of reasons. One reason is we know how to etch poly perfectly. We've spent the last 20 years getting perfectly vertical sidewalls. So people know how to etch it. That's one reason. The second reason is poly can go up to very high temperatures. What do I have to do in this process? After you put the gate down, you have to implant the source drains, and you have to anneal them, typically at 1,000 degrees these days for 10 seconds, or 1,050. There aren't too many metals you can pop up to 1,050 and not either melt them-- most metals will melt-- or, if you don't melt them, they could undergo dramatic phase transformations. Some of them could diffuse into the wafer and cause problems with deep levels. So metals are not considered very good to have around at high temperatures. So this is a way of using poly as a dummy. We know exactly how to pattern it. The only problem with it is it just doesn't have enough carriers. Well, we'll put it in the process. We won't change the process till the very end. We'll etch it out. I don't know how manufacturable this is-- I don't believe anyone is manufacturing products with it. This was just published in research about two years ago. It is kind of a neat idea. So after you remove this and you open this hole, well, then you have an open hole.
You can put down your high-k, your hafnium dioxide. You can put down your titanium nitride by sputtering, and a tungsten cap, and then maybe some capping layer of dielectric, and CMP the whole thing down. So that's how they got, in that last picture, that replacement gate. And in fact, slide 22-- also not in your handout-- shows what it looks like in a TEM cross-section of a 50-nanometer MOSFET, where they removed the poly, dug it out, and put in this very dark black layer of hafnium dioxide. So that is your gate dielectric. The titanium nitride is next, a very thin layer. That is your gate material. They then have tungsten that they fill it up with. So they have the tungsten material, which is a little bit more environmentally friendly-- or, it's an easier material to work with-- to fill up the plug. So that's an example where the poly was completely removed. So new types of things have to be done in forming gates these days. OK, let me move on here. Now on slide 23, I'll move away from gate etching and talk a little bit, spend a few minutes, about modeling and simulation. There's a lot of similarity between the models that we introduced for deposition in the last couple of lectures, which were in chapter 9, and the etching models. And why is that? Well, both deposition and etching use incoming chemical or neutral species and ion fluxes. And they have a lot of very similar processes. The difference is, when you're depositing, the species come down, they move around and they stick, and they react in some way to deposit. When you're etching, they come down, they move around, they have a chemical reaction, but they actually remove material. So one can be considered mathematically the inverse of the other. And because in math you can just change the sign from positive to negative in a computer program, you can use very similar methodologies to model etching. So here's an example of things that might take place.
During etching you have ionic species coming down. You have things coming down and being sputtered away. And you may have things coming down and being emitted or desorbed. So a lot of these fluxes-- the whole flux equation-- are very much analogous between deposition and etching. Here on slide 24, just like in the case of deposition, the etch rate is proportional to the net flux arriving at a point. Remember we said the deposition was proportional to the net flux, the flux in minus the flux out? Well, the etch rate is often proportional to the net flux. So we have the same kind of situation. You have a wafer, which has a certain patterned surface. You form a virtual dashed plane above it, and you look at the angular distribution of the species arriving just above the surface on that dashed plane. If you have a chemical etching species-- that is, something that's neutral, that is not ionized-- you assume you have an isotropic arrival distribution. So you put in cosine theta to the n, where n equals 1, for a chemical species. If you have an ionic species, it can be accelerated towards the surface. It can arrive very much with a vertical orientation. And here, for preferential etching, we put in cosine theta to the n where n could vary between 10 and 100, or something like that. Remember, the higher the value of n, the more vertical the distribution. In the deposition case, remember we had the concept of a sticking coefficient? We use very much that same concept. When we're etching, we say an ionized species usually sticks where it lands and it's going to react and etch, while a reactive neutral species may have a low SC value. It might bounce around before it finally reacts and etches off the surface. As far as the physical component of etching, like sputtering, the sputtering yield has the same angular dependence that we used in the deposition case. So again, a very similar process is going on when we include sputtering.
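To make the arrival-distribution idea concrete, here is a minimal numeric sketch (my illustration, not from the lecture): a cos(theta)^n weighting, with n = 1 for a roughly isotropic neutral species and a large n for a directed ion flux. The function name and the sample angles are my own choices.

```python
import math

def arrival_flux(theta_deg, n):
    """Relative arrival flux at angle theta (degrees off vertical),
    modeled as cos(theta)**n. n = 1 approximates an isotropic neutral
    (chemical) species; n ~ 10-100 models a directional ion flux."""
    return math.cos(math.radians(theta_deg)) ** n

# A neutral species (n = 1) still arrives with half strength at 60 degrees
# off vertical, while a directed ion flux (n = 80) is essentially all
# vertical: its relative flux at 30 degrees is already ~1e-5.
for deg in (0, 30, 60):
    print(deg, arrival_flux(deg, 1), arrival_flux(deg, 80))
```

This is why a high-n ion flux etches vertically and leaves sidewalls nearly untouched: almost none of the ion flux arrives at grazing angles.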
So in slide 25, I'm showing what's called the linear etch model. There have been some very specific models developed. Here I just want to give a couple of general purpose etch models which can be applied broadly. The so-called linear model assumes that there are chemical and physical components that act independently. So they're not linked; they're independent of each other. And in this case, we have an etch rate that is given by just a sum of two terms: one term that has to do with the chemical flux, F sub C-- that's the chemical flux onto the surface at any point-- and another term that is proportional to F sub I, the ionic flux. In front of each one of these fluxes is a K factor, K sub I and K sub F. These are relative rate constants for the two components. Usually you use these as fitting parameters. If I have a large KI, I have a lot of ionic component, whereas if I have a large KF, there'll be more of a chemical component. The physical component can be due to purely physical sputtering, for example, like we talked about before. Or it actually can be some ion-enhanced mechanism in the regime where the chemical flux is not limiting the ion etching. So this is the linear etch model with independent chemical and physical components. Here's just an example of some simulations. Remember, SPEEDIE is a topography simulator that does both deposition and etching. Here on slide 26, I have some SPEEDIE simulations using the linear etch model. And we have A, B, and C. Each one of these contours, again, represents the shape of the etched surface at a given time. So we're looking at snapshots in time. Here I have an etch mask that looks like this. So this is my masking material on the surface. And I'm etching into the silicon, or into the film below. Here I have n equals 1 for my chemical flux angular distribution. So it's going to give you a fairly isotropic type of etch here.
Etching laterally, in this dimension, goes just about the same as you etch vertically. So that's an isotropic type of etch. Here's where you have n equals 80 for the ionic flux angular distribution, and you end up with very vertical etching. Basically everything's coming straight down, and there's not much lateral etching. So this one is all chemical etching-- the ion flux is zero-- and this one is all physical etching. And here's a 50/50: I have half chemical and half physical fluxes, both. And you can see it etches a little bit laterally and a certain amount vertically. So you can generate a variety of etch profiles using a relatively simple model with only maybe three or four parameters. And then you can fit this to what you actually see in your actual etches. The second type of model, which is quite different, shown here in slide 27, is called a saturation/adsorption type of model. Very often this is used when the chemical or neutral etching and the ion etching or physical etch components are coupled. So when one of these affects the other-- when you have a synergistic effect, which we know is often the case, especially when you have sidewall inhibitor layers or something-- you need to use this type of model. So here's an example. Let's say you need an ion flux to remove an inhibitor layer that is formed during chemical etching. So instead of a linear model where these two just add up linearly, these add sort of like resistors in parallel, so to speak, or capacitors in series if you're an electrical engineer. The etch rate is just given by the inverse-- 1 over-- the sum of 1 over KI FI plus 1 over SC FC. So there's a 1 over the chemical flux term and a 1 over the ionic flux term. If you plot this equation, what does it look like? Here's the overall etch rate as a function of-- now I'm plotting as a function of the ion flux times K sub I. So it's a function of that quantity. So it goes up. And the parameter here is the chemical flux term, SC times FC.
So if either flux is 0-- so if SC is 0, I get 0 etch rate, regardless of what my ion flux is. You need both the chemical and the ionic flux. So if either flux is zero, the overall rate is 0, because both of them are required to etch the material. And you end up with these sort of saturating type of curves. Here's one for a chemical flux of 0.5. As I increase the ion flux, you increase the etch rate until at some point it sort of saturates and it's no longer a function of ion flux. And it's limited, then, by the chemical process. So you have a series of saturating curves. It sort of looks like transistor characteristics, because you have two things controlling it, both the chemical flux and the ionic flux. So basically, this is that same plot I just showed. The etch rate tends to saturate when one component gets too large relative to the other. So you're always rate limited by the slower of the two series processes, either the chemical etch rate or the ionic etch rate. This is a very generalized approach. Again, it can be broadly applicable, and you can use it to fit a lot of different etch processes, again, with a relatively small number of parameters, maybe three to four parameters. On slide 29, I'm showing a SPEEDIE simulation, again, where we have equal chemical and ionic components. And you notice what you get is reasonably anisotropic etching. There's a little bit of lateral etching here down at the bottom. And why is that? Well, because in order to continue etching, you need an ion flux to remove the inhibitor layer, or whatever it is. And that ion flux tends to be very vertical because we have a large n value in this case. And so the sidewall inhibitor is not removed-- the inhibitor layer is not removed on the sidewalls; it's only removed on the flat surfaces. If you change the n number, which determines the cosine theta to the n dependence of the angular distribution, you'll start to see a little more lateral etching.
Then you'll get a little bit more lateral etching. So there's an example for equal chemical and ion components. OK, so I just want to summarize what we've done in chapter 10 on plasma etching. There are a lot of important issues. Some of the key ones we've covered are: the selectivity with respect to the layer below you, plus the selectivity with respect to the layer above you, the mask; the directionality of the etch-- how much do you etch vertically versus laterally?-- and the shape of the profile on the sidewall. There tend to be two components of etching. There's a chemical one, that is, reactive neutral species or free radicals. There's a physical component, which is just due to the ions being accelerated towards the surface and reacting or sputtering off components from the wafer. These can be completely independent, in which case we use the linear model, or they can act in series in a synergistic manner, in which case we would use the last model that we showed. Topography on the wafer can really increase the selectivity requirements. It makes the patterning of fine features and stopping on thin layers very difficult. Of course, lithography is impacted as well by the presence of topography, because of the depth of focus. But all modern processes seek to minimize topography. So people have moved away from field oxides that go up or that go down. We want field oxide that's completely flat with the surface, so you see shallow trench isolation. Special high-density plasma etchers have been developed that use chemicals like HBr, chlorine, and oxygen combined together, specifically for gate etching: to control the selectivity so you can stop on the gate dielectric, and to control the shape of the sidewall profile. At the same time, you minimize damage to the underlying structures and the underlying gate oxide. The models that I've talked about here for etching are fairly what I would call empirical.
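As a recap, the two general-purpose etch models from slides 25 and 27 can be written down in a few lines. This is my own sketch of the equations as stated in the lecture-- not what a topography simulator actually implements-- and all flux values and rate constants below are made-up fitting parameters.

```python
def linear_etch_rate(F_C, F_I, K_F, K_I):
    """Linear model: independent chemical and physical (ionic) components.
    K_F and K_I are empirical relative rate constants."""
    return K_F * F_C + K_I * F_I

def saturation_etch_rate(F_C, F_I, S_C, K_I):
    """Saturation/adsorption model: coupled components that add like
    resistors in parallel: 1/rate = 1/(K_I*F_I) + 1/(S_C*F_C).
    If either flux is zero the rate is zero, since both are required."""
    if F_C == 0 or F_I == 0:
        return 0.0
    return 1.0 / (1.0 / (K_I * F_I) + 1.0 / (S_C * F_C))

# Saturation behavior: with the chemical term fixed at S_C*F_C = 0.5,
# raising the ion flux drives the rate toward 0.5 but never past it --
# the slower of the two series processes always limits the rate.
for F_I in (0.5, 2.0, 100.0):
    print(F_I, saturation_etch_rate(F_C=0.5, F_I=F_I, S_C=1.0, K_I=1.0))
```

Note the contrast: in the linear model a huge ion flux keeps increasing the rate indefinitely, while in the saturation model the rate flattens out at the chemically limited value, which is exactly the family of saturating curves described for slide 27.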
If you want more predictive or more physically accurate models of plasma etching-- Professor Sarwin used to teach a class, I'm not sure he's presently teaching it, really on all the details of plasma etching. It's an entire field in and of itself; we've only been able to give it two lectures and one chapter of your textbook. So these models, while helpful-- they let people use topography simulators like SPEEDIE-- are not going to give you the same level of chemical understanding you would get from a full-blown model of the etching. Again, it's fairly empirical. We're fitting curves in SPEEDIE, and we're fitting the shape. But it gives you some idea of what's going on. If you want more details, there are other, more detailed classes you can take. OK, before we finish up, let me just remind people, if you came in late, that we have set the schedule. After today is Thanksgiving. That's a holiday; there's no class. Next week we'll talk about silicides and novel gate materials. After that we'll talk about strained silicon. And then you guys start talking. I've picked the first four speakers here for the reports. You have to speak for 16 minutes, and then there are three minutes for questions and answers, on the seventh and on the ninth. These are the people speaking on the ninth. If you're doing a written report, you must bring it to class on Thursday, December 9. That's the last day you can hand it in. And this list will be posted on the website. OK, thanks.